Friday, 25 March 2022

Lupine Publishers| The Need for Ethical Artificial Intelligence and the Question of Embedding Moral and Ethical Principles

Lupine Publishers| Journal of Robotics and Mechanical Engineering

Introduction

The issue of Facebook moderators made headlines in 2019. When confronted with horrifying images such as terrorist attacks streamed online, moderators are required to react extremely quickly in order to remove scenes violating human dignity from the social network as rapidly as possible. In [1], my colleagues and I explored whether it is possible to model human values such as virtue. Such modeling could eventually make it possible to automate all or part of the moderators' arduous and thankless work. In a first part, this article deals with the need for reflection on ethical Artificial Intelligence (AI). After providing a definition of AI, I discuss how ethical rules could be implemented in a system using AI. In a second part, I ask whether it is possible to embed moral and ethical principles. Using a utility function can help agents make ethical decisions, but it is also important to be aware of the limitations of such a sometimes simplistic approach. Ethics must be appreciated in the social and technical context in which it is first developed and then implemented [2].

The Need for Ethical Artificial Intelligence

Artificial intelligence

A definition

AI can be described by a universal triad: the data brought by the environment, the operations defined as the logic that mimics human behavior, and finally a control phase aimed at retroacting on its previous actions. Its definition rests on two complementary views: one focused on behavior and how the system acts, especially like a human, and the other emphasizing the reasoning processes and how it reproduces human skills [3]. Both points of view, however, insist on the rational behavior that an AI must have. Moreover, it is important to pay attention to which kind of AI we are dealing with: strong or weak AI [4]. Weak AI, also known as narrow AI, is shaped by behaviors answering observable and specific tasks that may be represented by a decision tree. Strong AI, or artificial general intelligence, can by contrast reproduce human-like mental states. For this type of AI, decision-making abilities and ethical behavior are issues that need to be addressed. Finally, a strong AI could find the solution closest to a given objective and learn from external feedback. The latter is the kind that poses unprecedented problems researchers are only starting to study. In fact, a system embedding strong AI is able to learn without human assistance or the injection of additional data, since the AI algorithm generates its own knowledge. The external observer or user of such agents will therefore no longer know what the AI knows, what it is capable of doing, or the decisions it is going to take. Hence the need to establish an ethical framework that defines an area of action and prevents the system from taking decisions contrary to ethics.

How to implement ethical rules?

There are two approaches to implementing ethical rules. The first, the top-down approach, is based on ethical rule-abiding machines [5,6]. The strategy is to respect unconditionally ethical principles related to morality, such as "Do not kill". However, without understanding the potential consequences of the empirical decisions taken, an AI system makes numerous approximations, which is a significant drawback. Rules can conflict, even for the three laws of robotics [7], and added rules may lead to unintended consequences [8]. It should be noted that even inaction can count as injuring humans. Moreover, the complexity of interactions between human priorities may lead to inappropriate interpersonal comparisons across the various added laws [9]. The second method, called bottom-up, focuses on case studies in order to learn general concepts. Case studies make it possible for a strong AI to autonomously learn wrong and biased principles and even generalize them by applying them to new situations it encounters. These approaches are considered dangerous. In this learning process, basic ethical concepts are acquired through a comprehensive assessment of the environment and of its compliance with previous knowledge, without any top-down procedure, and the result of this learning is taken into account in future decision making [10,11]. Eventually, when an AI algorithm faces a new situation it has not encountered before, extrapolation without a control phase may result in perilous situations for humans [12].
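As an illustration of the top-down approach, ethical rules can be encoded as hard constraints that filter candidate actions before any optimization takes place. The following Python sketch is purely illustrative; the outcome encoding and the rules themselves are hypothetical, not taken from any of the cited frameworks:

# Illustrative top-down, rule-abiding action filter (hypothetical encoding).
FORBIDDEN_RULES = [
    lambda outcome: outcome.get("humans_harmed", 0) > 0,    # "Do not kill/injure"
    lambda outcome: outcome.get("harm_by_inaction", False), # inaction counts too
]

def permissible(actions):
    """Keep only actions whose predicted outcome violates no rule."""
    return [a for a in actions
            if not any(rule(a["outcome"]) for rule in FORBIDDEN_RULES)]

candidates = [
    {"name": "brake",  "outcome": {"humans_harmed": 0}},
    {"name": "swerve", "outcome": {"humans_harmed": 1}},
]
print([a["name"] for a in permissible(candidates)])  # -> ['brake']

Note that when every candidate action violates some rule, the permissible set is empty: this is precisely the rule-conflict problem noted above.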

Artificial Intelligence Embedding Moral and Ethical Principles

A utilitarian approach to ethics consists in choosing, from a set of possibilities, the solution leading to the action that maximizes intrinsic good or net pleasure [13]. This involves quantifying Good or Evil in a given situation. However, certain situations supported by ethical reasons established by empirical study may prohibit the combined execution of certain actions. These complex cases are at the origin of dilemmas in which ethical principles alone do not make it possible to establish a preference. Autonomous agents therefore need to be endowed with the ability to distinguish the most desirable option in the light of the ethical principles involved.
To achieve this goal, the following subsections propose a method called a utility function as a means of avoiding ethical dilemmas.

Using a utility function to help agents make ethical decisions

A number of solutions have been proposed to achieve this goal [14]. One of them is the utility function, also known as the objective function. This function assigns values to outcomes or decisions; the optimal solution is the one that maximizes it. This approach, based on quantitative ethics, determines which action maximizes benefit and minimizes harm. Its objective is to make it possible for an AI algorithm to take the right decisions, particularly when it encounters an ethical dilemma. From a mathematical point of view, the utility function takes a state or a situation as an input parameter and outputs a number [15]. This number indicates how good the given state or situation is for the agent, which should then make the decision leading to the state that maximizes the utility function. For instance, take the case of an autonomous vehicle and assume that the car is in a situation where harm is unavoidable: it will either hit two men on the road or crash into a wall, killing the passenger it is carrying [16]. Based on the utilitarian definition above, the decision that minimizes harm is the one that kills as few people as possible. The car should therefore crash and kill the passenger to save the two pedestrians, because the utility of this outcome is the highest. The same reasoning applies to military drones when they have to choose between multiple outcomes involving moral and ethical principles.
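As a minimal sketch of this reasoning in code (the utility model, which simply counts expected fatalities, is a deliberate oversimplification for exposition, not the method of the cited works):

# Toy utilitarian decision rule for the unavoidable-harm example.
def utility(outcome):
    """Higher is better; here simply the negative of expected fatalities."""
    return -outcome["expected_fatalities"]

outcomes = {
    "stay_course":     {"expected_fatalities": 2},  # hits the two pedestrians
    "crash_into_wall": {"expected_fatalities": 1},  # sacrifices the passenger
}

best_action = max(outcomes, key=lambda a: utility(outcomes[a]))
print(best_action)  # -> 'crash_into_wall'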

Autonomous cars embedding AI algorithms that use a utility function are not yet on the market. Some models available to the general public have an autopilot mode that still requires a human being behind the steering wheel to make decisions in case of a problem, and fully autonomous cars still drive only in test environments [17]. In the near future, the buyers of this type of car will primarily be public institutions such as municipalities. For instance, the city of Helsinki is testing an autonomous bus line, RoboBusLine, which carries passengers on a defined route at limited speed, and an autonomous shuttle is also in service in Las Vegas [18]. However, these are still prototypes in a test phase with an operator on board. The other customers that may be interested in autonomous vehicles are delivery companies, given the cost reduction and efficiency gains of automating such tasks; Amazon, FedEx and UPS are indeed investigating driverless trucks. The utility function is currently under investigation as an active solution for handling ethical dilemmas without modifying the policy in place [19]. Autonomous robots are expanding, and the aim is not only to deal with ethical dilemmas but also to reduce uncertainty by quantifying problems such as exploration or the mapping of unknown areas; both can be defined stochastically (Shannon or Rényi entropy) [20,21]. Describing and acting in an incompletely defined world can be done with the help of estimators, but utility functions describe the perceptual state in line with the rules, so that an active strategy can be implemented. This is already done in robot vision, for example [22].
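To give a concrete flavor of how such uncertainty can be quantified, the sketch below computes the Shannon entropy of an occupancy-grid belief; an active strategy would then prefer actions whose expected observations reduce this number. This is a generic illustration, not the formulation of [20,21]:

import numpy as np

def shannon_entropy_bits(p):
    """Total entropy (bits) of independent binary occupancy beliefs p."""
    p = np.clip(p, 1e-9, 1 - 1e-9)
    return float(np.sum(-p * np.log2(p) - (1 - p) * np.log2(1 - p)))

belief = np.array([0.5, 0.5, 0.9, 0.1])   # 0.5 = completely unknown cells
print(shannon_entropy_bits(belief))        # ~2.94 bits of map uncertainty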

Limits and dangers of the utilitarian approach

The approach described above, which consists in quantifying situations and assessing them with a utility function through a model, has its own limits as far as strong AI is concerned. For weak AI, engineers can implement decision trees at the design stage to establish rules, and can thus anticipate the behavior of the AI fairly easily. On the other hand, as mentioned in section 2.1, advanced AI systems learn directly from the environment and adapt accordingly, so an external observer cannot always predict or anticipate their actions [23]. This is true of the AlphaGo algorithm, which takes decisions and implements strategies that even experts in the game cannot understand, although they lead to an optimal solution. The intelligent agent behaves like a black box whose internal functioning is unknown. This is particularly dangerous when it comes to autonomous vehicles or UAVs that put human life at stake: relying on a utility function alone to decide whether a UAV may strike in an armed conflict could amount to a war crime. It is therefore essential to test AI algorithms in different environments [23] and to cover as many situations as possible before they are approved for use. This involves confronting algorithms with varied situations and ensuring they behave properly by taking the most ethical decisions possible; anomalies can then be identified and corrected immediately.

Conclusion

In just a few years, artificial intelligence has become a strategic issue in Europe, the USA and China. For fundamental considerations of the balance of power between the GAFAM on the one hand and an ethics at the service of the greatest number on the other, it will be crucial to develop an "ethics in context" that goes beyond the computational ethics and utility functions developed by economists. It will be possible to implement an embedded code of ethics using artificial ethical agents, but only if ethical principles remain subject to democratic deliberation involving all participants. My future research will focus on the question of "human values and AI" at different levels: universal, continental (USA vs. China, for example), nation-state, and local communities using AI.

Read More Lupine Publishers Robotics Journal Articles: https://robotics-engineering-lupine-journal.blogspot.com/


Thursday, 17 March 2022

Lupine Publishers| Defining Quantum Artificial Intelligence (Q.A.I)

Lupine Publishers| Journal of Robotics and Mechanical Engineering

Abstract

As we already know, Artificial Intelligence (A.I) mimics the Natural Intelligence (N.I) of the human brain on silicon chips, reproducing as far as possible the brain's intelligence processing with self-decision, self-control, self-programming, self-thinking and time management. Thanks to daily advances in A.I, humankind's life is becoming more comfortable, fast, smart and successful, and things that once seemed impossible are becoming possible in all walks of life. Let us now refocus our attention on the Natural Intelligence (God-made) of mankind and of the other intelligent species on planet Earth. When we observe carefully, we can see a striking commonness in the pattern and structure of the Universe, superclusters, lightning, veins, arteries, roots, branches, seas and the neuron schemas of the human brain; that commonness is the point of departure for exploring new dimensions of Natural Intelligence (God-made) and applying them, with re-engineering and modification, to mimic Artificial Intelligence (Man-made). On the basis of those findings I have coined the term "Quantum Artificial Intelligence (Q.A.I)".

Keywords: Natural Intelligence (N.I), Artificial Intelligence (A.I), Quantum Natural Intelligence (Q.N.I), Quantum Artificial Intelligence (Q.A.I).

Introduction

(Figure 1) As mentioned above, the patterns of our Universe, of superclusters and of human brain neural schemas are analogically the same. Hence not only are the signal transmissions and receptions involved in intelligence processing the same, but the nature (format) of the intelligence signals is also the same, namely quanta (light). The entire Universe is a mixture of light and dark energies and matter as its building blocks; all living and non-living creatures, objects, entities, elements, occurrences, appearances and illusions are made from, or born of, the same building blocks of light at various isolating frequencies, and so is the human brain. On the basis of some available, proven research and from my own perspective, the Universe and every human brain are directly connected and linked up and down by light frequency, and Quantum Mechanics can be engaged to prove it. Our thoughts are things, and our perception and formulation of life and the Universe arise in the same way; thus what we feel to be physical is in fact virtual: quantum images, frames, pictures or illusions of our thought frequencies appearing before our eyes to develop, structure and enhance our intelligence, also called wisdom. Hence Natural Intelligence is made up of forms of light energy at variously tuned frequencies (in line with string theory), which I call "Quantum Natural Intelligence (Q.N.I)"; once Q.N.I is fully understood, we will be able to mimic it artificially as "Quantum Artificial Intelligence (Q.A.I)". The concept here is that, instead of silicon chips or electronic Artificial Intelligence, A.I would be engineered using forms of light: light-signal circuits, or quantum circuits, which are precise combinations of different wavelengths and light frequencies (quanta/photons) behaving both as waves (light data buses) and as particles (signals), so as to formulate processing logic and build up Artificial Intelligence using light. Mankind could in future develop something that looks like mere radiation but is in fact complicated light engineering, using light waves and particles (quanta) in their billions and acting as a complete Quantum Artificial Intelligence (Q.A.I) based robotic assembly to win the world, the Universe and the Multiverse.

Conclusion

If, in the near future, scientists and researchers are able to understand what I have tried to share as the biggest fact about the Universe and the human brain, namely that all brains are directly connected with the entire Universe through light frequency, that the structures of the Universe, of superclusters and of human brain neural schemas are the same, and that intelligence processing is likewise the same, following the concepts and principles of Quantum Physics and Quantum Mechanics, then on this ground the human brain is Quantum Natural Intelligence (Q.N.I), and all formulation of life and the Universe in the brain is due to light intelligence. After understanding this phenomenon of Q.N.I, scientists and engineers can move to the engineering of "Quantum Artificial Intelligence (Q.A.I)", which would appear to be mere bundles and bunches of light of various colors and wavelengths confined to a single spot, but would actually be space robots, spacecraft, transportation sources and so on. This intelligent light form could be sent and transmitted at the speed of light for instant time and space travel across the Universe, to explore other planets, galaxies, superclusters and stars, and also to test the concept of the "Multiverse (Parallel Universes)". Human intelligence as Quantum Natural Intelligence (Q.N.I) and its mimic, Quantum Artificial Intelligence (Q.A.I), would become exactly the same; direct connection between mankind and all robots would therefore be possible without encodings, decodings or interfaces, with intelligent signal processing, conversion, translation and actuation. Hence Q.A.I is nothing but an ultra Artificial Intelligence light form (light-based A.I robotics), and several such Q.A.I robots, spacecraft, objects and even alien lives may already exist throughout the Universe and Multiverse, surrounding us as what we merely take to be light or radiation, but which is not.

Read More Lupine Publishers Robotics Journal Articles: https://robotics-engineering-lupine-journal.blogspot.com/ 

Friday, 11 March 2022

Lupine Publishers| Business in Artificial Intelligence

Lupine Publishers| Journal of Robotics and Mechanical Engineering


Abstract

There is no need to explain again what Artificial Intelligence is, given its day-by-day expansion into all the needs and applications of life, how it is changing every facet and scenario on planet Earth, and how it may soon do the same in space, where startup initiatives are already under way. I therefore wrote this article to outline briefly the near-future possibilities for business scope, market demand, customer and consumer needs, future job forms and the employment skills required to survive in them, drawing on my experience as a scientist, practitioner, educator and worldwide speaker in the field of Artificial Intelligence who has coined several new future research terms in A.I.

Keywords: Future Business; Advanced A.I; Space Robotics; Virtual A.I; DeepMind; Bionic Brain; Medical Robotics; Humanoid; Virtual Robotics; Intelligence Devices

Modeling

There are many more domains in which to work and market Artificial Intelligence, but in my hexagonal model below I have chosen only some of the most promising change agents and strong market players: Bionic Brain/DeepMind & Humanoid, Space Robotics & Cyborgs, Consumer Robotics & NLP Echo Devices/Assistants, Military & Defense Robotics, Medical & Nano Robotics, and Virtual A.I/Virtual Robotics (Figure 1). The field with the greatest market scope is the Bionic Brain (neural schemas and processing like those of the human brain), DeepMind-style learning and Humanoid (human-like robot) engineering, design, manufacturing and sales, together with the corresponding work skills and future job openings. Equal scope and attention go to Space Robotics and cyborg devices/human elements, which have already begun mankind's journey and whose technologies will soon reach Moon and Mars exploration; these too have a great market and hence future jobs. Consumer robotics and NLP echo devices/assistants also offer sustainable future business and job opportunities after the success of Google Assistant, Amazon Alexa and the many smartphones and similar devices connected through the Internet of Things (IoT) to make human lives easier, more comfortable and better each day: smart kitchens, smart vehicles, smart phones and devices, smart homes, consumer appliances and so on. A.I is also expanding greatly in the domain of military and defense robots, providing high-end security while saving soldiers' lives, and medical robots, surgical robots and nano robots for in-body diagnosis likewise have a large market and many job options. In this race, equal scope, attention and market go to developing human-brain-like software intelligence, a virtual humanoid that would be hardware-platform independent, comparable to Windows, iOS, macOS and Android intelligence; companies like Google, Apple, Microsoft and Amazon will work towards it, leading to future job skills and markets.

Figure 1: A.I Market Hexagonal Model. Source: Dr. Sadique Shaikh & Tanvir Begum.

Conclusion

Future employment, future jobs and future skills will be based on two strong technologies, Artificial Intelligence (A.I) and the Internet of Things (IoT), which will connect all living and non-living, naturally and artificially intelligent humans, objects, elements and things, establishing communication and enabling commands to be processed for task accomplishment.

Read More Lupine Publishers Robotics Journal Articles: https://robotics-engineering-lupine-journal.blogspot.com/ 

Friday, 4 March 2022

Lupine Publishers| Statistical Model of The Postcombustion Subprocess in an Oven of Multiple Hearth Furnace

Lupine Publishers| Journal of Robotics and Mechanical Engineering


Abstract

Complex multivariable processes take place in multiple hearth furnaces, and their modeling involves a high degree of uncertainty. The main variables characterizing the post-combustion subprocess were identified, and data covering a three-month period of operation of the installation were collected and subjected to a backward stepwise regression analysis. This analysis showed that the linear correlation coefficient was 0.79 for the temperature of hearth four and 0.65 for the temperature of hearth six, and identified the independent variables with the greatest influence on these process output variables.

Keywords: Furnaces, Post-Combustion subprocess, Regression analysis

 

Introduction

Nickel-producing companies are characterized by highly complex continuous processes that require automation to achieve greater production efficiency. The company under study operates according to the carbonate-ammoniacal leaching scheme for reduced ore and has a multiple hearth reduction furnace plant, which constitutes a key stage in the production process. The reduction furnaces are large metal cylinders in which nickel and cobalt oxides are reduced to their corresponding metallic forms [1]. In this equipment, a profile of temperature and reducing gases (carbon monoxide and hydrogen) must be maintained for each hearth. Failure to maintain it produces significant losses through the formation of crystalline structures of iron spinels, olivines and pyroxenes, which trap nickel and cobalt in the form of oxides and, to a lesser extent, in the metallic state, and through the appearance of high contents of metallic iron in the reduced mineral; the result is a decrease in nickel and cobalt extraction in the leaching process [2]. To help establish the thermal profile required by the furnace, secondary air is introduced into hearths four and six (post-combustion) in order to guarantee the complete combustion of residual carbon monoxide and other combustible gases coming from incomplete combustion in the lower hearths. This exothermic reaction generates heat that contributes to the preheating and drying of the mineral. The control loop of hearth four operates automatically while that of hearth six is operated manually; as a consequence, the physico-chemical process taking place in these hearths is not carried out efficiently, and temperature oscillations are observed that affect the thermal and aerodynamic processes in the furnace. The literature reports linear mathematical models for the furnaces of a company with similar characteristics operating under different conditions [3]. These models were obtained through experimental identification, with mean square fit values between 0.72 and 6.1, using the air flow to hearths four and six as input variables and the corresponding hearth temperatures as output variables. Montero [4] obtained dynamic mathematical models, with fits between 62 and 72%, characterizing the reduction furnaces of the company in question; the input variables were the flow of ore fed to the furnace and the air flow to hearths four and six, and the output variables were the temperatures of these hearths and the residual carbon monoxide concentration. To design an effective control strategy for the post-combustion subprocess, it is necessary to know the behavior of the variables in different situations and to obtain a process model. The objective of this work is to obtain a statistical model representing the behavior of the post-combustion subprocess.

Materials and Methods

Description of The Reactor

Herreshoff-type furnaces [5] are composed of an upright metal cylinder lined internally with chamotte or high-alumina bricks and protected externally by a metal housing, together with agitation facilities, ore feed and discharge systems, and combustion chambers. They are formed internally by 17 hearths shaped like spherical vaults (Figure 1). The furnace has a central rotating shaft to which 68 arms are articulated, four per hearth. Depending on the hearth, each arm carries eight to 12 inclined vanes or teeth which, depending on the furnace zone, either retain or sweep the material and, depending on whether the hearth is odd or even, allow the discharge from one hearth to the next in zigzag fashion. In even hearths, the discharge is carried out through 30 equidistant holes located at the periphery; in odd hearths, through a hole located in the center around the central axis. The combustion chambers are equipped with high-pressure oil burners, two per hearth on hearths six, eight, 10 and 12, except for hearths 14 and 15, which have only one chamber each. In each chamber, the oil distributor consists of a main valve, a filter to separate impurities, a solenoid valve, a thermometer, a manometer, a flow meter with bypass, a pressure regulator and the burner [6].

Figure 1: Schematic diagram of the reduction furnace seen from the SCADA (CITECT).

Influence of Temperature in The Reduction Process

Temperature is a fundamental parameter in pyrometallurgical processes because it facilitates the weakening of the crystalline structures of the mineral and hence the development of the reduction reactions. During operation, a prescribed profile must be maintained, increasing from top to bottom, in order to guarantee gradual heating of the mineral. Special attention is paid to the temperature values of hearths four, 10 and 15. Temperature stability in hearth four is extremely important because of its influence on the temperatures of the other hearths: values below the norm displace the thermal zones of the furnace, which affects the extraction of nickel and cobalt [7].

Description of The Post-Combustion Installation

The system basically consists of a centrifugal fan (Table 1) installed on the upper floor (roof) of the furnace, with a hot-air suction intake (150 to 200 °C) drawing from the chimney at the central axis, and an air duct from the fan to hearths four and six with a flow regulation system based on butterfly valves. The post-combustion air duct at the fan outlet has an internal diameter of 0.407 m and runs down parallel to the furnace body to hearth four, where it branches into two ducts of equal diameter, one going to hearth four and the other to hearth six (Figure 1). Figure 2 shows the characteristics of the fan under its constant operating conditions. The fan curve intercepts the system characteristic curve at the operating point (A), for an approximate air flow before the fork of 6,796 m³/h at a pressure of 3 kPa. Considering that the total air flow guaranteed before the branching is constant and corresponds to a fixed-duct air system, Table 2 presents the air flow to hearths four and six after the split as a function of the valve opening.

Table 1: Technical data of the post-combustion fan.

Table 2: Equivalent air flow based on valve opening.

Figure 2: Characteristics of the afterburner fan
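The operating point (A) is simply the intersection of the fan curve with the system resistance curve. A minimal numerical sketch follows; the quadratic coefficients are invented solely so that the intersection lands near the reported values, since the real curves of Figure 2 are not reproduced here:

from scipy.optimize import brentq

# Hypothetical quadratic fits of the curves in Figure 2 (coefficients chosen
# only so the intersection lands near the reported operating point A).
fan_kpa    = lambda q: 6.0 - 6.495e-8 * q**2   # fan pressure curve, kPa
system_kpa = lambda q: 6.495e-8 * q**2         # duct resistance curve, kPa

q_op = brentq(lambda q: fan_kpa(q) - system_kpa(q), 1.0, 20000.0)
print(round(q_op), "m3/h at", round(system_kpa(q_op), 2), "kPa")  # ~6796, ~3.0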

Statistical Analysis of The Data

Proper presentation of the data allows any researcher to interpret them easily. This presentation can be done in two ways (a minimal sketch of both follows this list):
a) Frequency tables: the data are grouped into classes or categories with their respective frequencies. This is applicable to any type of variable.
b) Graphics, for example the histogram: the data are represented by rectangles based on a horizontal axis, with area proportional to the frequency of the class interval. Histograms are used primarily for continuous variables.
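A minimal Python sketch of both presentations, using synthetic temperature data (the values are illustrative only, not the real series):

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

temps = pd.Series(np.random.normal(700, 50, 500), name="TH4")  # synthetic data

# a) Frequency table: group the values into classes with their frequencies
print(pd.cut(temps, bins=8).value_counts().sort_index())

# b) Histogram: rectangles whose area is proportional to class frequency
temps.plot.hist(bins=8, edgecolor="black")
plt.xlabel("Hearth temperature four (°C)")
plt.show()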

Regression Analysis

Through a backward stepwise regression analysis, the main variables influencing the process-dependent variables are determined, together with the linear correlation coefficient.
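Statgraphics automates this procedure; a minimal, generic re-implementation with statsmodels is sketched below. The DataFrame df and its column names are hypothetical stand-ins for the Table 3 variables, not the actual analysis files:

import statsmodels.api as sm

def backward_stepwise(X, y, alpha=0.05):
    """Backward elimination: refit OLS, dropping the least significant
    regressor until every remaining p-value is below alpha."""
    cols = list(X.columns)
    while cols:
        model = sm.OLS(y, sm.add_constant(X[cols])).fit()
        pvalues = model.pvalues.drop("const")
        worst = pvalues.idxmax()
        if pvalues[worst] < alpha:
            return model
        cols.remove(worst)
    return None

# Hypothetical usage with the furnace data:
# model = backward_stepwise(df[["ApH4", "TH0", "TH2", "TH6", "TH13", "TC8S"]],
#                           df["TH4"])
# print(model.summary())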

Results and Discussion

As an auxiliary tool for selecting the variables to be used in the control, a statistical analysis was carried out on a set of operating data measured appropriately and continuously. The objective was to determine the variables with the greatest influence on the temperature behavior of hearths four and six of the furnace. For this analysis, operating data from furnace five were taken during the months of May to July of a recent year and processed with Microsoft Excel and Statgraphics Plus 5.1. These data were obtained from the reports issued by the CITECT supervisory system. Table 3 shows the average values, represented by (X), and the standard deviations, represented by (S), for each of the variables measured in the furnace, which are:

Table 3: Behavior of the furnace variables for three months of work.

1. ApH4, ApH6 [Opening of the air flow regulating valve to hearths four and six (%)].
2. TH0, TH2, TH4, TH6, TH7, TH9, TH11, TH13, TH14, TH15 [Temperature of hearths zero, two, four, six, seven, nine, 11, 13, 14 and 15 respectively (°C)].
3. PH0, PH16 [Pressure in hearths zero and 16 (Pa)].
4. TC6S, TC8N, TC8S, TC10N, TC10S, TC12N, TC12S, TC15S [Temperature of the combustion chambers of hearths six, eight, 10, 12 and 15, north and south sides (°C)].
5. CO [Residual carbon monoxide concentration (%)].
In the months indicated above, the post-combustion air flow meter was not installed, so the openings of the air flow regulating valves to hearths four and six were taken as measures proportional to the air flow.
The ore processed during this period had very good characteristics, with a high iron content. A descriptive statistical analysis of the general trend of the thermal profile of hearths four and six during these three months of work was carried out, the results of which are presented in Table 4. The values of the kurtosis and of the asymmetry coefficient show that the dependent variables (temperatures of hearths four and six) behave like normal distributions. The frequency histograms for TH4 and TH6 over the three months of work are presented in Figures 3-8. With the data for the month of May, a backward stepwise regression analysis was carried out to determine the independent variables with the greatest influence on TH4 and TH6 (equations 1 and 2):
1) TH4 = -166.9 + 0.3·TC8S - 0.5·TH0 + 0.13·TH13 + 1.4·TH2 - 0.8·ApH4 - 0.1·TH6
2) TH6 = 171.6 + 0.3·TC6S + 0.8·TH13 - 1.1·TH14 + 0.7·TH15 - 0.13·TH4 - 0.2·ApH4 + 0.6·ApH6 + 0.9·TH7 - 0.9·TH9

Table 4: Summary of the descriptive statistical analysis of the sample for three months.

Figure 3: Hearth temperature histogram four (TH4). May

Figure 4: Hearth temperature histogram six (TH6). May

Figure 5: Hearth temperature histogram four (TH4). June

Figure 6: Hearth temperature histogram six (TH6). June

Figure 7: Hearth temperature histogram four (TH4). July

Figure 8: Hearth temperature histogram six (TH6). July

In Table 5, the R² statistic indicates that model 1 explains 62% of the variability of the temperature of hearth four, while model 2 explains 42% of the variability of the temperature of hearth six. The adjusted R², which is more suitable for comparing models with different numbers of independent variables, is 0.62 for TH4 and 0.42 for TH6. The standard error of the estimate gives the standard deviation of the residuals: 50.62 for TH4 and 47.35 for TH6. Tables 6 & 7 show the analysis of variance for the dependent variables TH4 and TH6; the p-values are less than 0.01, so there is a statistically significant relationship between the variables at the 99% confidence level.

Table 5: Summary of the regression analysis for TH4 and TH6.

Table 6: Analysis of variance for hearth temperature four.

Table 7: Analysis of variance for hearth temperature six.
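For reference, these summary statistics can be recomputed directly from the model residuals. A minimal sketch for n observations and k regressors (function and argument names are hypothetical):

import numpy as np

def fit_statistics(y, y_hat, k):
    """R-squared, adjusted R-squared and standard error of the estimate."""
    n = len(y)
    ss_res = np.sum((np.asarray(y) - np.asarray(y_hat)) ** 2)
    ss_tot = np.sum((np.asarray(y) - np.mean(y)) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    r2_adj = 1.0 - (1.0 - r2) * (n - 1) / (n - k - 1)
    see = np.sqrt(ss_res / (n - k - 1))   # std. error of the estimate
    return r2, r2_adj, see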

Conclusion

The statistical analysis shows the influence of six variables on the temperature of hearth four and of nine variables on the temperature of hearth six; notable among them are the openings of the air flow regulating valves, which can be manipulated by a final control element. The multivariable nature of the thermal profile of hearths four and six with respect to the air flow supplied to these hearths was thus verified.

Read More Lupine Publishers Robotics Journal Articles: https://robotics-engineering-lupine-journal.blogspot.com/


 
