Academic literature on the topic 'Statistically evaluates parameters of precision'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Statistically evaluates parameters of precision.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Statistically evaluates parameters of precision"

1

Koumpia, Effimia, Athanasios E. Athanasiou, Theodore Eliades, and Michael Knösel. "Precision of a Reflectance Spectrophotometer in Measuring Anterior Tooth Color." Open Dentistry Journal 12, no. 1 (October 30, 2018): 884–95. http://dx.doi.org/10.2174/1874210601812010884.

Abstract:
Background: Intraorally, a common instrumental approach for measuring tooth color is reflectance spectrophotometry. Objective: To evaluate the precision of a reflectance spectrophotometer in accurately measuring anterior tooth color. Methods: The twelve labial surfaces of the anterior teeth of sixteen patients were measured spectrophotometrically (SpectroShade™ Micro) on three non-consecutive days (1st, 2nd, 8th). Tooth color was converted to L*, a* and b* colorimetric values; intra-examiner repeatability was assessed in ΔΕ-units between two same-day repeated measurements. Intra-examiner reproducibility was measured for the effect of tooth type, time and their interaction. The linear effect of the acquisition angle on the colorimetric values of each tooth was also estimated. Results: The highest values of systematic or random error occurred for teeth #33, #43 and #32. There were no statistically significant differences in systematic or random errors for any tooth between the three measurement days. Statistically significant differences were found for tooth type (p=0.039), whereas time and tooth and time interaction were not statistically significant. A statistically significant linear correlation was found between the L* and a* values and the acquisition angle for teeth #12 and #31, (p<0.008). Conclusion: The reflectance spectrophotometer provided a precise measurement of tooth color in-vivo since the systematic and random errors generated were below the threshold for perceivable color mismatches (ΔΕ<1). In rejection of the null hypotheses, the tooth type (maxillary central incisors) and variation of the acquisition angle of image capture (L* and a* parameters in teeth #12 and #31) affected the reproducibility of intraoral spectrophotometric measurements.
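
The repeatability metric used here is the CIELAB colour difference; in its CIE76 form, ΔE is the Euclidean distance between two L*, a*, b* readings. A minimal sketch on hypothetical values (not the study's data):

```python
import numpy as np

def delta_e(lab1, lab2):
    """CIE76 colour difference: Euclidean distance in L*a*b* space."""
    return float(np.linalg.norm(np.asarray(lab1) - np.asarray(lab2)))

# Two same-day repeated measurements of one tooth (hypothetical values)
first, second = (72.4, 1.8, 18.9), (72.1, 1.9, 19.3)
print(delta_e(first, second))  # ~0.51, below the ΔE < 1 perceptibility threshold
```
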
2

Stephens, T. W. "Performance of two new algorithms for estimating within- and between-method carryover evaluated statistically." Clinical Chemistry 34, no. 9 (September 1, 1988): 1805–11. http://dx.doi.org/10.1093/clinchem/34.9.1799.

Abstract:
Accurate and precise algorithms for estimating within-method carryover, based on the minimization of a unique "carryover sum of squares," and between-method carryover, based on a weighted Deming regression of first sample recovery vs carryover-corrected "true" recovery, are described and compared with traditional methods by use of a Monte Carlo study. In addition, I have studied the experimental parameters that influence the accuracy and precision of carryover estimation. The new algorithm for estimating within-method carryover is unbiased under most conditions, whereas the traditional algorithm is biased low under most conditions. The new algorithm is also more precise, owing to more-efficient utilization of information contained in an analytical run performed for carryover estimation. Between-method carryover in a random-access analyzer is estimated quantitatively by the second proposed algorithm and is found to be readily and precisely determinable. Use of these methods in combination to evaluate analytical interaction should allow the prediction of carryover error under most current analytical situations.
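
To illustrate the idea of minimising a "carryover sum of squares," the sketch below simulates a run with 3% within-method carryover, corrects it for a trial value of the carryover fraction k, and picks the k that minimises the spread of corrected replicates within each nominal level. This is a stand-in illustration only; the published algorithm differs in its details.

```python
import numpy as np

rng = np.random.default_rng(0)
levels = np.tile([100.0, 5.0, 5.0], 13)           # a high sample, then two lows
obs = levels.copy()
obs[1:] = 0.97 * levels[1:] + 0.03 * levels[:-1]  # inject 3% carryover
obs += rng.normal(0.0, 0.2, obs.size)             # analytical noise

def carryover_ss(k):
    """Spread of carryover-corrected results within each nominal level."""
    corrected = (obs[1:] - k * obs[:-1]) / (1.0 - k)
    lev = levels[1:]
    return sum(corrected[lev == v].var() for v in np.unique(lev))

k_grid = np.linspace(0.0, 0.10, 1001)
k_hat = k_grid[np.argmin([carryover_ss(k) for k in k_grid])]
print(f"estimated carryover fraction: {k_hat:.3f}")  # close to the true 0.03
```
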
3

Park, Il Joong, Sunhyun Ahn, Young In Kim, Seon Joo Kang, and Sung Ran Cho. "Performance Evaluation of Samsung LABGEO HC10 Hematology Analyzer." Archives of Pathology & Laboratory Medicine 138, no. 8 (August 1, 2014): 1077–82. http://dx.doi.org/10.5858/arpa.2013-0439-oa.

Abstract:
Context.—The Samsung LABGEO HC10 Hematology Analyzer (LABGEO HC10) is a recently developed automated hematology analyzer that uses impedance technologies. The analyzer provides 18 parameters including a 3-part differential at a maximum rate of 80 samples per hour. Objective.—To evaluate the performance of the LABGEO HC10. Design.—We evaluated precision, linearity, carryover, and the relationship for complete blood cell count parameters between the LABGEO HC10 and the LH780 (Beckman Coulter Inc) in a university hospital in Korea according to the Clinical and Laboratory Standards Institute guidelines. Sample stability and differences due to the anticoagulant used (K2EDTA versus K3EDTA) were also evaluated. Results.—The LABGEO HC10 showed linearity over a wide range and minimal carryover (<1%) for white blood cell, hemoglobin, red blood cell, and platelet parameters. Correlation between the LABGEO HC10 and the LH780 was good for all complete blood cell count parameters (R > 0.92) except for mean corpuscular hemoglobin concentration. The estimated bias was acceptable for all parameters investigated except for monocyte count. Most parameters were stable up to 24 hours both at room temperature and at 4°C. The difference by anticoagulant type was statistically insignificant for all parameters except for a few red cell parameters. Conclusions.—The accurate results achievable and the simplicity of operation make the unit recommendable for small to medium-sized laboratories.
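
For context, analyzer carryover is commonly quantified by running a high sample three times followed by a low sample three times. The snippet below applies that textbook formula to hypothetical readings; the CLSI protocol the authors actually followed is more detailed.

```python
def carryover_percent(high, low):
    """Carryover (%) from three high replicates then three low replicates:
    100 * (L1 - L3) / (H3 - L3)."""
    return 100.0 * (low[0] - low[2]) / (high[2] - low[2])

# Hypothetical WBC readings (arbitrary units)
print(carryover_percent([251.0, 250.0, 249.0], [4.6, 4.1, 4.0]))  # ~0.24% (< 1%)
```
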
4

"Spectrophotometric Determination of Paracetamol in bulk and Pharmaceutical Preparations." Baghdad Science Journal 12, no. 2 (June 7, 2015): 317–23. http://dx.doi.org/10.21123/bsj.12.2.317-323.

Abstract:
A simple and rapid spectrophotometric method for the estimation of paracetamol has been developed. The method is based on diazotisation of 2,4-dichloroaniline followed by a coupling reaction with paracetamol in sodium hydroxide medium. All variables affecting the reaction conditions were carefully studied. Beer's law is obeyed in the concentration range of 4–350 µg mL⁻¹ at 490 nm. The method is successfully employed for the determination of paracetamol in pharmaceutical preparations. No interferences were observed in the proposed method. Analytical parameters such as accuracy and precision have been established for the method and evaluated statistically to assess the application of the method.
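
As a sketch of how such analytical parameters are evaluated statistically, the snippet below fits a Beer's-law calibration line and computes percent recovery (accuracy) and relative standard deviation (precision). All numbers are hypothetical, not the paper's data:

```python
import numpy as np

# Hypothetical calibration: absorbance at 490 nm vs paracetamol conc. (ug/mL)
conc = np.array([4.0, 50.0, 100.0, 200.0, 350.0])
absb = np.array([0.012, 0.148, 0.301, 0.598, 1.046])

slope, intercept = np.polyfit(conc, absb, 1)   # Beer's law: A = m*C + c
back_calc = (absb - intercept) / slope         # back-calculated concentrations

recovery = 100.0 * back_calc / conc            # accuracy, %
reps = np.array([99.1, 100.4, 99.8, 100.9, 99.6])  # replicate recoveries, %
rsd = 100.0 * reps.std(ddof=1) / reps.mean()   # precision, %RSD
print(recovery.round(1), f"RSD = {rsd:.2f}%")
```
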
5

Cunali, Rafael Schlögel, Rafaella Caramori Saab, Gisele Maria Correr, Leonardo Fernandes da Cunha, Bárbara Pick Ornaghi, André V. Ritter, and Carla Castiglia Gonzaga. "Marginal and Internal Adaptation of Zirconia Crowns: A Comparative Study of Assessment Methods." Brazilian Dental Journal 28, no. 4 (August 2017): 467–73. http://dx.doi.org/10.1590/0103-6440201601531.

Abstract:
Marginal and internal adaptation is critical for the success of indirect restorations. New imaging systems make it possible to evaluate these parameters precisely and non-destructively. This study evaluated the marginal and internal adaptation of zirconia copings fabricated with two different systems, using both silicone replica and microcomputed tomography (micro-CT) assessment methods. A metal master model, representing a preparation for an all-ceramic full crown, was digitally scanned and polycrystalline zirconia copings were fabricated with either Ceramill Zi (Amann-Girrbach) or inCoris Zi (Dentsply-Sirona), n=10. For each coping, marginal and internal gaps were evaluated by the silicone replica and micro-CT assessment methods. Four assessment points of each replica cross-section and micro-CT image were evaluated using imaging software: marginal gap (MG), axial wall (AW), axio-occlusal angle (AO) and mid-occlusal wall (MO). Data were statistically analyzed by factorial ANOVA and the Tukey test (α=0.05). There was no statistically significant difference between the methods for MG and AW. For AO, there were significant differences between methods for Amann copings, while for Dentsply-Sirona copings similar values were observed. For MO, both methods presented statistically significant differences. A positive correlation was observed between the MG values determined by the two assessment methods. In conclusion, the assessment method influenced the evaluation of marginal and internal adaptation of zirconia copings. Micro-CT showed lower marginal and internal gap values when compared to the silicone replica technique, although the difference was not always statistically significant. The marginal gap and axial wall assessment points showed the lowest gap values, regardless of the ceramic system and assessment method used.
6

Sente, Jelena, Dragoslav Jakonic, Miroslav Smajic, Ilona Mihajlovic, Goran Vasic, Romana Romanov, and Lela Maric. "Reduction of juvenile obesity by programmed physical exercise and controlled diet." Vojnosanitetski pregled 69, no. 1 (2012): 9–15. http://dx.doi.org/10.2298/vsp1201009s.

Abstract:
Background/Aim. Obesity is the most common disease of nutrition and is a consequence of reduced movement. Unfortunately, this problem is increasingly present at juvenile age, so that pediatric outpatient offices are dominated by obese young people. The aim of this study was to evaluate and quantify the effects of a reducing treatment for juvenile obesity conducted by programmed physical exercise and controlled diet. Methods. We tested a sample of 136 respondents of both sexes (76 girls and 60 boys) aged 13 ± 0.6 years. This prospective study took 3 months in 2007, using an experimental longitudinal design. The data obtained after the measurements were processed with statistical programs to calculate the basic and dispersion parameters. To determine the difference between the initial and final measurements we applied univariate analysis of variance (ANOVA), and differences in the system of variables were determined by multivariate analysis of variance (MANOVA). Results. The results of ANOVA in the form of F values indicated that the differences between the initial and final measurements in all parameters of circumference dimensionality and subcutaneous fat tissue were significant (p = 0.00). Also, differences in parameters of body constitution and indicators of alimentation showed a high statistical significance (p = 0.00). The results of multivariate analysis (MANOVA), using Wilks' lambda test, also indicated that the differences between the initial and final measurements in the area of anthropometric measures and indicators of alimentation and constitution were statistically significant (p = 0.00). Conclusion. Application of physical exercise and a controlled diet leads to a significant reduction of anthropometric parameters and anthropological indicators of alimentation.
7

Parasonis, Josifas. "POSSIBILITIES OF OPERATIONAL USE OF RELIABILITY THEORY METHODS/PATIKIMUMO TEORIJOS METODŲ PRAKTINIO TAIKYMO GALIMYBĖS." JOURNAL OF CIVIL ENGINEERING AND MANAGEMENT 7, no. 5 (October 31, 2001): 339–44. http://dx.doi.org/10.3846/13921525.2001.10531751.

Abstract:
The possibilities of using methods of reliability theory are considered from the point of view of solving three groups of problems: first, collecting representative statistical data about loadings, design schemes, physical-mechanical characteristics of materials, geometrical parameters of structures, etc.; second, investigating the reliability of the applied deterministic calculation and statistically evaluating possible inaccuracies in calculations; and third, establishing rated probabilities of the failure of structures. The use of reliability theory methods can be extended. It is necessary to accumulate statistical data about changes over time in the strength and deformation properties of structural materials for reinforced concrete structures, and about the variability of concrete strength in structures. It is also necessary to accumulate statistics about actions and to solve the problem of factor values. It is noted that studies of the reliability of design methods have so far been realized only for rather simple members subjected to bending and compression, without consideration of material properties over time. The expediency of experimental research on the reliability of structures is discussed. Taking into account our experience of how the precision of the geometrical parameters of mounting influences the reliability of frames of one-storey industrial buildings, it is expedient to study reliability separately for the design, mounting and maintenance stages. A new approach to reliability estimation on the basis of ensuring the functional reliability of buildings is discussed, whereby the probability of failure should equal the rated probability magnitude.
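
An elementary building block of the reliability methods discussed here is the second-moment reliability index for a normally distributed resistance R and load effect S; the figures below are hypothetical:

```python
from math import sqrt
from statistics import NormalDist

mu_R, sd_R = 420.0, 35.0   # resistance mean / st.dev. (hypothetical, kN)
mu_S, sd_S = 300.0, 40.0   # load-effect mean / st.dev. (hypothetical, kN)

beta = (mu_R - mu_S) / sqrt(sd_R**2 + sd_S**2)  # Cornell reliability index
p_f = NormalDist().cdf(-beta)                   # failure probability P(R - S < 0)
print(f"beta = {beta:.2f}, Pf = {p_f:.2e}")
```
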
8

Shamanna, Paramesh, Mala Dharmalingam, Arun Vadavi, Jahangir Mohammed, Terrence Poon, Mohamed Thajudeen, Ashok Keshavamurthy, and Suchitra Bhonsley. "Response to Twin Enabled Precision Treatment for Reversing Diabetes: An Initial Analysis at 4 Weeks of the Ongoing Randomised Controlled Trial." Journal of the Endocrine Society 5, Supplement_1 (May 1, 2021): A474—A475. http://dx.doi.org/10.1210/jendso/bvab048.970.

Abstract:
Introduction: Technology-enabled precision nutrition, a combination of macro, micro and biota nutrients, along with Continuous Glucose Monitoring (CGM), has been demonstrated to be key for the reversal of diabetes. Methods: We conducted an initial analysis (n=23) of the ongoing randomized controlled trial of Twin Precision Treatment (TPT): a novel whole-body digital twin enabled precision treatment for reversing diabetes. The clinical and biochemical parameters were evaluated as the longitudinal follow-up at the first follow-up visit at 4 weeks. The target sample size is 300 with an estimated duration of 5 years. Descriptive statistics were used. Results: 8/23 (35%) patients achieved the intended outcome of reversal of HbA1c while off any anti-diabetic medications. There was a statistically significant improvement in HbA1c % (8.5 ± 1.6 to 6.8 ± 0.66; p<0.0001), Fasting Blood Glucose mg/dL (FBS) (151 ± 44 to 98 ± 18; p<0.0001), HOMA2-IR (1.7 ± 0.64 to 1 ± 0.45; p=0.0001), HOMA2-Beta (53 ± 28 to 86 ± 38; p=0.0013), Systolic BP (129 ± 11 to 120 ± 11; p=0.008) and serum albumin g/dL (4.5 ± 0.21 to 4.2 ± 0.31; p=0.0042). The baseline values for the other parameters, including body weight, waist circumference, Diastolic BP, Alanine transaminase (ALT), Gamma-glutamyl transferase (GGT), eGFR, WBC, Platelet, Globulin and ESR, demonstrated a clinically relevant, superior change. Discussion: The initial analysis of the prospectively designed trial reveals a remarkable improvement in the clinical and biochemical parameters that would determine complete and prolonged remission of diabetes. The initial results are an early indicator of the translation of the scientific rationale for the technological intervention, through digital twin technology powered by the Internet of Things (IoT) and Artificial Intelligence (AI), into a durable, achievable modality for the reversal of diabetes. The impactful glycemic control appears to have positive, meaningful metabolic health consequences. Trial Registration: The trial has been prospectively registered in the Clinical Trial Registry – India: Reference no. CTRI/2020/08/027072 on August 10, 2020.
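
The before/after comparisons reported here are the kind of paired analysis sketched below; the values are invented for illustration and are not the trial's data:

```python
import numpy as np
from scipy import stats

# Hypothetical paired HbA1c (%) at baseline and at 4 weeks, n = 10
before = np.array([8.9, 7.6, 9.4, 8.1, 10.2, 7.9, 8.4, 9.0, 7.5, 8.8])
after  = np.array([7.0, 6.5, 7.4, 6.8,  7.9, 6.6, 6.9, 7.2, 6.4, 7.1])

t, p = stats.ttest_rel(before, after)  # paired-samples t-test
print(f"t = {t:.2f}, p = {p:.1e}")
```
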
9

Todorovic, Ana, Aleksandar Todorovic, Aleksandra Spadijer-Gostovic, Vojkan Lazic, Biljana Milicic, and Slobodan Djurisic. "Reliability of conventional shade guides in teeth color determination." Vojnosanitetski pregled 70, no. 10 (2013): 929–34. http://dx.doi.org/10.2298/vsp110513019t.

Abstract:
Background/Aim. Color matching in prosthodontic therapy is a very important task because it influences the esthetic value of dental restorations. Visual shade matching represents the most frequently applied method in clinical practice. Instrumental measurements provide objective and quantified data in color assessment of natural teeth and restorations. In instrumental shade analysis, the goal is to achieve the smallest ΔE value possible, indicating the most accurate shade match. The aim of this study was to evaluate the reliability of commercially available ceramic shade guides. Methods. A VITA Easyshade spectrophotometer (VITA, Germany) was used for instrumental color determination. Utilizing this device, color samples of ten VITA Classical and ten VITA 3D-Master shade guides were analyzed. Each color sample from all shade guides was measured three times and the basic parameters of color quality were examined: ΔL, ΔC, ΔH, ΔE, ΔElc. Based on these parameters the spectrophotometer marks the shade matching as good, fair or adjust. Results. After performing 1,248 measurements of ceramic color samples, the frequencies of the evaluations adjust, fair and good were statistically significantly different between VITA Classical and VITA 3D-Master shade guides (p = 0.002). There were 27.1% of cases scored as adjust, 66.3% as fair and 6.7% as good. In VITA 3D-Master shade guides 30.9% of cases were evaluated as adjust, 66.4% as fair and 2.7% of cases as good. Conclusion. Color samples from different shade guides, produced by the same manufacturer, show variability in basic color parameters, which once again proves the lack of precision and nonuniformity of the conventional method.
10

Olivka, Petr, Michal Krumnikl, Pavel Moravec, and David Seidl. "Calibration of Short Range 2D Laser Range Finder for 3D SLAM Usage." Journal of Sensors 2016 (2016): 1–13. http://dx.doi.org/10.1155/2016/3715129.

Abstract:
The laser range finder is one of the most essential sensors in the field of robotics. The laser range finder provides an accurate range measurement with high angular resolution. However, the short range scanners require an additional calibration to achieve the abovementioned accuracy. The calibration procedure described in this work provides an estimation of the internal parameters of the laser range finder without requiring any special three-dimensional targets. This work presents the use of a short range URG-04LX scanner for mapping purposes and describes its calibration. The precision of the calibration was checked in an environment with known ground truth values and the results were statistically evaluated. The benefits of the calibration are also demonstrated in the practical applications involving the segmentation of the environment. The proposed calibration method is complex and detects all major manufacturing inaccuracies. The procedure is suitable for easy integration into the current manufacturing process.
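
A ground-truth check of this kind usually comes down to residual statistics (mean error, standard deviation, RMSE). A minimal sketch with hypothetical distances, not the paper's measurements:

```python
import numpy as np

measured = np.array([1.012, 2.021, 3.028, 4.047])  # m, hypothetical scanner output
truth    = np.array([1.000, 2.000, 3.000, 4.000])  # m, surveyed ground truth

err = measured - truth
print(f"mean error {err.mean()*1e3:.1f} mm, "
      f"st.dev. {err.std(ddof=1)*1e3:.1f} mm, "
      f"RMSE {np.sqrt((err**2).mean())*1e3:.1f} mm")
```
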

Dissertations / Theses on the topic "Statistically evaluates parameters of precision"

1

Tichý, Štěpán. "Technologie drátové elektroeroze" [Wire electrical discharge machining technology]. Master's thesis, Vysoké učení technické v Brně, Fakulta strojního inženýrství, 2015. http://www.nusl.cz/ntk/nusl-231987.

Abstract:
This master's thesis deals with the technology of wire electrical discharge machining at both theoretical and practical levels. The theoretical part explains in detail the principle of electrical discharge machining, describes the functional parts and settings of current wire EDM machines, and outlines the possibilities of using the method for the production of specific parts. The practical part addresses the manufacturing of gearing on a pinion produced with an EXCETEK V650 wire cutter and statistically evaluates precision parameters on the surfaces of carriers machined under specific technological conditions on the same machine.
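
Statistical evaluation of machined-surface precision is often expressed through process capability indices. The sketch below uses invented measurements and an assumed tolerance, purely to illustrate the computation (the thesis's actual parameters are not reproduced here):

```python
import numpy as np

# Hypothetical widths (mm) of wire-EDM-cut carrier surfaces
x = np.array([10.003, 9.998, 10.005, 10.001, 9.996, 10.004, 10.000, 9.999])
usl, lsl = 10.010, 9.990          # assumed drawing tolerance limits

mu, s = x.mean(), x.std(ddof=1)
cp  = (usl - lsl) / (6 * s)               # potential capability
cpk = min(usl - mu, mu - lsl) / (3 * s)   # capability including centring
print(f"Cp = {cp:.2f}, Cpk = {cpk:.2f}")
```
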

Book chapters on the topic "Statistically evaluates parameters of precision"

1

"Biology and Management of Dogfish Sharks." In Biology and Management of Dogfish Sharks, edited by Joel S. Rice, Vincent F. Gallucci, and Gordon H. Kruse. American Fisheries Society, 2009. http://dx.doi.org/10.47886/9781934874073.ch15.

Abstract:
Abstract.—Effective management of dogfish fisheries requires knowledge of both their life history and population dynamics. Age data contribute to this knowledge and provide critical information for the estimation of stock–recruitment relationships, life-history parameters (e.g., mortality and growth rates), and lifetime reproductive potential. The estimation of age based on the second dorsal spine is subject to a variety of errors, including reader bias in the interpretation of which marks constitute annuli and how the structure is prepared. We estimated the precision of spiny dogfish Squalus acanthias ages by comparing age estimates made by four independent laboratories on a reference collection of 100 spines from spiny dogfish from Washington State waters. Personnel at each laboratory had been trained in the same manner and followed the same aging methodology. Age estimates were compared among laboratories and against the calculated median age estimation. Systematic differences in estimates of dogfish age among laboratories were found by graphical and statistical analysis. A coefficient of variation of 19% represents the overall precision of age readings for spiny dogfish based on this study. This level of precision was associated with statistically different laboratory-specific growth curves; however, this relative bias did not always result in a statistical difference between parameters derived from age–length relationships. Based on our results, an inter-laboratory study should be conducted to compare and resolve aging criteria before future, large-scale aging or population studies of dogfish are carried out.
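
The 19% figure is an ageing-precision coefficient of variation: a per-spine CV across readers, averaged over the collection. On hypothetical readings the computation looks like this:

```python
import numpy as np

# Hypothetical age readings: rows = dorsal spines, columns = four laboratories
ages = np.array([
    [12, 13, 11, 14],
    [25, 28, 24, 27],
    [ 8,  9,  8, 10],
    [18, 20, 17, 21],
], dtype=float)

cv_per_spine = 100.0 * ages.std(axis=1, ddof=1) / ages.mean(axis=1)
print(f"mean ageing CV = {cv_per_spine.mean():.1f}%")
```
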

Conference papers on the topic "Statistically evaluates parameters of precision"

1

Derbanne, Quentin, Jean-François Leguen, Thierry Dupau, and Etienne Hamel. "Long-Term Non-Linear Bending Moment Prediction." In ASME 2008 27th International Conference on Offshore Mechanics and Arctic Engineering. ASMEDC, 2008. http://dx.doi.org/10.1115/omae2008-57326.

Abstract:
Long-term analysis is increasingly used to establish design loads by performing direct load evaluation. The long-term distribution of wave loads acting on a ship depends on the short-term contributions of the response in all the wave conditions the ship encounters in her life: sea state, relative heading, speed, load case... For each short-term condition the statistical parameters that describe the response are considered to be constant. Therefore a long-term analysis needs a correct evaluation of the short-term parameters that characterise the short-term response. The Weibull distribution is often used to model the extreme response on a given sea state, so the precision of the long-term analysis depends directly on the precision of the Weibull parameters. The first part of this paper studies the influence of the simulation parameters (number of wave components, simulation time) and of the different methods used to fit a Weibull distribution to the bending moment extremes, on the precision of the Weibull parameters and on the extreme values. Every choice of parameter used for the final calculations is justified. The conclusion is that, by using a correct fitting method and provided that there are at least 128 wave components, the overall precision depends only on the simulation time: the precision on the 10⁻⁵ extreme value is ±6.4% with 400 extremes, and ±1.9% with 3200 extremes. In order to increase the precision of the evaluation of the Weibull parameters over the entire scatter diagram, without increasing the simulation time, a smoothing method is proposed, based on a polynomial smoothing of the A1/3 and A1/10 values obtained from linear and non-linear calculations on the same wave signal, and on the method of moments. This method leads to an increase in precision of about 3 times, which is equivalent to increasing the simulation time by a factor of 8 or 9. The second part of this paper presents the results of the long-term analysis carried out on 14 ships (ferries, container vessels, naval ships...), using non-linear sea-keeping time-domain software. Calculations have been done without forward speed in head waves and for all the sea states of the IACS scatter diagram (more than 200 sea states). The smoothing method has been used to compute all the Weibull coefficients. Results show that it is possible to model the non-linear effects by applying a non-linear coefficient to the linear bending moment for one speed, one scatter diagram and one extreme value probability. But this coefficient cannot be applied, and must be recalculated, if other cases are needed (other speeds, other scatter diagrams, relative heading distributions or other extreme value probabilities). All ships are compared in the same graph in order to evaluate the influence of the design hull form (such as overall length and bow flare) on the non-linear long-term bending moment values (in hogging and in sagging). The calculations were focused on the case of a particular frigate, where more parameters were studied, such as forward speed, operational profile (in speed and relative headings) and scatter diagram choice. In the third part, results from model tests performed on an eight-segmented model of the frigate are compared to the short-term results computed by the sea-keeping software. This frigate has been monitored for three years, and the strain measurements at sea are compared to the numerical long-term analysis.
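
The statistical core described here — fitting a Weibull distribution to short-term bending-moment extremes and reading off a rare-exceedance value — can be sketched as follows. The data are synthetic and the fitting choices (fixed zero location, 400 extremes) are illustrative assumptions, not the paper's setup:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Synthetic stand-in for 400 bending-moment extremes from one sea state
extremes = stats.weibull_min.rvs(c=1.4, scale=1.0e6, size=400, random_state=rng)

c, loc, scale = stats.weibull_min.fit(extremes, floc=0.0)  # fit shape and scale
x_rare = stats.weibull_min.ppf(1.0 - 1e-5, c, loc, scale)  # 1e-5 exceedance level
print(f"shape = {c:.2f}, value at 1e-5 exceedance probability: {x_rare:.3e}")
```
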
2

Trotta, Gianluca, Vincenzo Bellantone, Rossella Surace, and Irene Fassi. "Effects of Process Parameters on the Properties of Replicated Polymeric Parts." In ASME 2012 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2012. http://dx.doi.org/10.1115/detc2012-71049.

Abstract:
The increasing demand for small and even micro-scale parts is boosting the development of reliable micro-system technologies. Micro-fabrication process capabilities should expand to encompass a wider range of materials and geometric forms, by defining processes and related process chains that can satisfy the specific functional and technical requirements of new emerging multi-material products, and ensure the compatibility of materials and processing technologies throughout these manufacturing chains. Micro injection moulding is the process of transferring the micron or even submicron precision of microstructured metallic moulds to polymeric products. It represents one of the key technologies for micro manufacturing because of its mass-production capability and relatively low production cost. Polymers have relatively low cost and offer good mechanical and thermal strength, electrical insulation, optical transparency, chemical stability and biocompatibility. In this work the authors investigate the effects of micro injection moulding process parameters on the overall quality of a miniaturized dog-bone-shaped specimen. The aim of the experimentation is to calibrate the process and set the machine for the correct filling of the component. A set of injection parameters is selected for study by experimental plan and simulation tool and then discussed. Simulation results are used to better understand the polymer flow behaviour during the filling phase. Commercial software is used, and input data collected during the micro injection moulding process are included, using flow front position and moulded mass as performance indicators. Process simulation can provide, at the present time, mostly qualitative input to the designer and process engineer. Two different polymer materials are tested and evaluated in relation to the process replication capability: polyoxymethylene (POM) and liquid crystal polymer (LCP). Finally, the moulding factors with significant statistical effects are identified. The holding pressure and holding time for POM, and the holding pressure and injection velocity for LCP, show the highest influence on achieving high part mass.
3

Bicknell, Gregory, and Guha Manogharan. "A Comparison of the Effects of Wire Electrical Discharge Machining Parameters on the Processing of Traditionally Manufactured and Additively Manufactured 316L Stainless Steel Specimens." In ASME 2018 International Mechanical Engineering Congress and Exposition. American Society of Mechanical Engineers, 2018. http://dx.doi.org/10.1115/imece2018-88014.

Abstract:
Wire electrical discharge machining (EDM) is a non-traditional machining method that has the ability to machine hard, conductive materials with no cutting force and high precision. This technology is used in industries like aerospace to create precision parts for high-stress applications. Wire EDM is also commonly used in additive manufacturing (AM) applications to remove printed parts from the base plates onto which they are printed. Numerous studies show the effects of EDM parameters, like pulse-on time, pulse-off time, and cutting voltage, on the processing of traditionally fabricated metal parts. However, very few studies identify how the parameters of wire EDM affect the processing of AM parts. This paper studies the effect of wire EDM pulse-on time, pulse-off time, and cutting voltage on the machining time, surface roughness, and hardness of additively manufactured 316L stainless steel cylinders. The effects of these wire EDM parameters are then tested on the machining time, surface roughness, and hardness of wrought 316L stainless steel cylinders. It was found that the machining time of AM samples was statistically significantly lower than that of wrought samples; the AM samples also had better surface finish and lower surface hardness.
4

Kamel, Ahmed H., Ali S. Shaqlaih, and Essam A. Ibrahim. "Statistical Significance and Hypothesis for Friction Factor Correlations of Non-Newtonian Crude Oils in Pipelines." In ASME 2015 International Mechanical Engineering Congress and Exposition. American Society of Mechanical Engineers, 2015. http://dx.doi.org/10.1115/imece2015-50017.

Abstract:
In pipelines, non-Newtonian fluids are generally pumped under turbulent flow conditions where frictional pressure losses are required for hydraulic design. The friction factor is a crucial parameter in calculating frictional pressure losses. However, determination of the friction factor is a decisive challenge, especially for turbulent flow of non-Newtonian fluids. This is mainly due to the large number of friction factor equations and the precision of each. The main objective of the present paper is to evaluate the published friction factor correlations for non-Newtonian fluids over a wide range of friction factor data to select the most accurate one. An analytical comparative study adopting the recently introduced Akaike information criterion (AIC) and the traditional coefficient of determination (R²) is conducted. Data reported by several researchers are used individually and collectively. The results show that each model exhibits accuracy when examined with a specific data set, while the El-Emam et al. model proves its superiority to other models when examining the data mutually. In addition to its simple and explicit form, it covers a wide range of flow behavior indices and generalized Reynolds numbers. It is also shown that the traditional belief that a higher R² corresponds to better models may be misleading. AIC overcomes the shortcomings of R² as it employs the parsimonious principle to trade between the complexity of the model and its accuracy, not only to find the best approximating model but also to develop statistical inference based on the data. Although it has not yet been used in the oil and gas industry, the authors present the AIC to initiate an innovative strategy that has been demonstrated in other disciplines to help alleviate several challenges faced by professionals in the oil and gas industry. Finally, a detailed discussion and the models' ranking according to AIC and R² are presented, showing the numerous advantages of AIC.
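
The paper's central point — that AIC penalizes parameter count while R² only rewards goodness of fit — is easy to reproduce. The least-squares form of AIC below is standard; the friction-factor numbers and both candidate fits are invented:

```python
import numpy as np

def aic_and_r2(y, y_hat, k):
    """AIC (least-squares form: n*ln(RSS/n) + 2k) and coefficient of determination."""
    y, y_hat = np.asarray(y), np.asarray(y_hat)
    rss = np.sum((y - y_hat) ** 2)
    tss = np.sum((y - y.mean()) ** 2)
    return y.size * np.log(rss / y.size) + 2 * k, 1.0 - rss / tss

y     = np.array([0.0042, 0.0051, 0.0060, 0.0072, 0.0080])       # observed f
fit_a = np.array([0.0043, 0.0050, 0.0061, 0.0071, 0.0081])       # 2-parameter model
fit_b = np.array([0.00429, 0.00501, 0.00609, 0.00711, 0.00809])  # 5-parameter model
for name, f, k in [("A", fit_a, 2), ("B", fit_b, 5)]:
    aic, r2 = aic_and_r2(y, f, k)
    print(name, f"AIC = {aic:.1f}", f"R2 = {r2:.4f}")
# Model B has the higher R2, yet model A wins on AIC (lower is better).
```
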
5

Bartkowiak, Tomasz, and Roman Staniek. "Application of Order Statistics in the Evaluation of Flatness Error: Sampling Problem." In ASME 2017 International Mechanical Engineering Congress and Exposition. American Society of Mechanical Engineers, 2017. http://dx.doi.org/10.1115/imece2017-71295.

Abstract:
The main purpose of this initial paper is to demonstrate the application of order statistics in the estimation of form error from a CMM measurement. Nowadays, modern industry sets high standards for geometrical precision, surface texture and material properties. There are many parameters that can characterize a mechanical part, of which flatness error plays an important role in the assembly process and performance. Recently, due to greater availability and price reduction, coordinate measurement techniques have increased their popularity in industry for on-line and off-line measurements, as they allow automated measurements at a relatively low uncertainty level. Data obtained from CMM measurements have to be processed and analyzed in order to evaluate component compliance with the required technical specification. The article presents an analysis of minimal sample selection for the evaluation of flatness error by means of coordinate measurement. In the paper, a statistical approach is presented, assuming that, in a repetitive manufacturing process, the distribution of deviations between surface points and the reference plane is stable. Based on the known statistical distribution, an order statistics theorem was implemented to determine the statistics of the maximal and minimal point deviations, as they play a dominant role in flatness error estimation. A brief analysis of normally distributed deviations is given in the paper. Moreover, a case study is presented for a set of machined parts which were components of a machine tool's mechanical structure. Empirical distributions were derived and minimal sample sizes were estimated for the given confidence levels using the proposed theorem. The estimation errors of flatness values for the derived sample sizes are analyzed and discussed in the paper.
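
The sampling question posed here — how many CMM points are needed before the sampled max–min deviation reliably captures the flatness error — can also be explored by Monte Carlo, assuming normally distributed deviations. The σ, grid density and 95% recovery criterion below are arbitrary choices, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(42)
sigma, n_dense, trials = 2.0, 2000, 2000  # um spread; dense grid; MC repetitions

for n in (25, 50, 100, 200):
    hits = 0
    for _ in range(trials):
        dev = rng.normal(0.0, sigma, n_dense)        # "true" surface deviations
        sample = rng.choice(dev, size=n, replace=False)
        # does the n-point sample recover >= 95% of the dense flatness error?
        hits += np.ptp(sample) >= 0.95 * np.ptp(dev)
    print(f"n = {n:3d}: {hits / trials:.1%}")
```
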
6

Franco, Fermin, and Yasuhide Fukumoto. "Mathematical models for turbulent round jets based on “ideal” and “lossy” conservation of mass and energy." In ILASS2017 - 28th European Conference on Liquid Atomization and Spray Systems. Valencia: Universitat Politècnica València, 2017. http://dx.doi.org/10.4995/ilass2017.2017.4778.

Abstract:
We propose mathematical models for turbulent round atomized liquid jets that describe their dynamics in a simple but comprehensive manner, with the apex angle of the cone being the main disposable parameter. The basic assumptions are that (i) the jet is statistically stationary and that (ii) it can be approximated by a mixture of two fluids with the phases in local dynamic equilibrium, or so-called locally homogeneous flow (LHF). The models differ in their particular balance of explanatory capability and precision. To derive them we impose partial conservation of the initial mass and energy fluxes, introducing loss factors again as disposable parameters. Depending on each model, the equations admit explicit or implicit analytical solutions or a numerical solution in the discretized model case. The described variables are the two-phase fluid's composite density and velocity, both as functions of the distance from the nozzle, from which the dynamic pressure is calculated.
7

Lautala, Pasi, and Hamed Pouryousef. "Sensitivity Analysis of Track Maintenance Strategies for the High Speed Rail (HSR) Services." In 2011 Joint Rail Conference. ASMEDC, 2011. http://dx.doi.org/10.1115/jrc2011-56112.

Abstract:
Track Maintenance (TM) is one of the critical parts of rail operations and asset management. It has been estimated that about 25–35% of all operational costs are related to track maintenance performance, which can typically be classified as either corrective maintenance (CM) or preventive maintenance (PM). The former is a reactive approach, where maintenance is conducted when inspections have revealed a need for action. The latter is a strategic approach that is mainly applied through maintenance planning. High Speed Rail (HSR) systems, especially shared HSR corridors, can complicate track maintenance conditions due to the tight tolerances and the precision, reliability and safety required by HSR. This paper evaluates strategic approaches for track maintenance planning along selected HSR corridors with either shared or dedicated operation patterns. The paper uses analytical and descriptive parameter tables to evaluate how sensitive a corridor is to changes in essential criteria for developing track maintenance strategies (TMS). These criteria may include: HSR traffic conditions and operations regime; the TMS approach (or strategy) on current rail lines connected to the new HSR line; and the Operation and Maintenance (O&M) structure of the HSR management system. The descriptive parameter tables have been used by a specific PM modeling approach called the preventive maintenance scheduling problem (PMSP). Before running the PMSP model over a designated line, the respective parameters of the model should be calibrated and analyzed based on the line specifications. The descriptive parameter tables can be used during the calibration procedure to assist in analyzing the sensitivity of the model's parameters and variables to the above-mentioned criteria. This paper discusses and compares TMS approaches on three planned HSR corridors in Europe (Lisbon–Madrid HSR), Asia (Tehran–Isfahan HSR) and the USA (California HSR). All three HSR corridors are under development, but each presents specific sensitivities to the given PM model's parameters that can affect track maintenance strategic planning along these corridors. We concluded that TMS model calibration using these descriptive analytical tables can assist maintenance strategic planners in identifying different TMS approaches when dealing with maintenance contractors, HSR operators and public rail authorities.
8

Keldenich, Magali, Anthony Barbier, Marie-Christine Grouhel, and Céline Pignoly. "Hot Rod Statistical Method for Large Break Loss of Coolant Accident With CATHARE." In 16th International Conference on Nuclear Engineering. ASMEDC, 2008. http://dx.doi.org/10.1115/icone16-48591.

Abstract:
For nearly 30 years, Areva NP has contributed, with CEA, EDF and IRSN, to the development of CATHARE, one of the most advanced computer codes in the field of thermal-hydraulic calculations on the system-wide scale, with the objective of having at its disposal a high-performance tool as the basis for the development of best-estimate methods for design and safety studies. At the same time, Areva NP has conceived with EDF a generic safety analysis method called the Deterministic Realistic Method (DRM), which has been applied to elaborate and license ECCS evaluation models on the basis of the current CATHARE 2 V1.3L code version. DRM has then been extended by Areva NP with the Hot Rod Statistical Method (HRSM), which adds a statistical treatment of the hot-rod-related uncertainties to the deterministic reactor system thermal-hydraulic calculation and can be used with other LB LOCA evaluation models. With the HRSM, the value of the peak cladding temperature envelope with a 95% probability level is determined based on direct Monte Carlo calculations statistically combining three key hot rod parameters randomly varied according to their probability distributions. The final value of the peak cladding temperature is then calculated taking into account an uncertainty calculated using the bootstrap method. The HRSM is a method, based on CATHARE, that is easy to implement as a complement to a LB LOCA evaluation model. The large number of direct calculations performed ensures a very high degree of precision when evaluating the peak cladding temperature and the equivalent cladding reacted. The HRSM is currently used and licensed in France for LB LOCA transients, and is now being extended to SB LOCA transients using CATHARE 2 v2.5. An illustrative comparison of HRSM versus DRM typical results for Areva NP 3-loop and 4-loop plants then underlines the significant margins provided by this method.
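
The HRSM's statistical mechanics — direct Monte Carlo over three hot-rod parameters, a 95th-percentile peak cladding temperature, and a bootstrap uncertainty on that percentile — can be imitated on a toy response surface. Every distribution and the PCT formula below are invented solely to show the procedure:

```python
import numpy as np

rng = np.random.default_rng(7)
N = 20_000  # direct Monte Carlo trials

# Three hypothetical hot-rod parameters (not the licensed model's distributions)
peaking = rng.normal(1.00, 0.03, N)
gap_conductance = rng.lognormal(0.0, 0.10, N)
oxide_factor = rng.uniform(0.95, 1.05, N)

# Toy stand-in for the peak cladding temperature response (deg C)
pct = 900.0 * peaking * gap_conductance**0.3 * oxide_factor
q95 = np.quantile(pct, 0.95)

# Bootstrap the sampling uncertainty of the 95th percentile
boot = [np.quantile(rng.choice(pct, N, replace=True), 0.95) for _ in range(500)]
print(f"PCT(95%) = {q95:.1f} degC (+/- {2 * np.std(boot):.1f} degC, ~2-sigma bootstrap)")
```
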
9

Phan, Andrew M., and John P. Parmigiani. "A Device for Performing Controlled Cutting Operations." In ASME 2012 International Mechanical Engineering Congress and Exposition. American Society of Mechanical Engineers, 2012. http://dx.doi.org/10.1115/imece2012-87964.

Abstract:
Cutting operations using blades appear in several different industries such as food processing, surgical operations, gardening equipment, and so forth. Many practitioners of cutting operations will notice that it is easier to cut something by pressing and slicing at the same time than by doing each motion individually. They will also notice that certain angles or certain blade geometries make it easier to cut certain materials. As society continues to increase its technological prowess, there is an ongoing need to better understand the underlying causes of simple tasks such as cutting so that cutting operations can be performed with more precision and accuracy than ever before. For many applications it is not possible to achieve the optimum cutting force, cutting angle, and push-to-slice ratio, and a compromise must be made in order to ensure the functionality of a cutting device. A means of objectively and efficiently evaluating cutting media is needed in order to determine the optimum parameters, such as cutting force, cutting angle, and push-to-slice ratio, for certain applications. The approach taken in this work is to create a testing apparatus that uses standard cutting media and performs controlled cutting operations to determine key parameters of specific cutting operations. Most devices used for performing experimental controlled cutting operations are limited to a single axis of motion, thus not incorporating the effect of the push-to-slice ratio. The device created and discussed in this paper is capable of performing controlled cutting operations with three axes of motion. It is capable of accurately controlling the depth of cut, push-to-slice ratio, and angle of cut in order to accurately capture motions seen in typical cutting operations. Each degree of freedom on the device is capable of withstanding up to 1550 N of cutting force while still maintaining smooth motion. The device is capable of controlling the velocity of the push and slice motions up to 34 mm/s. Depth of cut for both pushing and slicing, the reaction forces, and the angle of cut are all controlled and measured in real time so that a correlation can be made between them. Data collected by this device will be used to investigate the effects of the push-to-slice ratio and angle of cut on cutting force and the overall quality of cutting operations. Preliminary testing on wood samples evaluates the effectiveness of the device in collecting cutting data. This device will also be used to validate several finite element analyses used in investigating cutting mechanics.
10

Busollo, Carlo, Stefano Mauro, Andrea Nesci, Leonardo Sabatino Scimmi, and Emanuele Baronio. "Development of a Digital Twin for Well Integrity Management in Underground Gas Storage Fields." In SPE Annual Technical Conference and Exhibition. SPE, 2021. http://dx.doi.org/10.2118/206252-ms.

Abstract:
Objective: Digitalization offers several opportunities to improve the performance and reliability of Underground Gas Storage (UGS) infrastructures, especially at sites where ageing would require increased investment in maintenance and monitoring. In that context, well integrity management can benefit from the implementation of a well digital twin integrated with real-time monitoring. The work proposes a digital model of the well that provides a valuable tool to analyse its non-stationary states in order to evaluate the integrity of the barriers and its health state. Methods, Procedures, Process: The key points of well integrity management are barrier testing/qualification and annular pressure monitoring, and in UGS operations the selection of the timing of barrier assessment and diagnostic test execution is crucial to correctly evaluating the results. The digital model provides a tool to help the well engineer understand the health state of the well and to plan maintenance activities. It considers a physical model of the well composed of gas- and liquid-filled chambers in the annuluses and in the tubing, and all the potential leak paths that could connect the annuluses, the tubing, and the reservoir to the external environment. Each chamber is modelled considering its mass and energy balance, while fluid resistances describe fluid leakage across the barriers. Appropriate models, selected according to the geometry and type of each well barrier, describe each fluid resistance. The input parameters are the well architecture, flowing tubing temperature and pressure, and gas flow rate. The model provides pressure and temperature trends and estimates of leak rate trends or annular liquid level movements during the observation time window. The fine-tuning of the model of each well is carried out by seeking, with a genetic algorithm, the values of the parameters that best describe each single leak path, such as the size and position of the leaking point. Results, Observations, Conclusions: The model has been customised and validated on several wells, some with perfect integrity status and others with integrity issues. Results showed a very good fit with field data, as well as high precision in identifying leak position and size. The tool can also be applied to forecast well behaviour after the application of mitigating actions or to simulate the evolution of a leak. Example applications are the evaluation of the correct time to top up a casing with liquid or nitrogen, or the effect on annular pressure of limiting the withdrawal or injection flow rate.
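
To give a flavour of the chamber-and-resistance modelling described here, the sketch below integrates the pressure build-up of a single sealed gas annulus fed through one leak path. The linear-resistance law and every numerical value are assumptions for illustration, not the digital twin's actual formulation:

```python
import numpy as np

V, T, R_gas = 8.0, 310.0, 518.3   # annulus volume (m^3), temp (K), CH4 gas constant
C_leak = 2.0e-12                  # assumed linear leak conductance, kg/(s*Pa)
p_src, p = 15.0e6, 2.0e6          # source pressure and initial annulus pressure, Pa

dt, t_end = 3600.0, 90 * 24 * 3600.0   # hourly steps over 90 days
for _ in np.arange(0.0, t_end, dt):
    mdot = C_leak * (p_src - p)        # leak mass rate across the barrier, kg/s
    p += mdot * R_gas * T * dt / V     # isothermal ideal-gas pressure update
print(f"annulus pressure after 90 days: {p / 1e6:.2f} MPa")
```
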
