
Dissertations / Theses on the topic 'Quality control – Statistical methods'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Quality control – Statistical methods.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

Barr, Tina Jordan. "Performance of quality control procedures when monitoring correlated processes." Diss., Georgia Institute of Technology, 1993. http://hdl.handle.net/1853/25497.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Ritchie, Paul Andrew 1960. "A systematic, experimental methodology for design optimization." Thesis, The University of Arizona, 1988. http://hdl.handle.net/10150/276698.

Full text
Abstract:
Much attention has been directed at off-line quality control techniques in recent literature. This study is a refinement of and an enhancement to one technique, the Taguchi Method, for determining the optimum setting of design parameters in a product or process. In place of the signal-to-noise ratio, the mean square error (MSE) for each quality characteristic of interest is used. Polynomial models describing mean response and variance are fit to the observed data using statistical methods. The settings for the design parameters are determined by minimizing a statistical model. The model uses a multicriterion objective consisting of the MSE for each quality characteristic of interest. Minimum bias central composite designs are used during the data collection step to determine the settings of the parameters where observations are to be taken. Included is the development of minimum bias designs for various cases. A detailed example is given.
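The MSE-based idea above is easy to prototype. Below is a minimal Python sketch in which simulated replicates and quadratic models stand in for the thesis's minimum bias designs and multicriterion objective: polynomial models are fit to the observed mean and variance, and the design parameter is set by minimizing (mean - target)^2 + variance. All data and model orders are illustrative assumptions.

```python
# Sketch: MSE-based alternative to Taguchi's signal-to-noise ratio.
# Fit polynomial models for the mean and variance of a quality
# characteristic over one design parameter, then minimize
# MSE(x) = (mean(x) - target)^2 + variance(x).
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)
levels = np.linspace(-1.0, 1.0, 15)        # candidate settings of the design parameter
target = 10.0                              # nominal value of the quality characteristic
# Simulated experiment: 8 replicates per setting; mean and spread both
# depend on the setting (stand-ins for real observations).
obs = np.array([10 + 2*v + 3*v**2 + rng.normal(0, 0.5 + 0.8*(v + 1), 8)
                for v in levels])

mean_coef = np.polyfit(levels, obs.mean(axis=1), 2)        # model of the mean response
var_coef = np.polyfit(levels, obs.var(axis=1, ddof=1), 2)  # model of the variance

def mse(v):
    bias2 = (np.polyval(mean_coef, v) - target) ** 2
    return bias2 + max(np.polyval(var_coef, v), 0.0)       # clamp negative variance fits

best = minimize_scalar(mse, bounds=(-1, 1), method="bounded")
print(f"setting x* = {best.x:.3f}, estimated MSE = {best.fun:.3f}")
```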
APA, Harvard, Vancouver, ISO, and other styles
3

Jamnarnwej, Panisuan. "Methods for detection of small process shifts." Diss., Georgia Institute of Technology, 1986. http://hdl.handle.net/1853/24518.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Ghebretensae, Manna Zerai. "A unified approach to the economic aspects of statistical quality control and improvement." Thesis, Stellenbosch : Stellenbosch University, 2004. http://hdl.handle.net/10019.1/49865.

Full text
Abstract:
Assignment (MSc)--Stellenbosch University, 2004.
ENGLISH ABSTRACT: The design of control charts refers to the selection of the parameters implied, including the sample size n, the control limit width parameter k, and the sampling interval h. The design of the X̄-control chart based on economic as well as statistical considerations is presently one of the more popular subjects of research. Two assumptions are considered in the development and use of the economic or economic statistical models. These assumptions are potentially critical. It is assumed that the time between process shifts can be modelled by means of the exponential distribution. It is further assumed that there is only one assignable cause. Based on these assumptions, economic or economic statistical models are derived using a total cost function per unit time as proposed by a unified approach of the Lorenzen and Vance model (1986). In this approach the relationship between the three control chart parameters as well as the three types of costs are expressed in the total cost function. The optimal parameters are usually obtained by the minimization of the expected total cost per unit time. Nevertheless, few practitioners have tried to optimize the design of their X̄-control charts. One reason for this is that the cost models and their associated optimization techniques are often too complex and difficult for practitioners to understand and apply. However, a user-friendly Excel program has been developed in this paper and the numerical examples illustrated are executed on this program. The optimization procedure is easy to use, easy to understand, and easy to access. Moreover, the proposed procedure also obtains exact optimal design values, in contrast to the approximate designs developed by Duncan (1956) and other subsequent researchers. Numerical examples are presented of both the economic and the economic statistical designs of the X̄-control chart in order to illustrate the working of the proposed Excel optimization procedure. Based on the Excel optimization procedure, the results of the economic statistical design are compared to those of a pure economic model. It is shown that the economic statistical designs lead to wider control limits and smaller sampling intervals than the economic designs. Furthermore, even if they are more costly than the economic designs, they do guarantee output of better quality while keeping the number of false-alarm searches at a minimum. They also lead to low process variability. These properties are the direct result of the requirement that the economic statistical design must assure a satisfactory statistical performance. Additionally, extensive sensitivity studies are performed on the economic and economic statistical designs to investigate the effect of the input parameters and the effects of varying the bounds on α, 1-β, the average time to signal ATS, as well as the expected shift size δ, on the minimum expected cost loss and the three control chart decision variables. The analyses show that cost is relatively insensitive to improvement in the type I and type II error rates, but highly sensitive to changes in smaller bounds on ATS, and extremely sensitive to smaller shift levels δ. Note: expressions like economic design, economic statistical design, loss cost and assignable cause may seem linguistically and syntactically strange, but are borrowed from and used according to the known literature on the subject.
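For a flavour of how such a design is optimized, here is a hedged Python sketch. It minimizes a deliberately simplified Duncan-style hourly cost over (n, k, h) under statistical constraints on the false-alarm rate and detection power; the cost constants, shift size, and constraint bounds are illustrative assumptions, not the Lorenzen and Vance (1986) cost function used in the thesis.

```python
# Economic-statistical design sketch for an X-bar chart: grid search over
# sample size n, limit width k, and sampling interval h, minimizing a
# simplified hourly cost subject to alpha/power constraints.
import numpy as np
from scipy.stats import norm

lam, delta = 0.02, 1.5                  # shifts per hour; shift size in sigma units (assumed)
a1, a2, a3, a4 = 1.0, 0.2, 50.0, 100.0  # sampling, false-alarm, off-target cost constants

best = None
for n in range(1, 16):
    for k in np.arange(2.0, 3.6, 0.05):
        alpha = 2 * norm.sf(k)                               # false-alarm prob. per sample
        power = norm.sf(k - delta*np.sqrt(n)) + norm.cdf(-k - delta*np.sqrt(n))
        if alpha > 0.005 or power < 0.90:                    # statistical-design constraints
            continue
        for h in np.arange(0.25, 8.25, 0.25):
            delay = h / power - h / 2                        # approx. mean time to signal
            cost = (a1 + a2*n)/h + a3*alpha/h + a4*lam*delay
            if best is None or cost < best[0]:
                best = (cost, n, k, h)

cost, n, k, h = best
print(f"n={n}, k={k:.2f}, h={h:.2f} hours, cost≈{cost:.2f}/hour")
```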
APA, Harvard, Vancouver, ISO, and other styles
5

Assareh, Hassan. "Bayesian hierarchical models in statistical quality control methods to improve healthcare in hospitals." Thesis, Queensland University of Technology, 2012. https://eprints.qut.edu.au/53342/1/Hassan_Assareh_Thesis.pdf.

Full text
Abstract:
Quality-oriented management systems and methods have become the dominant business and governance paradigm. From this perspective, satisfying customers' expectations by supplying reliable, good quality products and services is the key factor for an organization and even a government. During recent decades, Statistical Quality Control (SQC) methods have been developed as the technical core of quality management and the continuous improvement philosophy, and are now being applied widely to improve the quality of products and services in industrial and business sectors. Recently SQC tools, in particular quality control charts, have been used in healthcare surveillance. In some cases, these tools have been modified and developed to better suit the characteristics and needs of the health sector. It seems that some of the work in the healthcare area has evolved independently of the development of industrial statistical process control methods. Therefore analysing and comparing paradigms and the characteristics of quality control charts and techniques across the different sectors presents opportunities for transferring knowledge and future development in each sector. Meanwhile, the capabilities of the Bayesian approach, particularly Bayesian hierarchical models and computational techniques in which all uncertainty is expressed as a structure of probability, facilitate decision making and cost-effectiveness analyses. Therefore, this research investigates the use of the quality improvement cycle in a health setting using clinical data from a hospital. The need for clinical data for monitoring purposes is investigated in two respects. A framework and appropriate tools from the industrial context are proposed and applied to evaluate and improve data quality in available datasets and data flow; then a data capturing algorithm using Bayesian decision-making methods is developed to determine an economical sample size for statistical analyses within the quality improvement cycle. After ensuring clinical data quality, some characteristics of control charts in the health context, including the necessity of monitoring attribute data and correlated quality characteristics, are considered. To this end, multivariate control charts from an industrial context are adapted to monitor radiation delivered to patients undergoing diagnostic coronary angiograms, and various risk-adjusted control charts are constructed and investigated for monitoring binary outcomes of clinical interventions as well as post-intervention survival time. Meanwhile, adoption of a Bayesian approach is proposed as a new framework for estimating the change point following a control chart's signal. This estimate aims to facilitate root-cause efforts in the quality improvement cycle, since it narrows the search for the potential causes of detected changes to a tighter time frame prior to the signal. This approach enables us to obtain highly informative estimates for change point parameters, since probability-distribution-based results are obtained. Using Bayesian hierarchical models and Markov chain Monte Carlo computational methods, Bayesian estimators of the time and the magnitude of various change scenarios, including step changes, linear trends and multiple changes in a Poisson process, are developed and investigated.
The benefits of change point investigation are revisited and promoted in monitoring hospital outcomes, where the developed Bayesian estimator reports the true time of the shifts, compared with a priori known causes, detected by control charts in monitoring the rate of excess usage of blood products and major adverse events during and after cardiac surgery in a local hospital. The development of the Bayesian change point estimators is then pursued in healthcare surveillance for processes in which pre-intervention characteristics of patients affect the outcomes. In this setting, at first, the Bayesian estimator is extended to capture the patient mix, covariates, through the risk models underlying risk-adjusted control charts. Variations of the estimator are developed to estimate the true time of step changes and linear trends in the odds ratio of intensive care unit outcomes in a local hospital. Secondly, the Bayesian estimator is extended to identify the time of a shift in mean survival time after a clinical intervention which is being monitored by risk-adjusted survival time control charts. In this context, the survival time after a clinical intervention is also affected by patient mix, and the survival function is constructed using a survival prediction model. The simulation studies undertaken in each research component and the obtained results highly recommend the developed Bayesian estimators as a strong alternative for change point estimation within the quality improvement cycle, in healthcare surveillance as well as in industrial and business contexts. The superiority of the proposed Bayesian framework and estimators is enhanced when the probability quantification, flexibility and generalizability of the developed models are also considered. The advantages of the Bayesian approach seen in the general context of quality control may also be extended to the industrial and business domains where quality monitoring was initially developed.
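The change-point idea is straightforward to illustrate outside of MCMC. The sketch below computes an exact grid posterior for the time of a step change in simulated Poisson counts, using conjugate Gamma priors on the two rates; the data, priors, and true change point are assumptions for illustration, far simpler than the hierarchical models of the thesis.

```python
# Bayesian change-point estimation for a step change in a Poisson process:
# uniform prior over the change point, Gamma(a, b) priors on the two rates,
# and a closed-form marginal likelihood for each candidate split.
import numpy as np
from scipy.special import gammaln

rng = np.random.default_rng(7)
y = np.concatenate([rng.poisson(4.0, 60), rng.poisson(7.0, 40)])  # true shift after t=60
n, a, b = len(y), 1.0, 0.25

def log_marginal(seg):
    # log of the Poisson likelihood integrated against the Gamma(a, b) prior;
    # factorial terms are omitted since they cancel across candidate splits
    s, m = seg.sum(), len(seg)
    return a*np.log(b) - gammaln(a) + gammaln(a + s) - (a + s)*np.log(b + m)

logpost = np.array([log_marginal(y[:t]) + log_marginal(y[t:])
                    for t in range(1, n)])
post = np.exp(logpost - logpost.max())
post /= post.sum()
tau = np.arange(1, n)
print("posterior mode of the change point:", tau[post.argmax()])
print("posterior mean:", round((tau * post).sum(), 1))
```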
APA, Harvard, Vancouver, ISO, and other styles
6

Ismail, Noor Azina. "Statistical methods for the improvement of health care." Thesis, Queensland University of Technology, 1999.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Murphy, Terrence Edward. "Multivariate Quality Control Using Loss-Scaled Principal Components." Diss., Available online, Georgia Institute of Technology, 2004:, 2004. http://etd.gatech.edu/theses/available/etd-11222004-122326/unrestricted/murphy%5Fterrence%5Fe%5F200412%5Fphd.pdf.

Full text
Abstract:
Thesis (Ph. D.)--Industrial and Systems Engineering, Georgia Institute of Technology, 2005.
Victoria Chen, Committee Co-Chair ; Kwok Tsui, Committee Chair ; Janet Allen, Committee Member ; David Goldsman, Committee Member ; Roshan Vengazhiyil, Committee Member. Vita. Includes bibliographical references.
APA, Harvard, Vancouver, ISO, and other styles
8

Harvey, Martha M. (Martha Mattern). "The Fixed v. Variable Sampling Interval Shewhart X-Bar Control Chart in the Presence of Positively Autocorrelated Data." Thesis, University of North Texas, 1993. https://digital.library.unt.edu/ark:/67531/metadc278763/.

Full text
Abstract:
This study uses simulation to examine differences between fixed sampling interval (FSI) and variable sampling interval (VSI) Shewhart X-bar control charts for processes that produce positively autocorrelated data. The influence of sample size (1 and 5), autocorrelation parameter, shift in process mean, and length of time between samples is investigated by comparing average time (ATS) and average number of samples (ANSS) to produce an out of control signal for FSI and VSI Shewhart X-bar charts. These comparisons are conducted in two ways: control chart limits pre-set at ±3σ_x / √n and limits computed from the sampling process. Proper interpretation of the Shewhart X-bar chart requires the assumption that observations are statistically independent; however, process data are often autocorrelated over time. Results of this study indicate that increasing the time between samples decreases the effect of positive autocorrelation between samples. Thus, with sufficient time between samples the assumption of independence is essentially not violated. Samples of size 5 produce a faster signal than samples of size 1 with both the FSI and VSI Shewhart X-bar chart when positive autocorrelation is present. However, samples of size 5 require the same time when the data are independent, indicating that this effect is a result of autocorrelation. This research determined that the VSI Shewhart X-bar chart signals increasingly faster than the corresponding FSI chart as the shift in the process mean increases. If the process is likely to exhibit a large shift in the mean, then the VSI technique is recommended. But the faster signaling time of the VSI chart is undesirable when the process is operating on target. However, if the control limits are estimated from process samples, results show that when the process is in control the ARL for the FSI and the ANSS for the VSI are approximately the same, and exceed the expected value when the limits are fixed.
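A rough Python simulation in the same spirit is shown below: a standardized AR(1) process is sampled at variable spacing, so that the correlation between successive samples decays as the interval grows, and the average time to signal a one-sigma shift is compared for FSI and VSI rules. The AR parameter, limits, and interval values are illustrative assumptions.

```python
# FSI vs. VSI Shewhart chart on positively autocorrelated AR(1) data,
# comparing average time to signal (ATS) after a mean shift.
import numpy as np

rng = np.random.default_rng(3)
phi, shift = 0.7, 1.0                       # AR(1) coefficient per unit time; shift size
k, w = 3.0, 1.5                             # action limit and VSI warning limit
h_fixed, h_short, h_long = 1.0, 0.25, 1.75  # sampling intervals (time units)

def step(z, h):
    # advance the standardized AR(1) process h time units (stationary form)
    rho = phi ** h
    return rho * z + np.sqrt(1.0 - rho**2) * rng.normal()

def ats(vsi, reps=5000):
    total = 0.0
    for _ in range(reps):
        z, t, h = 0.0, 0.0, h_fixed
        while True:
            z = step(z, h)
            t += h
            x = z + shift                   # observed value after the mean shift
            if abs(x) > k:
                total += t
                break
            if vsi:                         # near the limit -> sample again sooner
                h = h_short if abs(x) > w else h_long
    return total / reps

print("FSI ATS:", round(ats(False), 2))
print("VSI ATS:", round(ats(True), 2))
```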
APA, Harvard, Vancouver, ISO, and other styles
9

Grayson, James M. (James Morris). "Economic Statistical Design of Inverse Gaussian Distribution Control Charts." Thesis, University of North Texas, 1990. https://digital.library.unt.edu/ark:/67531/metadc332397/.

Full text
Abstract:
Statistical quality control (SQC) is one technique companies are using in the development of a Total Quality Management (TQM) culture. Shewhart control charts, a widely used SQC tool, rely on an underlying normal distribution of the data. Often data are skewed. The inverse Gaussian distribution is a probability distribution that is well-suited to handling skewed data. This analysis develops models and a set of tools usable by practitioners for the constrained economic statistical design of control charts for inverse Gaussian distribution process centrality and process dispersion. The use of this methodology is illustrated by the design of an x-bar chart and a V chart for an inverse Gaussian distributed process.
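As a small illustration of probability-based limits for skewed data, the sketch below sets control limits from inverse Gaussian quantiles using scipy's parameterization; the distribution parameters and false-alarm rate are assumptions, not the constrained economic statistical designs developed in the thesis.

```python
# Probability-based control limits for a skewed (inverse Gaussian) characteristic.
import numpy as np
from scipy.stats import invgauss

mu, scale, alpha = 0.5, 10.0, 0.0027        # invgauss shape/scale; ~3-sigma false-alarm rate
lcl, ucl = invgauss.ppf([alpha / 2, 1 - alpha / 2], mu, scale=scale)
center = invgauss.median(mu, scale=scale)
print(f"LCL={lcl:.2f}, CL={center:.2f}, UCL={ucl:.2f}")

rng = np.random.default_rng(5)
x = invgauss.rvs(mu, scale=scale, size=40, random_state=rng)
x[30:] *= 1.8                               # simulated upward process shift
out = np.where((x < lcl) | (x > ucl))[0]
print("points outside limits:", out)
```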
APA, Harvard, Vancouver, ISO, and other styles
10

Sandholm, Thomas. "Statistical Methods for Computational Markets : Proportional Share Market Prediction and Admission Control." Doctoral thesis, KTH, Data- och systemvetenskap, DSV, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-4738.

Full text
Abstract:
We design, implement and evaluate statistical methods for managing uncertainty when consuming and provisioning resources in a federated computational market. To enable efficient allocation of resources in this environment, providers need to know consumers' risk preferences, and the expected future demand. The guarantee levels to offer thus depend on techniques to forecast future usage and to accurately capture and model uncertainties. Our main contribution in this thesis is threefold; first, we evaluate a set of techniques to forecast demand in computational markets; second, we design a scalable method which captures a succinct summary of usage statistics and allows consumers to express risk preferences; and finally we propose a method for providers to set resource prices and determine guarantee levels to offer. The methods employed are based on fundamental concepts in probability theory, and are thus easy to implement, as well as to analyze and evaluate. The key component of our solution is a predictor that dynamically constructs approximations of the price probability density and quantile functions for arbitrary resources in a computational market. Because highly fluctuating and skewed demand is common in these markets, it is difficult to accurately and automatically construct representations of arbitrary demand distributions. We discovered that a technique based on the Chebyshev inequality and empirical prediction bounds, which estimates worst case bounds on deviations from the mean given a variance, provided the most reliable forecasts for a set of representative high performance and shared cluster workload traces. We further show how these forecasts can help the consumers determine how much to spend given a risk preference and how providers can offer admission control services with different guarantee levels given a recent history of resource prices.
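The Chebyshev-based bound at the heart of the predictor is simple to state: for any distribution, P(|X - mu| >= t*sigma) <= 1/t^2, so mu + sigma/sqrt(eps) bounds future values with probability at least 1 - eps. A minimal sketch follows, with a simulated skewed price trace standing in for the real workload traces used in the thesis.

```python
# Chebyshev worst-case prediction bound vs. the empirical quantile.
import numpy as np

rng = np.random.default_rng(11)
prices = rng.lognormal(mean=0.0, sigma=0.9, size=500)   # skewed, fluctuating demand

eps = 0.05                                   # tolerated exceedance probability
mu, sigma = prices.mean(), prices.std(ddof=1)
chebyshev_upper = mu + sigma / np.sqrt(eps)  # distribution-free upper bound
empirical_q = np.quantile(prices, 1 - eps)   # empirical counterpart for comparison
print(f"Chebyshev 95% upper bound: {chebyshev_upper:.2f}")
print(f"Empirical 95% quantile:    {empirical_q:.2f}")
```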
APA, Harvard, Vancouver, ISO, and other styles
11

Driesen, Kevin E. "Statistical process control as quantitative method to monitor and improve medical quality." Diss., The University of Arizona, 2004. http://hdl.handle.net/10150/280602.

Full text
Abstract:
Statistical Process Control (SPC) methods, developed in industrial settings, are increasingly being generalized to medical service environments. Of special interest is the control chart, a graphic and statistical procedure used to monitor and control variation. This dissertation evaluates the validity of the control chart model for improving medical quality. The research design combines descriptive and causal-comparative (ex post facto) methods to address the principal research question: how is the control chart model related to medical quality? Hospital data were used for patients diagnosed with Community Acquired Pneumonia (CAP). During the initial research phase, five medical quality "events" assumed to affect CAP medical quality indicators were pre-specified by hospital staff. The impact of each event was then evaluated using control charts constructed for CAP quality indicators. Descriptive analysis was undertaken to determine whether the data violated the statistical assumptions underlying the control chart model. Then, variable and attribute control charts were constructed to determine whether special cause signals occurred in association with the pre-specified events. Alternative methods were used to calibrate the charts to different conditions. Sensitivity was computed as the proportion of event-sensitive signals. The descriptive analysis of the CAP indicators uncovered a "messy" and somewhat complex data structure. The CAP indicators were marginally stable, showing trend, seasonal cycles, skew, sampling variation and autocorrelation. Study results need to be interpreted with the knowledge that few events were evaluated, and that the effect sizes associated with the events were small. The charts applied to the CAP indicators showed limited sensitivity; for three chart types (XmR, Xbar, and P charts), there were more false alarms than event-associated signals. Conforming to expectation, larger sample sizes increased chart sensitivity. The application of the Jaehn decision rules led to increases in both sensitivity and false alarms. Increasing subgroup frequency from monthly to weekly samples increased chart sensitivity, but also increased data instability and autocorrelation. Contrary to expectation, the application of hybrid charting techniques (EWMA and CUSUM) did not increase chart sensitivity. The study findings support the conclusion that control charts provide valuable insight into medical variation. However, design issues, data character, and causal logic condition the interpretation of control charts.
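As a concrete illustration of the chart type and the sensitivity/false-alarm trade-off examined here, below is a minimal P-chart sketch on simulated monthly indicator proportions with a late process shift; the baseline rate, subgroup sizes, and shift size are assumptions, not the study's CAP data.

```python
# P-chart on monthly proportions with sample-size-dependent limits.
import numpy as np

rng = np.random.default_rng(2)
n = rng.integers(40, 80, size=36)                 # monthly CAP case counts (assumed)
p0 = 0.80                                         # baseline indicator compliance
p_true = np.where(np.arange(36) < 24, p0, 0.65)   # quality drop in the final year
phat = rng.binomial(n, p_true) / n

sigma = np.sqrt(p0 * (1 - p0) / n)                # per-month limits vary with n
ucl, lcl = p0 + 3 * sigma, p0 - 3 * sigma
signals = np.where((phat > ucl) | (phat < lcl))[0]
print("months signalling:", signals)              # signals before month 24 are false alarms
```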
APA, Harvard, Vancouver, ISO, and other styles
12

Vining, G. Geoffrey. "Determining the most appropiate [sic] sampling interval for a Shewhart X-chart." Thesis, Virginia Polytechnic Institute and State University, 1986. http://hdl.handle.net/10919/94487.

Full text
Abstract:
A common problem encountered in practice is determining when it is appropriate to change the sampling interval for control charts. This thesis examines this problem for Shewhart X̅ charts. Duncan's economic model (1956) is used to develop a relationship between the most appropriate sampling interval and the present rate of "disturbances," where a disturbance is a shift to an out-of-control state. A procedure is proposed which switches the interval to convenient values whenever a shift in the rate of disturbances is detected. An example using simulation demonstrates the procedure.
M.S.
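A toy version of such a switching procedure might look like the sketch below: the mean time between disturbances is tracked with exponential smoothing, and the interval is switched to the nearest convenient value using an assumed h proportional to 1/sqrt(rate) scaling loosely motivated by Duncan-style cost models. All constants are illustrative assumptions, not the thesis's procedure.

```python
# Switch the sampling interval to a convenient value when the disturbance
# rate shifts, under an assumed economic scaling h ~ 1/sqrt(rate).
import numpy as np

rng = np.random.default_rng(9)
convenient = np.array([0.25, 0.5, 1.0, 2.0, 4.0, 8.0])   # hours between samples

# inter-disturbance times: rate 0.02/h at first, then it quadruples
gaps = np.concatenate([rng.exponential(50.0, 30), rng.exponential(12.5, 30)])

avg_gap, h = 50.0, 2.0
for i, gap in enumerate(gaps, 1):
    avg_gap = 0.9 * avg_gap + 0.1 * gap          # smoothed mean time between disturbances
    target_h = 0.3 * np.sqrt(avg_gap)            # assumed scaling: h ~ 1/sqrt(rate)
    new_h = convenient[np.argmin(np.abs(convenient - target_h))]
    if new_h != h:
        print(f"after disturbance {i}: mean gap≈{avg_gap:.1f} h, switch h {h} -> {new_h}")
        h = new_h
```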
APA, Harvard, Vancouver, ISO, and other styles
13

Minardi, Michael. "Comparing process capability: a Cpk ratio approach." Honors in the Major Thesis, University of Central Florida, 2001. http://digital.library.ucf.edu/cdm/ref/collection/ETH/id/288.

Full text
Abstract:
This item is only available in print in the UCF Libraries.
Bachelors
Arts and Sciences
Statistics
APA, Harvard, Vancouver, ISO, and other styles
14

Lucas, Tamara J. H. "Formulation and solution of hierarchical decision support problems." Thesis, Georgia Institute of Technology, 1995. http://hdl.handle.net/1853/17291.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Steele, Clint. "The prediction and management of the variability of manufacturing operations." Swinburne University of Technology, 2005. http://adt.lib.swin.edu.au./public/adt-VSWT20060815.151147.

Full text
Abstract:
Aim: To investigate methods that can be used to predict and manage the effects of manufacturing variability on product quality during the design process. Methodology: The preliminary investigation is a review and analysis of probabilistic methods and quality metrics. Based on this analysis, convenient robustification methods are developed. In addition, the nature of the flow of variability in a system is considered. This is then used to ascertain the information needed for an input variable when predicting the quality of a proposed design. The second, and major, part of the investigation is a case-by-case analysis of a collection of manufacturing operations and material properties. Each is initially analysed from first principles. On completion, the fundamental causes of variability of the key characteristic(s) are identified. Where possible, the expected variability for each of those characteristics has been determined. Where this determination was not possible, qualitative conclusions about the variability are made instead. In each case, findings on the prediction and management of manufacturing variability are made.
APA, Harvard, Vancouver, ISO, and other styles
16

Dai, Chengxin. "Exploring Data Quality of Weigh-In-Motion Systems." PDXScholar, 2013. https://pdxscholar.library.pdx.edu/open_access_etds/1018.

Full text
Abstract:
This research focuses on data quality control methods for evaluating the performance of Weigh-In-Motion (WIM) systems on Oregon highways. It identifies and develops a new methodology and algorithm to explore the accuracy of each station's weight and spacing data at a corridor level, and further implements the Statistical Process Control (SPC) method, a finite mixture model, an axle spacing error rating method, and a data flag method from published research to examine the soundness of WIM systems. The research employs historical WIM data to analyze sensor health and compares the evaluation results of the methods. The results suggest the new triangulation method identified most of the possible WIM malfunctions that the other methods detected, and this method monitors process behavior while controlling for time and meteorological variables. The SPC method appeared superior in differentiating between sensor noise and sensor errors or drift, but it drew wrong conclusions when an accurate WIM data reference was absent. The axle spacing error rating method cannot check the essential weight data in special cases, but reliable loop sensor evaluation results were obtained by employing this multiple linear regression model. The results of the data flag method and the finite mixture model were not accurate, so they could be used as additional tools to complement the data quality evaluation results. Overall, these data quality analysis results are valuable sources for early detection of system malfunctions, sensor drift, etc., and allow WIM operators to correct the situation in time before large amounts of measurements are lost.
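One of the listed techniques is easy to demonstrate: the sketch below fits a two-component Gaussian mixture to simulated steer-axle weights with scikit-learn, the kind of model whose component means can be tracked over time for sensor drift. The weight distribution and units are assumptions, not Oregon WIM records.

```python
# Finite mixture model as a WIM data-quality check: fit two Gaussian
# components to steer-axle weights and report their parameters.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(4)
# e.g., lighter single-unit trucks vs. loaded 5-axle semis (kips, assumed)
weights = np.concatenate([rng.normal(9.5, 0.8, 600),
                          rng.normal(11.5, 1.0, 400)]).reshape(-1, 1)

gmm = GaussianMixture(n_components=2, random_state=0).fit(weights)
for m, v, w in zip(gmm.means_.ravel(), gmm.covariances_.ravel(), gmm.weights_):
    print(f"component: mean={m:.2f}, sd={np.sqrt(v):.2f}, weight={w:.2f}")
```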
APA, Harvard, Vancouver, ISO, and other styles
17

Lee, Joongsup. "New control charts for monitoring univariate autocorrelated processes and high-dimensional profiles." Diss., Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/42711.

Full text
Abstract:
In this thesis, we first investigate the use of automated variance estimators in distribution-free statistical process control (SPC) charts for univariate autocorrelated processes. We introduce two variance estimators, the standardized time series overlapping area estimator and the so-called quick-and-dirty autoregressive estimator, that can be obtained from a training data set and used effectively with distribution-free SPC charts when those charts are applied to processes exhibiting nonnormal responses or correlation between successive responses. In particular, we incorporate the two estimators into DFTC-VE, a new distribution-free tabular CUSUM chart developed for autocorrelated processes; and we compare its performance with other state-of-the-art distribution-free SPC charts. Using either of the two variance estimators, the DFTC-VE outperforms its competitors in terms of both in-control and out-of-control average run lengths when all the competing procedures are tested on the same set of independently sampled realizations of selected autocorrelated processes with normal or nonnormal noise components. Next, we develop WDFTC, a wavelet-based distribution-free CUSUM chart for detecting shifts in the mean of a high-dimensional profile with noisy components that may exhibit nonnormality, variance heterogeneity, or correlation between profile components. A profile describes the relationship between a selected quality characteristic and an input (design) variable over the experimental region. Exploiting a discrete wavelet transform (DWT) of the mean in-control profile, WDFTC selects a reduced-dimension vector of the associated DWT components from which the mean in-control profile can be approximated with minimal weighted relative reconstruction error. Based on randomly sampled Phase I (in-control) profiles, the covariance matrix of the corresponding reduced-dimension DWT vectors is estimated using a matrix-regularization method; then the DWT vectors are aggregated (batched) so that the nonoverlapping batch means of the reduced-dimension DWT vectors have manageable covariances. To monitor shifts in the mean profile during Phase II operation, WDFTC computes a Hotelling's T-square-type statistic from successive nonoverlapping batch means and applies a CUSUM procedure to those statistics, where the associated control limits are evaluated analytically from the Phase I data. We compare WDFTC with other state-of-the-art profile-monitoring charts using both normal and nonnormal noise components having homogeneous or heterogeneous variances as well as independent or correlated components; and we show that WDFTC performs well, especially for local shifts of small to medium size, in terms of both in-control and out-of-control average run lengths.
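To make the first contribution concrete, here is a compressed sketch: a long-run variance estimate is computed from in-control training data of an AR(1) process and plugged into a tabular CUSUM. The simple AR(1) plug-in estimator below is a stand-in for the thesis's overlapping-area and quick-and-dirty autoregressive estimators, and all constants are illustrative assumptions.

```python
# Variance estimated from Phase I autocorrelated data, plugged into a
# one-sided tabular CUSUM for Phase II monitoring.
import numpy as np

rng = np.random.default_rng(8)
phi = 0.6
train = np.empty(2000)                    # Phase I (in-control) AR(1) data
train[0] = rng.normal()
for t in range(1, len(train)):
    train[t] = phi * train[t-1] + rng.normal()

mu0 = train.mean()
g = train - mu0
phi_hat = (g[1:] @ g[:-1]) / (g[:-1] @ g[:-1])       # lag-1 autocorrelation estimate
lrv = g.var(ddof=1) * (1 + phi_hat) / (1 - phi_hat)  # AR(1) long-run variance
sigma = np.sqrt(lrv)

k_ref, h_lim = 0.5 * sigma, 5.0 * sigma              # CUSUM reference value and limit
x, s_hi = train[-1], 0.0
for t in range(1, 400):
    # a sustained shift enters through the innovations after t = 200
    x = phi * x + rng.normal() + (0.75 if t > 200 else 0.0)
    s_hi = max(0.0, s_hi + (x - mu0) - k_ref)
    if s_hi > h_lim:
        print("CUSUM signal at observation", t)
        break
```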
APA, Harvard, Vancouver, ISO, and other styles
18

Savarese, Paul Tenzing. "New design comparison criteria in Taguchi's robust parameter design." Diss., This resource online, 1992. http://scholar.lib.vt.edu/theses/available/etd-06062008-171200/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Yerlikaya, Fatma. "A New Contribution To Nonlinear Robust Regression And Classification With Mars And Its Applications To Data Mining For Quality Control In Manufacturing." Master's thesis, METU, 2008. http://etd.lib.metu.edu.tr/upload/3/12610037/index.pdf.

Full text
Abstract:
Multivariate adaptive regression splines (MARS) denote a modern methodology from statistical learning which is very important in both classification and regression, with an increasing number of applications in many areas of science, economy and technology. MARS is very useful for high dimensional problems and shows great promise for fitting nonlinear multivariate functions. The MARS technique does not impose any particular class of relationship between the predictor variables and the outcome variable of interest. In other words, a special advantage of MARS lies in its ability to estimate the contributions of the basis functions so that both the additive and interaction effects of the predictors are allowed to determine the response variable. The function fitted by MARS is continuous, whereas the one fitted by classical classification methods (CART) is not. Herewith, MARS becomes an alternative to CART. The MARS algorithm for estimating the model function consists of two complementary algorithms: the forward and backward stepwise algorithms. In the first step, the model is built by adding basis functions until a maximum level of complexity is reached. The backward stepwise algorithm then begins by removing the least significant basis functions from the model. In this study, we propose not to use the backward stepwise algorithm. Instead, we construct a penalized residual sum of squares (PRSS) for MARS as a Tikhonov regularization problem, which is also known as ridge regression. We treat this problem using continuous optimization techniques, which we consider to be an important complementary technology and alternative to the concept of the backward stepwise algorithm. In particular, we apply the elegant framework of conic quadratic programming, an area of convex optimization that is very well structured, hereby resembling linear programming and, hence, permitting the use of interior point methods. The boundaries of this optimization problem are determined by a multiobjective optimization approach which provides us with many alternative solutions. Based on these theoretical and algorithmic studies, this MSc thesis also contains applications to data investigated in a TÜBİTAK project on quality control. In these applications, MARS and our new method are compared.
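The PRSS idea can be illustrated compactly: build MARS-style hinge basis functions in a forward pass, then stabilize the coefficients with a Tikhonov (ridge) penalty instead of backward pruning. In the sketch below the knot placement, penalty weight, and data are assumptions, and plain ridge algebra stands in for the conic quadratic programming used in the thesis.

```python
# MARS-style hinge basis expansion with a Tikhonov (ridge) penalty
# replacing the usual backward pruning step.
import numpy as np

rng = np.random.default_rng(6)
x = rng.uniform(-2, 2, 200)
y = 3 * np.maximum(0, x - 0.5) - np.maximum(0, -x) + rng.normal(0, 0.3, 200)

knots = np.quantile(x, np.linspace(0.1, 0.9, 9))
# design matrix: intercept plus paired hinges max(0, x-k) and max(0, k-x)
B = np.column_stack([np.ones_like(x)]
                    + [np.maximum(0, x - k) for k in knots]
                    + [np.maximum(0, k - x) for k in knots])

lam = 1.0                                        # Tikhonov regularization weight (assumed)
coef = np.linalg.solve(B.T @ B + lam * np.eye(B.shape[1]), B.T @ y)
resid = y - B @ coef
print(f"{B.shape[1]} basis functions, penalized RSS = {resid @ resid:.2f}")
```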
APA, Harvard, Vancouver, ISO, and other styles
20

Nam, Kyungdoo T. "A Heuristic Procedure for Specifying Parameters in Neural Network Models for Shewhart X-bar Control Chart Applications." Thesis, University of North Texas, 1993. https://digital.library.unt.edu/ark:/67531/metadc278815/.

Full text
Abstract:
This study develops a heuristic procedure for specifying parameters for a neural network configuration (learning rate, momentum, and the number of neurons in a single hidden layer) in Shewhart X-bar control chart applications. Also, this study examines the replicability of the neural network solution when the neural network is retrained several times with different initial weights.
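A sketch of the kind of configuration such a heuristic specifies is given below: a single-hidden-layer network, parameterized by learning rate, momentum, and hidden-neuron count, trained to flag mean shifts from windows of standardized subgroup means. The window length, parameter values, and training data are illustrative assumptions, not the heuristic's outputs.

```python
# Single-hidden-layer network for Shewhart X-bar shift detection,
# configured by learning rate, momentum, and hidden-neuron count.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

def windows(shift, n):
    # windows of 10 consecutive standardized subgroup means
    return rng.normal(shift, 1.0, size=(n, 10))

X = np.vstack([windows(0.0, 2000), windows(1.5, 2000)])
y = np.repeat([0, 1], 2000)                  # 0 = in control, 1 = shifted

clf = MLPClassifier(hidden_layer_sizes=(8,), solver="sgd",
                    learning_rate_init=0.1, momentum=0.9,
                    max_iter=500, random_state=0).fit(X, y)
print("training accuracy:", round(clf.score(X, y), 3))
```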
APA, Harvard, Vancouver, ISO, and other styles
21

Harrington, Robert P. "Forecasting corporate performance." Diss., Virginia Polytechnic Institute and State University, 1985. http://hdl.handle.net/10919/54515.

Full text
Abstract:
For the past twenty years, the usefulness of accounting information has been emphasized. In 1966 the American Accounting Association, in its A Statement of Basic Accounting Theory, asserted that usefulness is the primary purpose of external financial reports. In 1978 the Statement of Financial Accounting Concepts No. 1 affirmed the usefulness criterion: "Financial reporting should provide information that is useful to present and potential investors and creditors and other users..." Information is useful if it facilitates decision making. Moreover, all decisions are future-oriented; they are based on a prognosis of future events. The objective of this research, therefore, is to examine some factors that affect the decision maker's ability to use financial information to make good predictions and thereby good decisions. There are two major purposes of the study. The first is to gain insight into the amount of increase in prediction accuracy that is expected to be achieved when a model replaces the human decision maker in the selection of cues. The second major purpose is to examine the information overload phenomenon, to provide research evidence on the point at which additional information may contaminate prediction accuracy. The research methodology is based on the lens model developed by Egon Brunswik in 1952. Multiple linear regression equations are used to capture the participants' models, and correlation statistics are used to measure prediction accuracy.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
22

Arantes, Cássia da Silva Castro. "ANÁLISE ESTATÍSTICA DA QUALIDADE NA PRODUÇÃO DE FARELO E ÓLEO DEGOMADO DE SOJA, ESTUDO DE CASO EM EMPRESA DE MÉDIO PORTE EM RIO VERDE - GO." Pontifícia Universidade Católica de Goiás, 2016. http://localhost:8080/tede/handle/tede/2483.

Full text
Abstract:
This study deals with the application of statistical methods for analyzing the quality of meal and degummed soybean oil production. The case study object is the Guará Industry, which provided the necessary data. These data were analyzed using statistical methods such as ANOVA and the Tukey test, stability analysis using control charts, and process capability analysis. The analysis led to the conclusion that the warehouses actually influence the quality of the soybeans, the company's main feedstock. It was also found that many of the quality characteristics of the products show that the processes are neither stable nor capable. The study also identified the main quality problems the company has, as well as their causes. Finally, this work presents important information about the enterprise and suggests improvements to ensure effective gains in the quality of the final products and, consequently, better results for the organization, preventing and eliminating unnecessary quality costs.
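The analysis pipeline named above is standard and short to sketch: one-way ANOVA to test whether warehouse affects a soybean quality measure, followed by Tukey's HSD to locate the differing pairs. The protein values below are simulated stand-ins for the company's data.

```python
# One-way ANOVA plus Tukey HSD on a quality measure grouped by warehouse.
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(12)
a = rng.normal(36.0, 1.0, 30)      # protein % by warehouse (assumed values)
b = rng.normal(36.2, 1.0, 30)
c = rng.normal(34.8, 1.0, 30)

F, p = f_oneway(a, b, c)
print(f"ANOVA: F={F:.2f}, p={p:.4f}")

values = np.concatenate([a, b, c])
groups = np.repeat(["A", "B", "C"], 30)
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```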
APA, Harvard, Vancouver, ISO, and other styles
23

Turney, Celena. "An analysis of the California State Department of Parks and Recreation's "Quality Management Program"." CSUSB ScholarWorks, 1997. https://scholarworks.lib.csusb.edu/etd-project/1316.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Devaux, Marie-Françoise. "Interprétation de spectres de réflexion dans l'infrarouge proche et moyen de produits agroalimentaires par des méthodes d'analyse multidimensionnelle." Nantes, 1988. http://www.theses.fr/1988NANT2022.

Full text
Abstract:
This study demonstrates the potential of principal component analysis (PCA) and factorial discriminant analysis (FDA) for interpreting near- and mid-infrared reflectance spectra of food products. PCA builds a spectral model: it decomposes the spectra into a sum of characteristic spectral profiles. A new method, PCA under orthogonality constraints, is introduced; it is a PCA in which certain profiles are imposed. The properties of FDA applied to the variables obtained from PCA are demonstrated, allowing the computation of discriminant spectra. Using a collection of model semolinas, FDA is used to predict particle-size and moisture-content groups. The discriminant spectra show absorption bands attributable to the state of water (free or bound). Constrained PCA makes it possible to correct the corresponding spectral deformations by imposing the discriminant spectra as spectral profiles. A collection of near-infrared spectra of apples is studied by PCA. Their varieties and russeting are predicted by FDA. The discriminant spectra allow a biochemical interpretation.
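The chemometric workflow translates directly into a few lines: PCA compresses the spectra into scores, and discriminant analysis on the scores yields group predictions. The synthetic spectra and the two particle-size groups below are assumptions for illustration.

```python
# PCA scores of spectra followed by discriminant analysis on the scores.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(21)
wl = np.linspace(0, 1, 200)
base = np.exp(-((wl - 0.4) / 0.1) ** 2)          # a shared absorption band

def spectra(shift, n):
    # group-specific band position mimics a physical effect on the spectra
    band = np.exp(-((wl - 0.4 - shift) / 0.1) ** 2)
    return base + band + rng.normal(0, 0.02, (n, 200))

X = np.vstack([spectra(0.00, 40), spectra(0.05, 40)])
y = np.repeat([0, 1], 40)

scores = PCA(n_components=5).fit_transform(X)    # spectral model
lda = LinearDiscriminantAnalysis().fit(scores, y)
print("classification accuracy on training spectra:", lda.score(scores, y))
```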
APA, Harvard, Vancouver, ISO, and other styles
25

Abdel-Jaber, Hussein F. "Performance Modelling and Evaluation of Active Queue Management Techniques in Communication Networks. The development and performance evaluation of some new active queue management methods for internet congestion control based on fuzzy logic and random early detection using discrete-time queueing analysis and simulation." Thesis, University of Bradford, 2009. http://hdl.handle.net/10454/4261.

Full text
Abstract:
Since the field of computer networks has rapidly grown in the last two decades, congestion control of traffic loads within networks has become a high priority. Congestion occurs in network routers when the number of incoming packets exceeds the available network resources, such as buffer space and bandwidth allocation. This may result in a poor network performance with reference to average packet queueing delay, packet loss rate and throughput. To enhance the performance when the network becomes congested, several different active queue management (AQM) methods have been proposed and some of these are discussed in this thesis. Specifically, these AQM methods are surveyed in detail and their strengths and limitations are highlighted. A comparison is conducted between five known AQM methods, Random Early Detection (RED), Gentle Random Early Detection (GRED), Adaptive Random Early Detection (ARED), Dynamic Random Early Drop (DRED) and BLUE, based on several performance measures, including mean queue length, throughput, average queueing delay, overflow packet loss probability, packet dropping probability and the total of overflow loss and dropping probabilities for packets, with the aim of identifying which AQM method gives the most satisfactory results of the performance measures. This thesis presents a new AQM approach based on the RED algorithm that determines and controls the congested router buffers in an early stage. This approach is called Dynamic RED (REDD), which stabilises the average queue length between minimum and maximum threshold positions at a certain level called the target level to prevent building up the queues in the router buffers. A comparison is made between the proposed REDD, RED and ARED approaches regarding the above performance measures. Moreover, three methods based on RED and fuzzy logic are proposed to control the congested router buffers incipiently. These methods are named REDD1, REDD2, and REDD3 and their performances are also compared with RED using the above performance measures to identify which method achieves the most satisfactory results. Furthermore, a set of discrete-time queue analytical models are developed based on the following approaches: RED, GRED, DRED and BLUE, to detect the congestion at router buffers in an early stage. The proposed analytical models use the instantaneous queue length as a congestion measure to capture short term changes in the input and prevent packet loss due to overflow. The proposed analytical models are experimentally compared with their corresponding AQM simulations with reference to the above performance measures to identify which approach gives the most satisfactory results. The simulations for RED, GRED, ARED, DRED, BLUE, REDD, REDD1, REDD2 and REDD3 are run ten times, each time with a change of seed and the results of each run are used to obtain mean values, variance, standard deviation and 95% confidence intervals. The performance measures are calculated based on data collected only after the system has reached a steady state. After extensive experimentation, the results show that the proposed REDD, REDD1, REDD2 and REDD3 algorithms and some of the proposed analytical models such as DRED-Alpha, RED and GRED models offer somewhat better results of mean queue length and average queueing delay than these achieved by RED and its variants when the values of packet arrival probability are greater than the value of packet departure probability, i.e. in a congestion situation. 
This suggests that when traffic is largely of a non bursty nature, instantaneous queue length might be a better congestion measure to use rather than the average queue length as in the more traditional models.
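As background for the REDD and fuzzy variants, here is a minimal sketch of the classic RED mechanism they build on: an exponentially weighted average of the queue length is mapped to a drop probability that ramps linearly between two thresholds. Parameter values follow common RED defaults but are assumptions here, as is the toy arrival/service loop.

```python
# Classic RED: EWMA queue estimate mapped to a linear drop-probability ramp.
import random

W_Q, MIN_TH, MAX_TH, MAX_P = 0.002, 5.0, 15.0, 0.1

def red_drop(avg, queue_len):
    """Update the EWMA queue estimate and decide whether to drop the arrival."""
    avg = (1 - W_Q) * avg + W_Q * queue_len
    if avg < MIN_TH:
        return avg, False                        # accept: no congestion
    if avg >= MAX_TH:
        return avg, True                         # force drop: persistent congestion
    p = MAX_P * (avg - MIN_TH) / (MAX_TH - MIN_TH)
    return avg, random.random() < p

random.seed(1)
avg, q = 0.0, 0
for t in range(50000):
    if random.random() < 0.55:                   # arrivals slightly exceed capacity
        avg, dropped = red_drop(avg, q)
        if not dropped:
            q += 1
    if t % 2 == 0 and q > 0:                     # one departure every other tick
        q -= 1
print(f"EWMA queue length ≈ {avg:.1f} (kept between MIN_TH and MAX_TH)")
```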
APA, Harvard, Vancouver, ISO, and other styles
26

Castano, Antoine. "Méthode d'analyse des cotes de fabrication." Paris 6, 1988. http://www.theses.fr/1988PA066123.

Full text
Abstract:
The thesis presents a model that defines the distance between two surfaces by a random function. This model allows analysis of the behaviour of the part in its machining fixture. It makes it possible to exploit data from three-dimensional coordinate measuring machines as well as statistical inspection data. From this, a general method for determining manufacturing dimensions is derived, with applications to production engineering, computer-aided manufacturing, flexible manufacturing systems and industrial computing.
APA, Harvard, Vancouver, ISO, and other styles
27

Szarka, John Louis III. "Surveillance of Negative Binomial and Bernoulli Processes." Diss., Virginia Tech, 2011. http://hdl.handle.net/10919/26617.

Full text
Abstract:
Discrete processes are evaluated in both industrial and healthcare settings. Count data may be used to measure the number of defective items in industrial applications or the incidence of a certain disease at a health facility. Another classification of a discrete random variable is binary data, where an item can be classified as conforming or nonconforming in a manufacturing context, or a patient's status as having a disease in health-related applications. The first phase of this research uses discrete count data modeled from the Poisson and negative binomial distributions in a healthcare setting. Syndromic counts are currently monitored by the BioSense program within the Centers for Disease Control and Prevention (CDC) to provide real-time biosurveillance. The Early Aberration Reporting System (EARS) compares recent baseline information with a current day's syndromic count to determine whether outbreaks may be present. An adaptive threshold method is proposed based on fitting baseline data to a parametric distribution, then calculating an upper-tailed p-value. These statistics are then converted to an approximately standard normal random variable. Monitoring is examined for independent and identically distributed data as well as data following several seasonal patterns. An exponentially weighted moving average (EWMA) chart is also used with these methods. The effectiveness of these methods in detecting simulated outbreaks is evaluated in several sensitivity analyses. The second phase of research explored in this dissertation considers information that can be classified as a binary event. In industry, it is desirable for the probability of a nonconforming item, p, to be extremely small. Traditional Shewhart charts, such as the p-chart, are not reliable for monitoring this type of process. A comprehensive literature review of control chart procedures for this type of process is given. The equivalence between two cumulative sum (CUSUM) charts, based on geometric and Bernoulli random variables, is explored. An evaluation of the unit and group-runs (UGR) chart is performed, where it is shown that the in-control behavior of this chart is quite misleading and it should not be recommended to practitioners.
Ph. D.
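The adaptive threshold step described above is compact enough to sketch: fit a negative binomial to recent baseline counts, convert today's count to an upper-tailed p-value, and transform it to an approximately standard normal score suitable for an EWMA chart. The baseline window and counts are simulated assumptions.

```python
# Adaptive threshold: parametric baseline fit -> upper-tailed p-value -> z-score.
import numpy as np
from scipy.stats import nbinom, norm

rng = np.random.default_rng(14)
baseline = rng.negative_binomial(5, 0.5, size=56)   # 8 weeks of daily counts (assumed)
today = 14

# method-of-moments fit of the negative binomial to the baseline
m, v = baseline.mean(), baseline.var(ddof=1)
p_hat = m / v if v > m else 0.99                    # guard: NB needs overdispersion
r_hat = m * p_hat / (1 - p_hat)

pval = nbinom.sf(today - 1, r_hat, p_hat)           # P(count >= today's value)
z = norm.ppf(1 - pval)                              # approx. standard normal score
print(f"p-value={pval:.4f}, z={z:.2f}")             # large z -> possible outbreak
```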
APA, Harvard, Vancouver, ISO, and other styles
28

Booi, Arthur Mzwandile. "An empirical investigation of the extension of servqual to measure internal service quality in a motor vehicle manufacturing setting." Thesis, Rhodes University, 2004. http://hdl.handle.net/10962/d1006139.

Full text
Abstract:
This research explores the role that the construct service quality plays in an internal marketing setting. This is achieved by evaluating the perceptions and expectations of the production department with regard to the service quality provided by the maintenance department of a South African motor vehicle manufacturer. This was done using the INTSERVQUAL instrument, which was found to be a reliable instrument for measuring internal service quality within this context. A positivist approach has been adopted in conducting this research. There are two main hypotheses for this study: the first hypothesis concerns the relationship between overall internal service quality and the five dimensions of service quality, namely tangibles, empathy, reliability, responsiveness and assurance. The second hypothesis focuses on the relationship between the front-line staff segments of the production department and the five dimensions of internal service quality. The results of this research suggest that the perceptions and expectations of internal service customer segments play a major role in achieving internal service quality. In addition, the importance of the INTSERVQUAL instrument in measuring internal service quality within the motor vehicle manufacturing environment is confirmed.
APA, Harvard, Vancouver, ISO, and other styles
29

Lauer, Peccoud Marie-Reine. "Méthodes statistiques pour le contrôle de qualité en présence d'erreurs de mesure." Université Joseph Fourier (Grenoble), 1997. http://www.theses.fr/1997NICE5136.

Full text
Abstract:
When one seeks to control the quality of a set of parts from noisy measurements of a characteristic of those parts, decision errors harmful to quality can be made. It is therefore essential to control the risks incurred in order to ensure the final quality of the supply. We consider a part to be defective or not according to whether the corresponding value g of the characteristic is above or below a given value g0. We assume that, owing to the imperfection of the measuring instrument, the measurement m obtained for this value is of the form f(g) + ε, where f is an increasing function such that the value f(g0) is known, and ε is a centred random error of given variance. We first examine the problem of setting a rejection measurement threshold m so that, when sorting a batch of parts by accepting or rejecting each one according to whether its associated measurement is below or above m, a given quality objective for the sorted batch is met. We then consider the problem of testing the overall quality of a batch from measurements on a sample of parts drawn from the batch. For these two types of problems, with various quality objectives, we propose solutions, focusing on the case where the function f is affine and the error ε and the variable g are Gaussian. Simulation results allow assessment of the performance of the control procedures defined and of their robustness to departures from the assumptions used in the theoretical developments.
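The sorting problem translates naturally into a short Monte Carlo sketch: with true value G, measurement M = G + ε, and a part defective when G > g0, scan acceptance thresholds and estimate the outgoing defective rate among accepted parts. All distribution parameters and units are illustrative assumptions, and simulation replaces the thesis's analytical Gaussian calculations.

```python
# Monte Carlo scan of acceptance thresholds under measurement error.
import numpy as np

rng = np.random.default_rng(17)
mu_g, sigma_g, sigma_e, g0 = 0.0, 1.0, 0.4, 1.5   # assumed parameters

G = rng.normal(mu_g, sigma_g, 1_000_000)          # true characteristic values
M = G + rng.normal(0.0, sigma_e, G.size)          # noisy measurements

for m in np.arange(0.6, 1.8, 0.2):
    accepted = M < m
    defect_rate = (G[accepted] > g0).mean()       # outgoing quality after sorting
    print(f"m={m:.1f}: accepted {accepted.mean():5.1%}, "
          f"defective among accepted {defect_rate:.4%}")
```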
APA, Harvard, Vancouver, ISO, and other styles
30

Arbogast, Patrick G. "Statistical methods for case-control studies /." Thesis, Connect to this title online; UW restricted, 2000. http://hdl.handle.net/1773/9598.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Marchant Fuentes, Carolina Ivonne. "Essays on multivariate generalized Birnbaum-Saunders methods." Universidade Federal de Pernambuco, 2016. https://repositorio.ufpe.br/handle/123456789/18647.

Full text
Abstract:
In the last decades, univariate Birnbaum-Saunders models have received considerable attention in the literature. These models have been widely studied and applied to fatigue, but they have also been applied to other areas of knowledge. In such areas, it is often necessary to model several variables simultaneously. If these variables are correlated, individual analyses for each variable can lead to erroneous results. Multivariate regression models are a useful tool of multivariate analysis that takes into account the correlation between variables. In addition, diagnostic analysis is an important aspect to be considered in statistical modeling. Furthermore, multivariate quality control charts are powerful and simple visual tools for determining whether a multivariate process is in control or out of control. A multivariate control chart shows how several variables jointly affect a process. First, we propose, derive and characterize multivariate generalized logarithmic Birnbaum-Saunders distributions. We also propose new multivariate generalized Birnbaum-Saunders regression models. We use the method of maximum likelihood estimation to estimate their parameters through the expectation-maximization algorithm. We carry out a simulation study based on the Monte Carlo method to evaluate the performance of the corresponding estimators. We validate the proposed models with a regression analysis of real-world multivariate fatigue data. Second, we conduct a diagnostic analysis for multivariate generalized Birnbaum-Saunders regression models. We consider the Mahalanobis distance as a global influence measure to detect multivariate outliers and use it to evaluate the adequacy of the distributional assumption. Moreover, we consider the local influence method and study how a perturbation may impact the estimation of model parameters. We implement the obtained results in the R software and illustrate them with real-world multivariate biomaterials data. Third and finally, we develop a robust methodology based on multivariate quality control charts for generalized Birnbaum-Saunders distributions with the Hotelling statistic. We use the parametric bootstrap method to obtain the distribution of this statistic. A Monte Carlo simulation study is conducted to evaluate the proposed methodology, which shows its ability to provide early alerts of out-of-control conditions. An illustration with real-world air quality data from Santiago, Chile, is provided; it shows that the proposed methodology can be useful for alerting to episodes of extreme air pollution.
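The chart-construction step in the third part can be sketched briefly: Hotelling T² statistics are computed against Phase I estimates, with the control limit taken from a parametric bootstrap rather than a chi-square or F approximation. Multivariate normal data below stand in for the generalized Birnbaum-Saunders distributions of the thesis; parameters and the shift are assumptions.

```python
# Hotelling T^2 chart with a parametric-bootstrap control limit.
import numpy as np

rng = np.random.default_rng(19)
mu = np.array([10.0, 5.0])
cov = np.array([[1.0, 0.6], [0.6, 1.0]])
phase1 = rng.multivariate_normal(mu, cov, size=200)

m_hat = phase1.mean(axis=0)
S_hat = np.cov(phase1, rowvar=False)
S_inv = np.linalg.inv(S_hat)
t2 = lambda x: (x - m_hat) @ S_inv @ (x - m_hat)

# parametric bootstrap of the in-control T^2 distribution
boot = rng.multivariate_normal(m_hat, S_hat, size=20000)
ucl = np.quantile([t2(x) for x in boot], 0.9973)      # ~3-sigma false-alarm rate

new = rng.multivariate_normal(mu + np.array([1.2, 0.0]), cov, size=30)  # shifted process
signals = [i for i, x in enumerate(new) if t2(x) > ucl]
print("UCL =", round(ucl, 2), "| signalling points:", signals)
```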
APA, Harvard, Vancouver, ISO, and other styles
32

Fang, Lei. "Wireless sensor network control through statistical methods." Thesis, University of St Andrews, 2015. http://hdl.handle.net/10023/7032.

Full text
Abstract:
Wireless Sensor Networks (WSNs) form a new paradigm of computing that allows the physical world to be measured at an unprecedented resolution, and the importance of the technology has been increasingly recognised. However, WSNs still face critical challenges, including low data quality and high energy consumption. In this thesis, formal statistical models are employed to address these two practical problems. With a properly designed formalism, sound statistical inferences can be made to guide sensor nodes to make reasonable and timely decisions at the local level in the face of uncertainty. To improve data reliability, we introduce formal Bayesian statistical methods to form two online in-network fault detectors. The two detection techniques integrate well with existing data collection protocols. Experimental results demonstrate that the techniques have good detection accuracy with limited computational and communication overhead. To improve energy efficiency, we propose a novel data collection framework that features both energy conservation and data fault filtering by exploiting Hidden Markov Models (HMMs). Another data collection framework, a Dynamic Linear Model (DLM) based solution featuring both adaptive sampling and efficient data collection, is also proposed. Experimental results show that the two solutions effectively suppress unnecessary packet transmission while satisfying users' precision requirements. To establish feasibility, we show that all the proposed solutions are lightweight, by either real-world implementation or formal complexity analysis.
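A minimal sketch of the model-based reporting idea behind such frameworks, assuming a single node running a local-level Kalman filter (the noise variances and precision bound are invented; this is not the thesis's HMM or DLM implementation): the node transmits a reading only when the filter's prediction misses it by more than the user's precision bound, so a sink running the same filter can predict the rest.

    import numpy as np

    rng = np.random.default_rng(0)

    q, r = 0.01, 0.25   # process and measurement noise variances (assumed)
    tolerance = 1.0     # user precision requirement (assumed)

    m, p = 20.0, 1.0    # filter state: mean and variance of the signal
    transmitted = 0

    # Simulated sensor stream: a slow drift plus measurement noise.
    readings = 20.0 + np.cumsum(rng.normal(0, 0.1, 200)) + rng.normal(0, 0.5, 200)

    for y in readings:
        p += q                       # predict one step ahead
        if abs(y - m) > tolerance:   # prediction misses: report the reading
            transmitted += 1
            k = p / (p + r)          # Kalman gain
            m += k * (y - m)         # correct the state with the reading
            p *= 1 - k
        # otherwise stay silent; the sink predicts m and stays within tolerance

    print(f"transmitted {transmitted} of {len(readings)} readings")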
APA, Harvard, Vancouver, ISO, and other styles
33

Binny, Diana. "Radiotherapy quality assurance using statistical process control." Thesis, Queensland University of Technology, 2019. https://eprints.qut.edu.au/130738/1/Diana_Binny_Thesis.pdf.

Full text
Abstract:
The work presented in this thesis was a step forward in applying statistics to the important problem of monitoring machine performance and quantifying optimal treatment quality assurance in radiotherapy. This research investigated the use of an analytical decision-making tool known as Statistical Process Control (SPC), which employs statistical means to measure, monitor and identify random and systematic errors in a process based on its observed behaviour. Several treatment machine and planning system parameters were investigated, and a method of calculating SPC-based tolerances to achieve optimal treatment goals was highlighted.
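For a flavour of how tolerances can be derived from observed machine behaviour, a minimal sketch of an individuals control chart with moving-range limits (the daily output-deviation measurements are invented for illustration, and this is generic SPC rather than the specific tolerance calculation of the thesis):

    import numpy as np

    # Hypothetical daily machine QA measurements (% deviation from baseline).
    x = np.array([0.2, -0.1, 0.3, 0.0, 0.1, -0.2, 0.4, 0.1, -0.1, 0.2,
                  0.0, 0.3, -0.3, 0.1, 0.2, 0.0, -0.1, 0.1, 0.3, 0.0])

    center = x.mean()
    mr_bar = np.abs(np.diff(x)).mean()   # average moving range
    sigma_hat = mr_bar / 1.128           # d2 constant for subgroups of size 2

    ucl = center + 3 * sigma_hat         # SPC-based action tolerances
    lcl = center - 3 * sigma_hat
    print(f"centre={center:.3f}  limits=({lcl:.3f}, {ucl:.3f})")
    print("out-of-control points:", np.where((x > ucl) | (x < lcl))[0])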
APA, Harvard, Vancouver, ISO, and other styles
34

Cassady, Charles Richard. "Statistical quality control techniques using multilevel discrete product quality measures." Diss., This resource online, 1996. http://scholar.lib.vt.edu/theses/available/etd-06062008-151120/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Arif, Osama Hasan. "Statistical process control by quantile approach." Thesis, Sheffield Hallam University, 2000. http://shura.shu.ac.uk/19285/.

Full text
Abstract:
Most quality control and quality improvement procedures involve making assumptions about the distributional form of the data they use, usually that the data are normally distributed. It is commonplace to find processes that generate non-normally distributed data; Weibull, logistic or mixture data, for example, are increasingly encountered. Any method that seeks to avoid transforming non-normal data requires techniques for identifying the appropriate distributions, and even where the appropriate distributions are known, such methods are often intractable to implement. This research is concerned with statistical process control (SPC), which can be applied to both variable and attribute data. The objective of SPC is to control a process, ideally with respect to a particular product specification. One of the principal measurement tools of SPC is the control chart, and this research is mainly concerned with control charts for monitoring processes and quality improvement. A control chart is a useful process monitoring technique when a source of variability is present: it provides a signal that the process must be investigated. In general, Shewhart control charts assume that the data follow a normal distribution, and most SPC techniques have accordingly been derived and constructed around a concept of quality that depends on normality. In reality, data sets such as chemical process data and lifetime data are often not normal, so a control chart for x̄ or R constructed under the normality assumption will give inaccurate results when the data are in fact non-normal. Schilling and Nelson (1976) investigated the effect of non-normality on x̄ charts under the central limit theorem and concluded that non-normality is usually not a problem for subgroup sizes of four or more; however, for smaller subgroup sizes, and especially for individual measurements, non-normality can be a serious problem. The literature indicates that there are real difficulties in applying statistical process control to non-normal and mixture distributions. This thesis provides a quantile approach for dealing with non-normal distributions in order to construct a median rankit control chart. The quantile approach is also used to calculate the process capability index, the average run length (ARL), multivariate control charts and control charts for mixture distributions in non-normal situations. This methodology can be easily adopted by practitioners of statistical process control.
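A minimal sketch of the quantile idea: read the control limits off the empirical quantiles of skewed in-control data instead of using mean ± 3σ. The Weibull data and tail probabilities below are illustrative assumptions, not the median rankit construction developed in the thesis.

    import numpy as np

    rng = np.random.default_rng(1)

    # In-control reference data from a skewed (Weibull) process.
    reference = rng.weibull(1.5, size=500) * 10.0

    # Quantile limits matching the 0.00135/0.99865 tail probabilities
    # of conventional 3-sigma normal limits, on the empirical distribution.
    lcl, ucl = np.quantile(reference, [0.00135, 0.99865])
    median = np.quantile(reference, 0.5)   # centre line at the median

    print(f"median={median:.2f}  limits=({lcl:.2f}, {ucl:.2f})")

    # Normal-theory limits on the same data sit in the wrong place,
    # even allowing a negative lower limit for positive-valued data:
    print(f"mean±3sd=({reference.mean() - 3 * reference.std():.2f}, "
          f"{reference.mean() + 3 * reference.std():.2f})")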
APA, Harvard, Vancouver, ISO, and other styles
36

Bujatzeck, Baldur. "Statistical evaluation of water quality measurements." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape11/PQDD_0017/MQ44134.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Huang, Biao. "Multivariate statistical methods for control loop performance assessment." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/nq21580.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Thoutou, Sayi Mbani. "Quality control charts under random fuzzy measurements." Master's thesis, University of Cape Town, 2007. http://hdl.handle.net/11427/19140.

Full text
Abstract:
Includes bibliographical references.
We consider statistical process control charts as tools that statistical process control utilizes for monitoring changes, identifying process variations and their causes in industrial (manufacturing) processes, and helping manufacturers to take appropriate action, rectify problems or improve manufacturing processes so as to produce good-quality products. As an essential tool, process control charts have always received attention from researchers, and the sample sizes required for establishing control charts are often under discussion, depending on the field of study. Of late, the problems of fuzziness and randomness, often brought into modern manufacturing processes by shortening product life cycles and diversification (in product design, raw material supply, etc.), have compelled researchers to invoke quality control methodologies in their search for high customer satisfaction and better market shares (Guo et al 2006). We herein focus our attention on small sample sizes and on the development of quality control charts in terms of the economic design of quality control charts, based on credibility measure theory under random fuzzy measurements and small sample asymptotic distribution theory. Economic process data from the study of Duncan (1956) are used as an illustrative example of these new developments.
APA, Harvard, Vancouver, ISO, and other styles
39

Parsons, Nicholas Rene. "Statistical methods for improving pot-plant quality and robustness." Thesis, Queen Mary, University of London, 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.412757.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Khalidi, Mohammad Said Asem. "Multivariate quality control: statistical performance and economic feasibility." Diss., A link to full text of this thesis in SOAR, 2007. http://soar.wichita.edu/dspace/handle/10057/1077.

Full text
Abstract:
Thesis (Ph.D.)--Wichita State University, College of Engineering, Dept. of Industrial and Manufacturing Engineering.
"May 2007." Title from PDF title page (viewed on October 25, 2007). Thesis adviser: Gamal S. Weheba. Includes bibliographic references (leaves 96-101).
APA, Harvard, Vancouver, ISO, and other styles
41

Yang, Hualong (阳华龙). "Statistical process control charts with known and estimated parameters." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2013. http://hub.hku.hk/bib/B50900018.

Full text
Abstract:
Monitoring and detection of abrupt changes in multivariate processes are becoming increasingly important in modern manufacturing environments, where typical equipment may have multiple key variables to be measured continuously. Hotelling's T² and CUSUM charts are widely applied to the problem of monitoring the mean vector of multivariate quality measurements. In addition, a new multivariate cumulative sum (MCUSUM) chart is introduced in which the target mean shift is assumed to be a weighted sum of the principal directions of the population covariance matrix. In practical problems, parameters must be estimated, and the properties of control charts then differ from the case where the parameters are known in advance. In particular, it has been observed that the average run length (ARL), a performance indicator of control charts, is larger when estimated parameters are used. As a first contribution, we provide a general and formal proof of this phenomenon. Also, to design an efficient T² or CUSUM chart with estimated parameters, a method to calculate or approximate the ARL function is needed. A commonly used approach consists of tabulating reference values using extensive Monte Carlo simulation. This thesis takes a different approach and provides an analytical approximation of the ARL function in the univariate case, especially the in-control ARL function, which can help to set up control limits directly for different sample sizes in the Phase I procedure instead of conducting complex simulations.
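The ARL inflation described here is easy to reproduce by simulation. A minimal sketch for a univariate Shewhart chart follows; the Phase I sample size, replication count and truncation horizon are arbitrary choices, and the truncation biases both averages downward slightly.

    import numpy as np

    rng = np.random.default_rng(7)
    k, reps, horizon = 3.0, 2000, 10000

    def run_length(mu, sigma):
        # First time an in-control N(0,1) stream leaves mu +/- k*sigma.
        x = rng.normal(0, 1, horizon)
        hits = np.where(np.abs(x - mu) > k * sigma)[0]
        return hits[0] + 1 if hits.size else horizon

    # Known parameters: limits are exactly 0 +/- 3.
    arl_known = np.mean([run_length(0.0, 1.0) for _ in range(reps)])

    # Estimated parameters: limits come from a Phase I sample of size 30.
    def estimated_run_length():
        phase1 = rng.normal(0, 1, 30)
        return run_length(phase1.mean(), phase1.std(ddof=1))

    arl_est = np.mean([estimated_run_length() for _ in range(reps)])
    print(f"ARL known ~ {arl_known:.0f}, ARL estimated ~ {arl_est:.0f}")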
published_or_final_version
Statistics and Actuarial Science
Master
Master of Philosophy
APA, Harvard, Vancouver, ISO, and other styles
42

Strong, Mark J. (Mark Joseph). "Statistical methods for process control in automobile body assembly." Thesis, Massachusetts Institute of Technology, 1996. http://hdl.handle.net/1721.1/10922.

Full text
Abstract:
Thesis (M.S.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering, 1996, and Thesis (M.S.)--Massachusetts Institute of Technology, Sloan School of Management, 1996.
Includes bibliographical references (p. 117-120).
by Mark J. Strong.
M.S.
APA, Harvard, Vancouver, ISO, and other styles
43

Kenerson, Jonathan E. "Quality Assurance and Quality Control Methods for Resin Infusion." Fogler Library, University of Maine, 2010. http://www.library.umaine.edu/theses/pdf/KenersonJE2010.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Lanhede, Daniel. "Non-parametric Statistical Process Control : Evaluation and Implementation of Methods for Statistical Process Control at GE Healthcare, Umeå." Thesis, Umeå universitet, Institutionen för matematik och matematisk statistik, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-104512.

Full text
Abstract:
Statistical process control (SPC) is a toolbox for detecting changes in the output distribution of a process. It can serve as a valuable resource for maintaining high quality in a manufacturing process. This report is based on work evaluating and implementing methods for SPC in the chromatography instrument manufacturing process at GE Healthcare, Umeå. To handle low-volume and non-normally distributed process output data, non-parametric methods are considered. Eight control charts, three for Phase I analysis and five for Phase II analysis, are evaluated in this study. The usability of the charts is assessed based on ease of interpretation and performance in detecting distributional changes; the latter is evaluated with simulations. The result of the project is the implementation of the RS/P chart, suggested by Capizzi et al (2013), for Phase I analysis. Of the considered Phase I methods (and simulation scenarios), the RS/P chart has the highest overall probability of detecting a variety of distributional changes. Further, the RS/P chart is easily interpreted, facilitating the analysis. For Phase II analysis, two control charts have been implemented: one based on the Mann-Whitney U statistic, suggested by Chakraborti et al (2008), and one on the Mood test statistic for dispersion, suggested by Ghute et al (2014). These are chosen mainly for their ease of interpretation. To reduce the detection time for changes in the process distribution, the change-point chart based on the Cramer von Mises statistic, suggested by Ross et al (2012), could be used instead. Using single observations instead of larger samples, this chart is updated more frequently; however, this effectively increases the false alarm rate, and the chart is also considered much more difficult for the SPC practitioner to interpret.
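A minimal sketch of a Mann-Whitney-type Phase II comparison in the spirit of the chart described above: each incoming subgroup is tested against the Phase I reference sample. The gamma data and the plain p-value cut-off are assumptions for illustration, not the chart constants of Chakraborti et al (2008).

    import numpy as np
    from scipy.stats import mannwhitneyu

    rng = np.random.default_rng(3)

    reference = rng.gamma(4.0, 2.0, size=100)   # Phase I sample, non-normal

    def signal(subgroup, alpha=0.005):
        # Two-sided Mann-Whitney test of the subgroup against the reference.
        stat, p = mannwhitneyu(subgroup, reference, alternative="two-sided")
        return p < alpha

    in_control = rng.gamma(4.0, 2.0, size=10)
    shifted = rng.gamma(4.0, 4.0, size=10)      # scale shift in the process

    print(signal(in_control), signal(shifted))  # typically False, True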
APA, Harvard, Vancouver, ISO, and other styles
45

He, Baosheng. "New Bayesian methods for quality control applications." Diss., University of Iowa, 2018. https://ir.uiowa.edu/etd/6133.

Full text
Abstract:
In quality control applications, the most basic tasks are monitoring and fault diagnosis. Monitoring results determine whether diagnosis is required, and conversely, diagnostic results aid better monitoring design. Quality monitoring and fault diagnosis are closely related but also differ significantly: monitoring focuses on online changepoint detection, whilst the primary objective of diagnosis is to identify fault root causes as an offline method. Several critical problems arise in quality control research: firstly, whether process monitoring is able to distinguish systematic or assignable faults from occasional deviations; secondly, how to diagnose faults with coupled root causes in complex manufacturing systems; and thirdly, whether the changepoint and the root causes of faults can be diagnosed simultaneously. In Chapter 2, we propose a novel Bayesian statistical process control method for count data in the presence of outliers. That is, we discuss how to discern out-of-control status from temporary abnormal process behaviour in practice, which current SPC methodologies cannot do. In this work, process states are modeled as latent variables and inferred by the sequential Monte Carlo method. The idea of Rao-Blackwellization is employed in the approach to control detection error and computational cost. Another contribution of this work is that our method possesses self-starting characteristics, which makes it a more robust SPC tool for discrete data. A sensitivity analysis of the monitoring parameter settings is also implemented to provide practical guidelines. In Chapter 3, we study the diagnosis of dimensional faults in manufacturing. A novel Bayesian variable selection oriented diagnostic framework is proposed. Dimensional fault sources are not explicitly measurable; instead, they are connected with dimensional measurements by a generalized linear mixed effect model, based on which we further construct a hierarchical quality-fault model to conduct Bayesian inference. A reversible jump Markov chain Monte Carlo algorithm is developed to estimate the approximate posterior probability of fault patterns. This diagnostic procedure improves on previous studies in that no numeric regularization is required for decision making. The proposed Bayesian diagnosis can further lean towards sparse fault patterns by choosing suitable priors, in order to handle the challenge posed by the diagnosability of faults. Our work considers diagnosability in building dimensional diagnostic methodologies, and we explain why the diagnostic result is trustworthy for most manufacturing systems in practice. A convergence analysis is also implemented, given the trans-dimensional nature of the diagnostic method. In Chapter 4 of the thesis, we consider the diagnosis of multivariate linear profile models, assuming linear profiles that are piecewise constant. We propose an integrated Bayesian diagnostic method to answer two questions: firstly, whether and when the process has shifted, and secondly, in which pattern the shift occurs. The method can be applied to both Phase I and Phase II needs. For Phase I diagnosis, the method is implemented with no knowledge of in-control profiles, whereas in Phase II diagnosis, the method only requires partial observations. To identify exactly which profile components deviate from their nominal values, the variability of the profile components is marginalized out through a fully Bayesian approach. To address the computational difficulty, we implement a Monte Carlo method that alternates between inspecting the space of changepoint positions and that of fault patterns. The diagnostic method can be applied under multiple scenarios.
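The flavour of the count-data changepoint inference can be conveyed with a toy exact posterior over a single changepoint in a Poisson stream, with conjugate Gamma priors integrated out analytically; this reproduces none of the sequential Monte Carlo or Rao-Blackwellization machinery of the chapter, and the data and prior are invented.

    import numpy as np
    from scipy.special import gammaln

    rng = np.random.default_rng(5)
    y = np.concatenate([rng.poisson(2.0, 40), rng.poisson(5.0, 20)])  # shift at t=40

    a, b = 1.0, 1.0   # Gamma(a, b) prior on the Poisson rate (assumed)

    def seg_loglik(seg):
        # Log marginal likelihood of a Poisson segment, rate integrated out.
        n, s = len(seg), seg.sum()
        return (a * np.log(b) - gammaln(a) + gammaln(a + s)
                - (a + s) * np.log(b + n) - gammaln(seg + 1).sum())

    # Posterior over the changepoint position, uniform prior over 1..n-1.
    n = len(y)
    logpost = np.array([seg_loglik(y[:t]) + seg_loglik(y[t:]) for t in range(1, n)])
    post = np.exp(logpost - logpost.max())
    post /= post.sum()
    print("most probable changepoint:", np.argmax(post) + 1)  # expect about 40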
APA, Harvard, Vancouver, ISO, and other styles
46

Kang, Bei. "STATISTICAL CONTROL USING NEURAL NETWORK METHODS WITH HIERARCHICAL HYBRID SYSTEMS." Diss., Temple University Libraries, 2011. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/122303.

Full text
Abstract:
Electrical Engineering
Ph.D.
The goal of an optimal control algorithm is to improve the performance of a system. For a stochastic system, a typical optimal control method minimizes the mean (first cumulant) of the cost function. However, other statistical properties of the cost function, such as the variance (second cumulant) and skewness (third cumulant), also affect system performance. In this dissertation, work on statistical optimal control is presented, which extends the traditional optimal control method by using cost cumulants to shape system performance. Statistical optimal control allows more design freedom to achieve better performance. The solutions of statistical control involve solving partial differential equations known as Hamilton-Jacobi-Bellman equations. A numerical method based on neural networks is employed to find the solutions of the Hamilton-Jacobi-Bellman partial differential equation. Furthermore, a complex problem such as multiple-satellite control has both continuous and discrete dynamics. Thus, a hierarchical hybrid architecture is developed in this dissertation, where a discrete event system is applied to the discrete dynamics and statistical control is applied to the continuous dynamics. The application of a multiple-satellite navigation system is then analyzed using the hierarchical hybrid architecture. Through this dissertation, it is shown that statistical control theory is a flexible optimal control method which improves performance, and that the hierarchical hybrid architecture allows control and navigation of a complex system containing both continuous and discrete dynamics.
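The cost-cumulant idea can be illustrated with a toy Monte Carlo comparison: a scalar stochastic system under two hypothetical feedback gains, where the first two cumulants (mean and variance) of a quadratic cost are estimated by simulation rather than by solving any Hamilton-Jacobi-Bellman equation.

    import numpy as np

    rng = np.random.default_rng(11)

    def cost_samples(gain, reps=2000, horizon=50):
        # Quadratic cost of x_{t+1} = x_t + u_t + w_t under u_t = -gain * x_t.
        costs = np.empty(reps)
        for i in range(reps):
            x, c = 1.0, 0.0
            for _ in range(horizon):
                u = -gain * x
                c += x * x + 0.1 * u * u
                x = x + u + rng.normal(0, 0.3)
            costs[i] = c
        return costs

    for gain in (0.5, 0.9):
        c = cost_samples(gain)
        # First two cost cumulants: the mean and the variance.
        print(f"gain={gain}: mean={c.mean():.1f}  variance={c.var():.1f}")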
Temple University--Theses
APA, Harvard, Vancouver, ISO, and other styles
47

Cheung, Ngai-pang, and 張毅鵬. "Statistical analysis of marine water quality data in Hong Kong." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2001. http://hub.hku.hk/bib/B31254846.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Vidal, Puig Santiago. "FAULT DIAGNOSIS TOOLS IN MULTIVARIATE STATISTICAL PROCESS AND QUALITY CONTROL." Doctoral thesis, Universitat Politècnica de València, 2016. http://hdl.handle.net/10251/61292.

Full text
Abstract:
Accurate diagnosis of both sensor faults and real process faults has become more and more important for process monitoring (to minimize downtime, increase the safety of plant operation and reduce manufacturing cost). Quick and correct fault diagnosis is required in order to put processes or products back on track before safety or quality is compromised. In the study and comparison of fault diagnosis methodologies, this thesis distinguishes between two different scenarios: methods for multivariate statistical quality control (MSQC) and methods for latent-based multivariate statistical process control (Lb-MSPC). In the first part of the thesis, the state of the art in fault diagnosis and identification (FDI) is introduced. The second part of the thesis is devoted to fault diagnosis in multivariate statistical quality control (MSQC). The rationale of the most widespread methods for fault diagnosis in supervised scenarios, the requirements for their implementation, their strong points, their drawbacks and their relationships are discussed. The performance of the methods is compared using different performance indices on two process data sets and on simulations. New variants and methods to improve diagnosis performance in MSQC are also proposed. The third part of the thesis is devoted to fault diagnosis in latent-based multivariate statistical process control (Lb-MSPC). The rationale of the most widespread methods for fault diagnosis in supervised Lb-MSPC is described, and one of our proposals, the fingerprints contribution plot (FCP), is introduced. Finally, the thesis presents and compares the performance of these diagnosis methods in Lb-MSPC. The diagnosis results on two process data sets are compared using a new strategy based on the use of overall sensitivity and specificity.
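As a rough illustration of the contribution-plot family of diagnostics compared in this thesis, a sketch of classical per-variable squared-prediction-error (SPE) contributions from a PCA model; this is the standard textbook technique, not the FCP variant proposed here, and the data and fault are simulated.

    import numpy as np

    rng = np.random.default_rng(9)

    # In-control training data: 5 variables driven by 2 latent factors.
    latent = rng.normal(size=(300, 2))
    loadings = rng.normal(size=(2, 5))
    X = latent @ loadings + 0.1 * rng.normal(size=(300, 5))

    mu, sd = X.mean(axis=0), X.std(axis=0)
    Xs = (X - mu) / sd
    U, s, Vt = np.linalg.svd(Xs, full_matrices=False)
    P = Vt[:2].T   # PCA loadings for a 2-component model (5 x 2)

    def spe_contributions(x):
        xs = (x - mu) / sd
        residual = xs - P @ (P.T @ xs)   # part unexplained by the model
        return residual ** 2             # per-variable SPE contributions

    fault = X[0].copy()
    fault[3] += 4 * sd[3]                        # inject a fault in variable 3
    print(np.argmax(spe_contributions(fault)))   # expect 3: the culprit variable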
Vidal Puig, S. (2016). FAULT DIAGNOSIS TOOLS IN MULTIVARIATE STATISTICAL PROCESS AND QUALITY CONTROL [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/61292
THESIS
APA, Harvard, Vancouver, ISO, and other styles
49

Weiss, Christian H. "Categorical time series analysis and applications in statistical quality control." Berlin dissertation.de, 2009. http://d-nb.info/995083134/04.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Nadeem, Mohammed. "A statistical quality control support system to facilitate acceptance sampling and control chart procedures." Ohio : Ohio University, 1994. http://www.ohiolink.edu/etd/view.cgi?ohiou1178137590.

Full text
APA, Harvard, Vancouver, ISO, and other styles