Dissertations / Theses on the topic 'Statistical and analytical support'


Consult the top 50 dissertations / theses for your research on the topic 'Statistical and analytical support.'


1

Kamleu, Germaine. "An analytical model for assessing the knowledge of statistical procedures amongst postgraduate students in a higher educational environment." University of the Western Cape, 2019. http://hdl.handle.net/11394/6769.

Abstract:
Over the past decades, applying statistical concepts learned in previous courses has been a major challenge for university students. In the aftermath of democracy, South African higher education focused on redressing issues of reparation and social imbalances inherited from apartheid, with a commitment to reconstructing a comprehensive educational quality framework. These efforts have given rise to new models intended to support students and universities in demonstrating evidence of engaged statistics learning with an acceptable degree of accuracy. This study combined quantitative and qualitative research approaches to assess the knowledge of postgraduate students in applying suitable statistical procedures in higher education (HE). Quantitative data were randomly collected from postgraduate students (n1=307), while qualitative data were collected through semi-structured interviews (n2=19) at two institutions (the University of Cape Town [UCT] and the University of the Western Cape [UWC]) in the Western Cape, South Africa. The SPSS V24 statistical package was used for the quantitative analysis, and an explorative design was selected as the framework guiding the investigation, analysis and interpretation of the qualitative findings. The UCT model achieved a high prediction accuracy of 73% across all combined categories. The UWC model revealed similar results, with ask for help, worth of statistics, fear of statistics monitors, affect, cognitive competence, support from significant others, marital status, ethnic group and type of study as significant predictors, with a high prediction accuracy of 75.49%. Additionally, ethnic group, marital status, postgraduate programme, experience in statistics and effort were significant contributing factors to SELS beliefs, while the combined UCT and UWC data significantly explained the variation observed in SELS beliefs with a model accuracy of only 60%. The qualitative outcomes indicated that participants' comments provided a rich understanding of their perceived failure to choose a relevant statistical test, and that confusion and frustration characterised students' attitudes during the selection of a suitable statistical test. The original value of this study lies in bridging the inequity gap in statistics learning and in contributing substantially to the objectives of UNESCO, the World Education Forum and White Paper 3, while ultimately supporting the sustainable development of statistics learning at universities in the Western Cape, South Africa. By logical extrapolation, the study offers significant insights for other universities in Africa and beyond.
2

Patel, Zubair. "A decision support system for sugarcane irrigation supply and demand management." Master's thesis, University of Cape Town, 2017. http://hdl.handle.net/11427/24455.

Abstract:
Commercial sugarcane farming requires large quantities of water to be delivered to the fields. Ideal irrigation schedules indicate how much water should be supplied to each field, considering multiple objectives in the farming process. Existing software packages do not fully account for the fact that the ideal irrigation schedule may not be met because of limitations in the water distribution network. This dissertation proposes the use of mathematical modelling to better understand water supply and demand management on a commercial sugarcane farm. Because of the complex nature of water stress on sugarcane, non-linearities occur in the model. A piecewise linear approximation is used to handle the non-linearity in the water allocation model, which is then solved in a commercial optimisation software package. A test data set is first used to exercise and evaluate the model's performance; a commercial-sized data set is then analysed to illustrate the model's practical applicability.
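As a hedged illustration of the piecewise-linear idea only (not the author's model), the sketch below allocates a limited water supply across two fields by sampling a concave water-yield response at breakpoints and optimising over convex combinations of those breakpoints; all curves and numbers are invented.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical data: 2 fields, concave yield response sampled at 4 water levels.
w_pts = np.array([0.0, 10.0, 20.0, 30.0])          # water applied (mm)
yields = {0: np.array([0.0, 40.0, 65.0, 75.0]),    # field 0 yield (t/ha), concave
          1: np.array([0.0, 30.0, 55.0, 70.0])}    # field 1
W_total = 35.0                                     # available water (mm)

n_f, n_k = 2, len(w_pts)
c = -np.concatenate([yields[f] for f in range(n_f)])   # maximise total yield

# Convexity rows: sum_k lambda_{f,k} = 1 for each field.
A_eq = np.zeros((n_f, n_f * n_k))
for f in range(n_f):
    A_eq[f, f * n_k:(f + 1) * n_k] = 1.0
b_eq = np.ones(n_f)

# Water budget: sum over fields and breakpoints of lambda * w <= W_total.
A_ub = np.tile(w_pts, n_f).reshape(1, -1)
b_ub = np.array([W_total])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
lam = res.x.reshape(n_f, n_k)
print("water per field:", lam @ w_pts, "total yield:", -res.fun)
```

Because each yield curve is concave and the objective is maximised, the optimal weights automatically fall on adjacent breakpoints, so this simple case needs no integer (SOS2) variables.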
3

Uys, J. W. "A framework for exploiting electronic documentation in support of innovation processes." Thesis, Stellenbosch : University of Stellenbosch, 2010. http://hdl.handle.net/10019.1/1449.

Abstract:
The crucial role of innovation in creating sustainable competitive advantage is widely recognised in industry today. Likewise, the importance of having the required information accessible to the right employees at the right time is well appreciated. More specifically, the dependency of effective, efficient innovation processes on the availability of information has been pointed out in the literature. A great challenge is countering the effects of the information overload phenomenon in organisations, so that employees can find the information appropriate to their needs without having to wade through excessively large quantities of information. The initial stages of the innovation process, which are characterised by free association, semi-formal activities, conceptualisation and experimentation, have been identified as a key focus area for improving the effectiveness of the entire innovation process; the dependency on information during these early stages is especially high. Any organisation requires a strategy for innovation, together with a number of well-defined, implemented processes and measures, to be able to innovate effectively and efficiently and to drive its innovation endeavours. In addition, the organisation requires certain enablers to support its innovation efforts, including core competencies, technologies and knowledge. Most importantly for this research, enablers are required to manage and utilise innovation-related information more effectively. Information residing inside and outside the boundaries of the organisation is required to feed the innovation process, and the specific sources of such information are numerous. Such information may be structured or unstructured in nature; however, an ever-increasing proportion of available innovation-related information is unstructured, for example the textual content of reports, books, e-mail messages and web pages. This research explores the innovation landscape and typical sources of innovation-related information, as well as the landscape of text-analytical approaches and techniques, in search of ways to deal more effectively and efficiently with unstructured, textual information. A framework is presented that can provide a unified, dynamic view of an organisation's innovation-related information, both structured and unstructured. Once implemented, this framework will constitute an innovation-focused knowledge base that organises such information and makes it accessible to the stakeholders of the innovation process. Two novel, complementary text-analytical techniques, Latent Dirichlet Allocation and the Concept-Topic Model, were identified for application with the framework, and their potential value as part of the information systems that would embody the framework is illustrated. The resulting knowledge base would cause a quantum leap in the accessibility of information and may significantly improve the way innovation is done and managed in the target organisation.
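Of the two techniques named, Latent Dirichlet Allocation is widely available off the shelf. A minimal, hedged sketch of topic extraction from unstructured documents (toy corpus, scikit-learn assumed; not the thesis's implementation) might look like this:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [  # toy stand-ins for reports, e-mails and web pages
    "prototype sensor design for irrigation control",
    "customer feedback on sensor reliability and cost",
    "patent search results for control electronics",
    "meeting notes on product cost and market fit",
]
counts = CountVectorizer(stop_words="english").fit(docs)
X = counts.transform(docs)

lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
terms = counts.get_feature_names_out()
for k, comp in enumerate(lda.components_):
    top = comp.argsort()[-4:][::-1]        # four highest-weight terms per topic
    print(f"topic {k}:", ", ".join(terms[i] for i in top))
```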
4

Hamilton-Wright, Andrew. "Transparent Decision Support Using Statistical Evidence." Thesis, University of Waterloo, 2005. http://hdl.handle.net/10012/778.

Abstract:
An automatically trained, statistically based, fuzzy inference system that functions as a classifier is produced. The hybrid system is designed specifically to be used as a decision support system. This hybrid system has several features which are of direct and immediate utility in the field of decision support, including a mechanism for the discovery of domain knowledge in the form of explanatory rules through the examination of training data; the evaluation of such rules using a simple probabilistic weighting mechanism; the incorporation of input uncertainty using the vagueness abstraction of fuzzy systems; and the provision of a strong confidence measure to predict the probability of system failure.

Analysis of the hybrid fuzzy system and its constituent parts allows commentary on the weighting scheme and performance of the "Pattern Discovery" system on which it is based. Comparisons against other well known classifiers provide a benchmark of the performance of the hybrid system as well as insight into the relative strengths and weaknesses of the compared systems when functioning within continuous and mixed data domains. Classifier reliability and confidence in each labelling are examined, using a selection of both synthetic data sets and some standard real-world examples.

An implementation of the work-flow of the system when used in a decision support context is presented, and the means by which the user interacts with the system is evaluated. The final system performs, when measured as a classifier, comparably well or better than other classifiers. This provides a robust basis for making suggestions in the context of decision support. The adaptation of the underlying statistical reasoning, by casting it into a fuzzy inference context, provides a level of transparency which is difficult to match in decision support. The resulting linguistic support and decision exploration abilities make the system useful in a variety of decision support contexts. Included in the analysis are case studies of heart and thyroid disease data, both drawn from the University of California, Irvine Machine Learning repository.
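The thesis's "Pattern Discovery" weighting is not reproduced here, but as a rough, hypothetical analogue of probabilistically weighted rules over discretised inputs, the sketch below scores each feature bin with a smoothed log-odds weight of evidence and classifies by summing the evidence; the score magnitude then serves as a crude confidence measure.

```python
import numpy as np

def fit_evidence_weights(X_bins, y, n_bins, alpha=1.0):
    """Log-odds weight of evidence per (feature, bin), Laplace-smoothed.

    X_bins: (n_samples, n_features) integer bin codes in [0, n_bins).
    """
    n1, n0 = (y == 1).sum(), (y == 0).sum()
    weights = np.zeros((X_bins.shape[1], n_bins))
    for j in range(X_bins.shape[1]):
        for b in range(n_bins):
            c1 = (X_bins[y == 1, j] == b).sum() + alpha
            c0 = (X_bins[y == 0, j] == b).sum() + alpha
            weights[j, b] = np.log((c1 / (n1 + alpha * n_bins)) /
                                   (c0 / (n0 + alpha * n_bins)))
    return weights

def predict(X_bins, weights, prior_log_odds=0.0):
    # Sum per-feature evidence; positive total log-odds means class 1.
    score = prior_log_odds + sum(weights[j, X_bins[:, j]]
                                 for j in range(X_bins.shape[1]))
    return (score > 0).astype(int), score   # label and a confidence-like margin
```

This is essentially a smoothed naive-Bayes scheme, chosen only because it makes the "rule weight as statistical evidence" idea concrete in a few lines.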
5

Wu, Jinran. "Statistical support vector machines with optimizations." Thesis, Queensland University of Technology, 2022. https://eprints.qut.edu.au/234509/1/Jinran_Wu_Thesis.pdf.

Abstract:
This thesis combines support vector machines with statistical models for analyzing data generated by complex processes. Its key contribution is five regression frameworks aimed at hyperparameter estimation, support vector selection, modelling data with unequal variances, capturing temporal patterns, and cost-benefit analysis. A new optimizer for high-dimensional optimization is also proposed.
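The thesis's own estimation frameworks are not reproduced here; as a generic, hedged baseline for hyperparameter estimation in support vector regression, a standard cross-validated grid search (synthetic data, scikit-learn assumed) might look like this:

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X).ravel() + rng.normal(scale=0.2, size=200)   # noisy signal

grid = GridSearchCV(
    SVR(kernel="rbf"),
    param_grid={"C": [0.1, 1, 10, 100],
                "gamma": [0.01, 0.1, 1],
                "epsilon": [0.01, 0.1, 0.5]},
    cv=5, scoring="neg_mean_squared_error",
)
grid.fit(X, y)
print(grid.best_params_, -grid.best_score_)
```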
6

Ovsyuk, Nina Vasylivna, and Lyudmyla Oleksandrivna Galushko. "Accounting and analytical support of enterprise management." Thesis, National Aviation University, 2021. https://er.nau.edu.ua/handle/NAU/53925.

Abstract:
The thesis substantiates theoretical provisions for the accounting and analytical support of enterprise management and identifies the basic principles for constructing an accounting and analytical information system.
7

FREITAS, SONIA MARIA DE. "STATISTICAL METHODOLOGY FOR ANALYTICAL METHODS VALIDATION APPLICABLE CHEMISTRY METROLOGY." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2003. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=4058@1.

Abstract:
The use of statistical methodology for analytical method validation is vital to assure that measurements have the quality level required by the goal to be attained. This thesis describes a statistical modelling approach that combines four different aspects of validation: checking the linearity of the calibration curve and computing the detection and quantification limits; controlling the specificity of the analytical method; estimating the accuracy (trueness and precision) of an alternative method by comparison with a reference method; and estimating the uncertainty components inherent in all of these aspects. The general approach is a synthesis of several partial techniques found in the literature, choosing the most appropriate technique in each case. Besides a proposal for validating methods for use in chemical analysis, the approach yields the inverse calibration function and expanded uncertainties, which allow analytical results to be obtained from response values with their associated uncertainties. For determination of the response function, statistical techniques are used to assess the fit of the regression model and to determine the detection and quantification limits. Method specificity is evaluated by adding standards to a set of representative samples and recovering them, fitting a straight line between added and recovered concentrations via least-squares regression and testing hypotheses on the slope and intercept. To compare a method B with a reference method A, the precision and accuracy of method B are estimated using a four-factor nested design, with the variance estimates computed from the experimental data by ANOVA. The Satterthwaite approximation is used to determine the number of degrees of freedom associated with the variance components. The application of the methodology is thoroughly illustrated with step-by-step examples.
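As a hedged sketch of two of these validation steps (not the thesis's exact procedure): fit a calibration line, derive detection and quantification limits from the residual standard deviation and slope (the common 3.3σ/S and 10σ/S conventions), and test the recovery slope and intercept against 1 and 0. All data below are invented.

```python
import numpy as np
from scipy import stats

# Hypothetical calibration data: concentration vs. instrument response.
conc = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 8.0])
resp = np.array([0.02, 0.26, 0.51, 1.01, 2.03, 4.05])

cal = stats.linregress(conc, resp)
resid = resp - (cal.intercept + cal.slope * conc)
s_res = np.sqrt(np.sum(resid**2) / (len(conc) - 2))   # residual std deviation
lod = 3.3 * s_res / cal.slope                         # detection limit
loq = 10.0 * s_res / cal.slope                        # quantification limit

# Specificity/recovery: added vs. recovered concentration should give
# slope ~ 1 and intercept ~ 0; t-tests use the regression standard errors.
added = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
recovered = np.array([0.98, 2.05, 2.96, 4.08, 4.97])
rec = stats.linregress(added, recovered)
df = len(added) - 2
p_slope = 2 * stats.t.sf(abs((rec.slope - 1.0) / rec.stderr), df)
p_icept = 2 * stats.t.sf(abs(rec.intercept / rec.intercept_stderr), df)
print(f"LOD={lod:.3f}, LOQ={loq:.3f}, p(slope=1)={p_slope:.2f}, p(int=0)={p_icept:.2f}")
```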
8

Yap, Fook Fah. "Statistical Energy Analysis of structural vibration : analytical and computational investigations." Thesis, University of Cambridge, 1993. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.308199.

9

Zhao, Hongyan. "A visualization tool to support Online Analytical Processing." [Gainesville, Fla.] : University of Florida, 2002. http://purl.fcla.edu/fcla/etd/UFE0000622.

10

Carlson, Jennifer (Jennifer Beth) 1970. "Analytical and statistical approaches toward understanding sedimentation in siliciclastic depositional systems." Thesis, Massachusetts Institute of Technology, 1998. http://hdl.handle.net/1721.1/9435.

Abstract:
Recent studies of turbidite bed thickness distributions have demonstrated power-law and other statistical distributions. Chapter 2 explores the different distributions which may express fan processes and may be used as a tool to classify environments. Chapter 3 illustrates a correlation between cumulative distributions of well-known turbidite deposits and interpreted fan subenvironments. A power-law distribution may be, for some systems, the primary input signal, and a one-dimensional model allows qualitative-quantitative characterization of the effects of different fan processes. Environments dominated by different fan processes may be characterized based on the degree to which those processes have acted as a "filter", systematically modifying the assumed power-law distribution. This model is used to help account for bed thickness distributions observed in several field sites. Turbidite sections are often characterized in terms of alternating packages of thinning- and thickening-upward intervals, which are interpreted to be representative of different subenvironments, including channel and levee environments. In Chapter 4, stratigraphic sections from several field sites are analyzed for dominance of asymmetrical bedding packages using runs analysis. Results indicate (1) a correlation between the number of beds in the dataset and the significance level of the results, which may relate to the prevalence of progradation and lateral migration of deposits; (2) runs tests should be applied with caution to datasets containing significant levels of erosion and amalgamation; and (3) runs tests may be used to identify the presence of interlayered lithologies and perhaps flow types. In Chapter 5, the log-log cumulative distribution model and the runs test techniques are applied to turbidites of the Permian Skoorsteenberg Formation in the Tanqua-Karoo Basin, South Africa. Exceptional exposure permitted a number of lateral correlation studies over a range of scales. The cumulative distribution model is supported by the turbidite distributions, broadly subdivided into fan and interfan environments; runs analysis reveals that interfan environments are generally more ordered than fan environments. Chapter 6 illustrates the development of a dual-component heterogeneous sediment transport model: a grain-scale model is integrated with a large-scale basin model to simulate transport and deposition of heterogeneous-sized sediment in a fluvial system.
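As a hedged illustration of the runs-analysis idea (not the thesis's exact procedure), one can test whether thinning/thickening signs in a bed-thickness sequence cluster more than chance, using a Wald-Wolfowitz-style runs test with a normal approximation; the data here are synthetic.

```python
import numpy as np
from scipy import stats

def runs_test(thickness):
    """Runs test on signs of successive bed-thickness differences.
    Fewer runs than expected (negative z) suggests asymmetric packaging."""
    signs = np.sign(np.diff(thickness))
    signs = signs[signs != 0]                    # drop ties
    n_pos = int((signs > 0).sum())
    n_neg = int((signs < 0).sum())
    runs = 1 + int((signs[1:] != signs[:-1]).sum())
    n = n_pos + n_neg
    mu = 2.0 * n_pos * n_neg / n + 1.0           # expected number of runs
    var = (mu - 1.0) * (mu - 2.0) / (n - 1.0)
    z = (runs - mu) / np.sqrt(var)
    return runs, z, 2 * stats.norm.sf(abs(z))    # two-sided p-value

rng = np.random.default_rng(1)
beds = rng.lognormal(mean=0.0, sigma=1.0, size=200)   # synthetic bed thicknesses
print(runs_test(beds))
```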
11

Li, Jiale. "ANALYTICAL FATIGUE DAMAGE CALCULATION FOR WIND TURBINE SUPPORT STRUCTURE." Case Western Reserve University School of Graduate Studies / OhioLINK, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=case1364832753.

12

SOUSA, TAISSA ABDALLA FILGUEIRAS DE. "RECOMMENDER SYSTEM TO SUPPORT CHART CONSTRUCTIONS WITH STATISTICAL DATA." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2013. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=22030@1.

Abstract:
Research on statistical data visualization emphasizes the need for systems that assist in decision-making and visual analysis. Having found problems in chart construction by novice users, we investigated the following question: how can we support novice users in creating efficient visualizations of statistical data? We created ViSC, a recommender system that supports the interactive construction of charts for visualizing statistical data by offering a series of recommendations based on the selected data and on the user's interaction with the tool. The system explores a visualization ontology to offer a set of graphs that help answer information-based questions related to the current graph data. By traversing the recommended graphs through their related questions, the user implicitly acquires knowledge both of the domain and of the visualization resources that best represent the domain concepts of interest. This dissertation presents the problems that motivated the research, describes the ViSC tool and presents the results of a qualitative study conducted to evaluate it. Two methods were used in the evaluation: the Semiotic Inspection Method (SIM) and the Retrospective Communicability Evaluation (RCE), a combination of the Communicability Evaluation Method (CEM) and the Retrospective Think-Aloud Protocol. We first analyze how the recommendations influence task completion and the visualizations generated, and then address the broader question.
13

Chan, Herman King Yeung. "Machine learning and statistical approaches to support gait analysis." Thesis, Ulster University, 2014. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.646039.

14

Stewart, John. "Physical and analytical aspects of projection operators in non equilibrium statistical mechanics." Thesis, University of Strathclyde, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.248591.

15

Bell, Madison. "Developing Statistical and Analytical Methods for Untargeted Analysis of Complex Environmental Matrices." Thesis, Université d'Ottawa / University of Ottawa, 2021. http://hdl.handle.net/10393/41626.

Abstract:
The main objective of this thesis was to develop statistical and analytical methods for untargeted analyses of complex environmental matrices like soil and sediment. Untargeted analyses are notoriously difficult to perform in such matrices because of the complexity of their organic matter composition. This thesis aimed to (1) develop and compare extraction methods for untargeted analyses of soil and sediment while also developing data handling and quality control protocols; (2) investigate novel applications of untargeted analyses for environmental classification and monitoring; and (3) investigate the experimental factors that can influence the organic matter composition of untargeted extractions. CHAPTER TWO is a literature review of metabolomics protocols, which were incorporated into a proposed workflow for performing untargeted analysis in oil, soil, and sediment. This thesis contains the first application of untargeted analysis to freshwater lake sediment organic matter (i.e. sedimentomics) in CHAPTER THREE, which has implications for discovering new biomarkers for paleolimnology (APPENDIX ONE). I demonstrated successful extraction methods for both sedimentomics and soil metabolomics studies in CHAPTER THREE and CHAPTER FIVE, respectively, using the proposed workflow from CHAPTER TWO. I also applied sedimentomics to the classification of lake sediments using machine learning and geostatistics based on sediment organic matter compositions in CHAPTER FOUR; this was a novel application of sedimentomics that could have implications for ecosystem classification and advance our knowledge of organic matter cycling in lake sediments. Lastly, in CHAPTER FIVE I determined that microbial activity, extraction method, and soil type can all influence the composition of soil organic matter extracts in soil metabolomics experiments. I also developed novel quality controls and quantitative methods that can help control these influences in CHAPTER FIVE and APPENDIX THREE. APPENDIX TWO was written in collaboration with multiple researchers and is a review of all "omics" types of analyses that can be performed on soil or sediment, and of how methods like the untargeted analysis of soil and sediment organic matter can be linked with metagenomics, metatranscriptomics, and metaproteomics for a comprehensive metaphenomics analysis of soil and sediment ecosystems. CHAPTER SIX details the conclusions and implications of each chapter and of the thesis overall, and describes future directions for the field. The overall conclusions of this thesis were: (1) quality controls are necessary for sedimentomics and soil metabolomics studies; (2) sedimentomics is a valid technique to highlight changes in sediment organic matter; (3) soil metabolomics and sedimentomics yield more information about carbon cycling than traditional measurements; and (4) soil metabolomics organic matter extractions are more variable and require more quality controls.
16

Hryhorevska, O. O. "Problem issues of accounting and analytical support of organic production." Thesis, BoScience Publisher, 2020. https://er.knutd.edu.ua/handle/123456789/17414.

Abstract:
The advantages of organic production for the sustainable development of the country are substantiated. It is shown that the management system of any business entity directly depends on a properly constructed, organized and functioning information system based on the information generated in the accounting system. Costs are considered as one of the objects of accounting that should be investigated from the standpoint of the accounting and analytical support of organic production.
17

CORREIA, WEULES FERNANDES. "INCLUSION OF STATISTICAL METHODS AS BILLING SUPPORT OF SMART METERS." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2018. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=35304@1.

Abstract:
Society is living in a time of strong technological convergence, where new technologies are discovered and become obsolete in ever shorter time frames. This technological revolution has also arrived for the energy distribution infrastructure in the form of the Smart Grid, with the smart meter as the main equipment of this revolution. Despite the evolution of the Brazilian meter park, commercial regulation has not kept pace with this modernization: it still takes the conventional metering system as its reference and does not consider the opportunity to use consumption data from outside the billing date when data transmission fails, where statistical tools could be used for billing estimation. In this context, this dissertation evaluates the regulatory billing rules for absent readings and proposes a new methodology that defines how to bill in the absence of readings, considering previous consumption and using statistical tools to define the value to be billed. The methodology can be divided into two phases: (i) imputation of missing data in the database resulting from possible meter transmission errors; (ii) forecasting of electricity consumption per customer. The work fulfilled its objectives and presents a promising alternative for billing with smart meters that use low-cost communication technologies whose measurement effectiveness may fall below the ideal of 100 percent.
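A minimal, hypothetical sketch of the two phases (not the dissertation's models): interpolate gaps in an hourly consumption series, then estimate the billable monthly consumption from the repaired series. The series and the failure window are invented.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
idx = pd.date_range("2024-01-01", periods=24 * 30, freq="h")
kwh = pd.Series(1.0 + 0.5 * np.sin(np.arange(len(idx)) * 2 * np.pi / 24)
                + rng.normal(scale=0.1, size=len(idx)), index=idx)
kwh.iloc[100:130] = np.nan            # simulate a transmission failure

# Phase (i): impute the gap; time-weighted interpolation is one simple choice.
imputed = kwh.interpolate(method="time")

# Phase (ii): estimate the month's billable consumption; here the imputed
# hourly series is simply summed, a stand-in for a proper forecasting model.
print(f"estimated monthly consumption: {imputed.sum():.1f} kWh")
```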
18

Cure, Vellojin Laila Nadime. "Analytical Methods to Support Risk Identification and Analysis in Healthcare Systems." Scholar Commons, 2011. http://scholarcommons.usf.edu/etd/3054.

Abstract:
Healthcare systems require continuous monitoring of risk to prevent adverse events. Risk analysis is a time-consuming activity that depends on the background of the analysts and on the available data, and patient safety data are often incomplete and biased. This research proposes systematic approaches to monitoring risk in healthcare using available patient safety data. The methodologies combine traditional healthcare risk analysis methods with safety theory concepts, in an innovative manner, to allocate available evidence to potential risk sources throughout the system. We propose the use of data mining to analyze near-miss reports and guide the identification of risk sources, together with a maximum-entropy-based approach to monitor risk sources and prioritize investigation efforts accordingly. The products of this research are intended to facilitate risk analysis and allow timely identification of risks to prevent harm to patients.
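As a hedged toy example of a maximum-entropy allocation (the thesis's actual formulation is not reproduced here), one can pick the least-committal probability distribution over risk sources that is consistent with an observed mean severity; all numbers are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

severity = np.array([1.0, 2.0, 3.0, 4.0])   # hypothetical severity per risk source
target_mean = 2.4                           # mean severity implied by near-miss reports

def neg_entropy(p):
    p = np.clip(p, 1e-12, 1.0)
    return float(np.sum(p * np.log(p)))

constraints = [
    {"type": "eq", "fun": lambda p: p.sum() - 1.0},               # sums to 1
    {"type": "eq", "fun": lambda p: p @ severity - target_mean},  # matches the mean
]
res = minimize(neg_entropy, x0=np.full(4, 0.25),
               bounds=[(0.0, 1.0)] * 4, constraints=constraints, method="SLSQP")
print(res.x)   # monitoring priorities: an exponential tilt toward severe sources
```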
19

Lee, Chul Kyu. "A computerized analytical decision support system for evaluating airline scheduling interactions /." The Ohio State University, 2000. http://rave.ohiolink.edu/etdc/view?acc_num=osu1488191667180958.

20

Rakotonirainy, Rosephine Georgina. "Decision support for the production and distribution of electricity under load shedding." Master's thesis, University of Cape Town, 2016. http://hdl.handle.net/11427/20942.

Abstract:
Every day, national power system networks deliver thousands of MW of electric power from generating units to consumers. This process requires various operations and planning to ensure the security of the entire system. Part of daily or weekly system operation is the so-called unit commitment problem, which consists of scheduling the available resources in order to meet the system demand. The continuous growth in electricity demand, however, can put pressure on the ability of the generation system to provide sufficient supply. In such cases, load shedding (a controlled, enforced reduction in electricity supply) is necessary to prevent the risk of system collapse. In South Africa at the present time, a systematic lack of supply has meant that regular load shedding has taken place, with substantial economic and social costs. In this research project we study two optimization problems related to load shedding: first, how load shedding can be integrated into the unit commitment problem; second, how load shedding can be allocated fairly and efficiently across areas. We develop deterministic and stochastic linear and goal programming models for these purposes, and conduct several case studies to explore the solutions the proposed models can offer.
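As a hedged, stripped-down illustration (far simpler than the dissertation's models): a single-period dispatch LP that meets demand in two areas from two units, with load shedding as a penalised slack variable. All costs and limits are invented.

```python
import numpy as np
from scipy.optimize import linprog

# Variables: [g1, g2, shed_A, shed_B]
gen_cost = [20.0, 45.0]          # generation cost per MWh for units 1 and 2
voll = 2000.0                    # value of lost load: penalty for shedding
demand = {"A": 600.0, "B": 400.0}
gen_max = [700.0, 200.0]         # unit capacities (MW)

c = np.array(gen_cost + [voll, voll])
# Energy balance: g1 + g2 + shed_A + shed_B = total demand.
A_eq = np.array([[1.0, 1.0, 1.0, 1.0]])
b_eq = np.array([sum(demand.values())])
bounds = [(0, gen_max[0]), (0, gen_max[1]),
          (0, demand["A"]), (0, demand["B"])]

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
g1, g2, shed_a, shed_b = res.x
print(f"dispatch: {g1:.0f} + {g2:.0f} MW, shed A={shed_a:.0f}, B={shed_b:.0f} MW")
```

A fairness requirement (for example, equal shed fractions across areas) could then be expressed with goal-programming deviation variables, in the spirit of the models the dissertation develops.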
21

Man, Wai-chung. "Analytical and numerical study on agent behaviour in various market structures through the minority game." Click to view the E-thesis via HKUTO, 2005. http://sunzi.lib.hku.hk/hkuto/record/B34618648.

22

Man, Wai-chung, and 文慧中. "Analytical and numerical study on agent behaviour in various market structures through the minority game." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2005. http://hub.hku.hk/bib/B34618648.

23

Hosseini, Ehsan. "The analytical modelling of collective capability of human networks." Thesis, Brunel University, 2016. http://bura.brunel.ac.uk/handle/2438/12445.

Abstract:
This thesis proposes an analytical model for estimating and predicting capability in human networks (i.e. work teams). Capability in this context is the ability to utilise the collective inherent and acquired resources of individuals to complete a given task. The motivation for proposing a method of measuring the collective capability of teams is to assist project managers and team builders in allocating and assigning the most capable teams to a project, so as to maximise the likelihood of success. A review of the literature in engineering, the human sciences and economics led to a definition of capability. One of the key findings of this research is that collective capability can be predicted by: (1) the demographic homophily of the members of the team; (2) the diversity of skills that each member brings to the team; (3) the past experience or attainments of the members; and (4) the strength of relationships amongst the members of the team. The influence of these four predictors was investigated through empirical surveys conducted among postgraduate students over a period of two years. The data collected from the surveys were used to assess the correlation between the predictors and the dependent variable using standard statistical methods. The conclusions of the study confirm that there are positive and significant relationships between the independent predictors and the collective capability of project teams. The demographic homophily of the individuals in a team and the strength of their instrumental (task-related) relationships were the two most effective predictors, having the highest effect on the collective capability of the team as a whole. The skills diversity of the members of a group and their previous level of attainment/experience in similar projects also proved to be effective factors, with a lower level of effect, in increasing the capability of the whole team to fulfil the requirements of a pre-defined project.
24

Wu, Yuhao. "Differential Expression Analysis between Microarray and RNA-seq over Analytical Methods across Statistical Models." Case Western Reserve University School of Graduate Studies / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=case1586452969867803.

25

Dowling, Carlin. "The antecedents of appropriate audit support system use /." Connect to thesis, 2006. http://eprints.unimelb.edu.au/archive/00002185.

26

Liu, Dingfei. "An object-oriented approach to structuring multicriteria decision support in natural resource management problems." Doctoral thesis, University of Cape Town, 2001. http://hdl.handle.net/11427/4384.

Abstract:
The undertaking of MCDM (Multicriteria Decision Making) and the development of DSSs (Decision Support Systems) tend to be complex and inefficient, leading to low productivity in decision analysis and DSSs. To this end, this study has developed an approach based on object orientation for MCDM and DSS modelling, with the emphasis on natural resource management. The object-oriented approach provides a philosophy for modelling decision analysis and DSSs in a uniform way, as shown by the diagrams presented in this study. The solving of natural resource management decision problems, the MCDM decision-making procedure and decision-making activities are modelled in an object-oriented way. The macro decision analysis system, its DSS, the decision problem, the decision context, and the entities in the decision-making procedure are represented as "objects". The object-oriented representation of decision analysis also constitutes the basis for the analysis of DSSs.
27

Van, Dyk Theron Van Zyl. "Decision support systems for solving discrete multicriteria decision making problems." Master's thesis, University of Cape Town, 1992. http://hdl.handle.net/11427/14300.

Abstract:
The aim of this study was the design and implementation of an interactive decision support system to assist a single decision maker in reaching a satisfactory decision when faced with a multicriteria decision making problem. There are clearly two components involved in designing such a system, namely the concept of decision support systems (DSS) and the area of multicriteria decision making (MCDM). The multicriteria decision making environment and the definitions of the multicriteria decision making concepts used are discussed in Chapter 1. Chapter 2 gives a brief historical review of MCDM, highlighting the origins of some of the better-known methods for solving MCDM problems, together with a detailed discussion of interactive decision making. Chapter 3 is concerned with the DSS concept, including a historical review, a framework for the design of a DSS, various development approaches and the components constituting a decision support system. In Chapter 4, the possibility of integrating the two concepts, MCDM and DSS, is discussed. A detailed discussion of various methodologies for solving MCDM problems is given in Chapter 5, with specific attention to identifying the methodologies to be implemented in the DSS. Chapter 6 is a theoretical description of the system developed, while Chapter 7 is concerned with the evaluation procedures used for testing the system. A final summary and concluding remarks are given in Chapter 8.
28

Chen, Yanwei. "Analytical and statistical modeling to evaluate effectiveness of stream restoration in reducing stream bank erosion." Related electronic resource: Current Research at SU : database of SU dissertations, recent titles available full text, 2005. http://wwwlib.umi.com/cr/syr/main.

29

Cecchini, Lee-Anne. "Robben Island penguin pressure model: a decision support tool for an ecosystems approach to fisheries management." Master's thesis, University of Cape Town, 2012. http://hdl.handle.net/11427/10212.

Abstract:
The African penguin (Spheniscus demersus) population in southern Africa declined from approximately 575 000 adults at the start of the 20th century to 180 000 adults in the early 1990s. The population is still declining, leading the International Union for Conservation of Nature to upgrade the status of the African penguin to Endangered on the Red List of Threatened Species. This dissertation uses a systems dynamics approach to produce a model incorporating all important pressures. The model is stochastic and spatially explicit, and uses expert opinion where data are not available. It was produced and revised with the help of the Penguin Modelling Group, based at the University of Cape Town, and the modelling process culminated in a workshop where participants experimented with the model themselves. The model in this dissertation is only applicable to the penguin population on Robben Island and, as such, its conclusions cannot necessarily be applied to other penguin colonies.
30

Knox, Kathryn M. G. "Statistical interpretation of a veterinary hospital database : from data to decision support." Thesis, University of Glasgow, 1998. http://theses.gla.ac.uk/6735/.

Abstract:
Research was undertaken to investigate whether data maintained within a veterinary hospital database could be exploited such that important medical information could be realised. At the University of Glasgow Veterinary School (GUVS), a computerised hospital database system, which had maintained biochemistry and pathology data for a number of years, was upgraded and expanded to enable recording of signalment, historical and clinical data for referral cases. Following familiarisation with the computerised database, clinical diagnosis and biochemistry data pertaining to 740 equine cases were extracted. Graphical presentation of the results obtained for each of 18 biochemistry parameters investigated indicated that the distributions of the data were variable. This had important implications with respect to the statistical techniques which were subsequently applied, and also to the appropriateness of the reference range method currently used for interpretation of clinical biochemistry data. A percentile analysis was performed for each of the biochemistry parameters; data were grouped into ten appropriate percentile band intervals; and the corresponding diagnoses tabulated and ranked according to frequency. Adoption of a Bayesian method enabled determination of how many times more likely a diagnosis was than before the biochemistry parameter concentration had been ascertained. The likelihood ratio was termed the "Biochemical Factor". Consequently, a measurement on a parameter, such as urea, could be classified on the percentile scale, and a diagnosis, such as hepatopathy, judged to be less or many times more likely, based on the numerical evaluation of the Biochemical Factor. One issue associated with the interrogation of the equine cases was that the diagnoses were clinical in origin, and, because they may have been made with the assistance of biochemistry data, this may have yielded biased results. Although this was considered unlikely to have affected the findings to a large extent, a database containing biochemistry and post mortem diagnosis data for cattle was also assessed.
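As a worked numerical illustration of the "Biochemical Factor" update described above (figures invented for the example): if 12% of hepatopathy cases fall in a given urea percentile band but only 4% of all cases do, then observing that band makes the diagnosis three times more likely than before.

```python
# Hypothetical worked example of the "Biochemical Factor" (likelihood-ratio) update.
prior = 0.05                # assumed prior probability of hepatopathy
p_band_given_dx = 0.12      # fraction of hepatopathy cases in this urea percentile band
p_band_overall = 0.04       # fraction of all cases in the same band

bf = p_band_given_dx / p_band_overall    # Biochemical Factor = 3.0
posterior = prior * bf                   # Bayes: P(dx | band) = P(dx) * P(band | dx) / P(band)
print(f"BF = {bf:.1f}; P(hepatopathy) rises from {prior:.2f} to {posterior:.2f}")
```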
31

Chan, Ka-wah, and 陳家華. "An analytical study of Hong Kong's private consumption expenditure figures." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1989. http://hub.hku.hk/bib/B31975677.

32

De, Silva Kalumin Amila. "Statistical approaches for milk composition determination using combined near infrared, Raman, conductivity, and refractive index measurements." Thesis, McGill University, 2002. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=78349.

Abstract:
Current practices for routine milk composition determination employ commercial infrared systems. The use of SW-NIR and NIR FT-Raman spectra, coupled with conductivity and refractive index measurements, could lead to more frequent and less costly analysis of fat, lactose and protein in milk.

The present study examines the potential of both SW-NIR absorbance spectrophotometry and NIR FT-Raman spectrophotometry for developing a model to estimate fat, lactose and protein in whole cows' milk. To accomplish this, 79 milk standards spanning the range of composition seen in practice were obtained. NIR spectra were acquired over the wavelength range 700 nm to 1018 nm, and NIR FT-Raman measurements of the milk samples were made between 0 and 3700 cm⁻¹ using a 1064 nm Nd:YAG laser source. Conductivity and refractive index measurements were also obtained for the milk standards.

A partial least squares calibration with leave-N-out cross-validation was performed using the spectra together with conductivity and refractive index to estimate fat, lactose and protein contents. Calibrations were developed using 75% of the milk standards, and the models were validated using an independent test set comprising the remaining 25% of the data, which had been excluded from calibration. A second calibration was conducted using a genetic algorithm approach. (Abstract shortened by UMI.)
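A hedged sketch of the core calibration step (synthetic stand-in data, scikit-learn assumed; not the thesis's actual spectra or model), with leave-N-out cross-validation approximated by k-fold splits of N samples each:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import KFold, cross_val_predict

rng = np.random.default_rng(3)
X = rng.normal(size=(79, 60))                     # stand-in spectra + conductivity/RI columns
true_w = rng.normal(size=60)
y = X @ true_w + rng.normal(scale=0.5, size=79)   # stand-in fat content

pls = PLSRegression(n_components=8)
cv = KFold(n_splits=10, shuffle=True, random_state=0)
y_hat = cross_val_predict(pls, X, y, cv=cv).ravel()
rmsecv = np.sqrt(np.mean((y - y_hat) ** 2))
print(f"RMSECV = {rmsecv:.3f}")
```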
33

Manawadu, Erandika Oshan. "Development of the rural statistical sustainability framework tool." Thesis, Queensland University of Technology, 2011. https://eprints.qut.edu.au/46257/1/Erandika_Manawadu_Thesis.pdf.

Abstract:
It is important to promote a sustainable development approach to ensure that economic, environmental and social developments are maintained in balance. Sustainable development and its implications are not just a global concern; they also affect Australia. In particular, rural Australian communities are facing various economic, environmental and social challenges, so the need for sustainable development in rural regions is becoming increasingly important. To promote sustainable development, proper frameworks, along with associated tools optimised for the specific regions, need to be developed. This will ensure that decisions made for sustainable development are evidence-based rather than resting on subjective opinions. To address these issues, Queensland University of Technology (QUT), through an Australian Research Council (ARC) linkage grant, has initiated research into the development of a Rural Statistical Sustainability Framework (RSSF) to aid sustainable decision making in rural Queensland. This branch of the research developed a decision support tool that will become the integrating component of the RSSF. The tool is built on a web-based platform to allow easy dissemination and quick maintenance and to minimise compatibility issues. It is developed on MapGuide Open Source and follows a three-tier architecture: client tier, web tier and server tier. The tool is interactive and behaves like a familiar desktop application. It can handle and display vector-based spatial data and can give further visual outputs using charts and tables. The data used in the tool were obtained from the QUT research team. Overall, the tool implements four tasks to help in the decision-making process: Locality Classification, Trend Display, Impact Assessment, and Data Entry and Update. It utilises open-source and freely available software and provides for easy extensibility and long-term sustainability.
34

Tang, Yuchun. "Granular Support Vector Machines Based on Granular Computing, Soft Computing and Statistical Learning." Digital Archive @ GSU, 2006. http://digitalarchive.gsu.edu/cs_diss/5.

Abstract:
With the emergence of biomedical informatics, Web intelligence and e-business, new challenges are arising for knowledge discovery and data mining. In this dissertation, a framework named Granular Support Vector Machines (GSVM) is proposed to systematically and formally combine statistical learning theory, granular computing theory and soft computing theory in order to address challenging predictive data modeling problems effectively and/or efficiently, with a specific focus on binary classification. In general, GSVM works in three steps. Step 1 is granulation, to build a sequence of information granules from the original dataset or from the original feature space. Step 2 is modeling a Support Vector Machine (SVM) in some of these information granules where necessary. Finally, step 3 is aggregation, to consolidate the information in these granules at a suitable level of abstraction. A good granulation method for finding suitable granules is crucial for modeling a good GSVM. Under this framework, several granulation algorithms, including the GSVM-CMW (cumulative margin width) algorithm, the GSVM-AR (association rule mining) algorithm, a family of GSVM-RFE (recursive feature elimination) algorithms, the GSVM-DC (data cleaning) algorithm and the GSVM-RU (repetitive undersampling) algorithm, are designed for binary classification problems with different characteristics. Empirical studies in the biomedical domain and many other application domains demonstrate that the framework is promising. As a preliminary step, this dissertation work will be extended in the future to build a Granular Computing based Predictive Data Modeling framework (GrC-PDM), with which hybrid adaptive intelligent data mining systems for high-quality prediction can be created.
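A hedged, minimal caricature of the three steps (granulate, model per granule, aggregate), using k-means granulation and one SVM per granule on synthetic data; this is not any of the thesis's specific GSVM-* algorithms.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=600, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Step 1 (granulation): partition the training data into granules.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X_tr)

# Step 2 (modeling): fit one SVM per granule, if it holds both classes.
models = {}
for g in range(3):
    mask = km.labels_ == g
    if len(np.unique(y_tr[mask])) == 2:
        models[g] = SVC(kernel="rbf", C=1.0).fit(X_tr[mask], y_tr[mask])

# Step 3 (aggregation): route each test point to its granule's model,
# falling back to the granule's majority class if no model was trained.
g_te = km.predict(X_te)
pred = np.array([
    models[g].predict(x.reshape(1, -1))[0] if g in models
    else np.bincount(y_tr[km.labels_ == g]).argmax()
    for x, g in zip(X_te, g_te)
])
print("accuracy:", (pred == y_te).mean())
```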
35

Sotero, Charity Faith Gallemit. "Statistical Support Algorithms for Clinical Decisions and Prevention of Genetic-related Heart Disease." Thesis, California State University, Long Beach, 2018. http://pqdtopen.proquest.com/#viewpdf?dispub=10751893.

Abstract:
Drug-induced long QT syndrome (diLQTS) can lead to seemingly healthy patients experiencing cardiac arrest, specifically Torsades de Pointes (TdP), which may lead to death. Clinical decision support systems (CDSS) assist better prescribing of drugs, in part by issuing alerts that warn of a drug's potential harm. LQTS may be either genetic or acquired. Thirteen distinct genetic mutations have already been identified for hereditary LQTS. Since hereditary and acquired LQTS share similar clinical symptoms, it is reasonable to assume that they both have some sort of genetic component. The goal of this study is to identify genetic risk markers for diLQTS and TdP. These markers will be used to develop a statistical DSS for clinical applications and the prevention of genetic-related heart disease. We use data from a genome-wide association study conducted by the Pharmacogenomics of Arrhythmia Therapy subgroup of the Pharmacogenetics Research Network, focused on subjects with a history of diLQTS or TdP after taking medication; the data were made available for general research use by the National Center for Biotechnology Information (NCBI). The data consist of 831 total patients, including 172 diLQTS and TdP case patients. Out of 620,901 initial markers, variable screening was done by a preliminary t-test (α = 0.01), and the resulting feasible set of 5,754 markers associated with diLQTS was used to create an appropriate predictive model. The methods used were ensemble logistic regression, elastic net, random forests, artificial neural networks, and linear discriminant analysis. Using all 5,754 markers, accuracy ranged from 76.84% to 90.29%, with artificial neural networks as the most accurate model. Finally, variable importance algorithms were applied to the ensemble logistic regression, elastic net and random forest methods to extract a subset of genetic markers suitable for building the proposed DSS. Using a subset of 61 markers, accuracy ranged from 76.59% to 87.00%, with ensemble logistic regression as the most accurate model; using a subset of 22 markers, accuracy ranged from 74.24% to 82.87%, with a single-hidden-layer neural network (using the markers extracted from the ensemble bagged logistic model) as the most accurate model.
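A hedged sketch of the screen-then-model pattern described above (synthetic genotypes with planted signal; not the study's data or final models). In practice the screening step should be nested inside the cross-validation to avoid optimistic bias; it is kept outside here only for brevity.

```python
import numpy as np
from scipy.stats import ttest_ind
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
n, p = 831, 5000                                   # stand-in for patients x markers
X = rng.integers(0, 3, size=(n, p)).astype(float)  # 0/1/2 genotype coding
risk = X[:, :10].sum(axis=1)                       # plant signal in 10 markers
y = (risk + rng.normal(scale=2.0, size=n) > risk.mean()).astype(int)

# Preliminary screening: keep markers whose two-sample t-test gives p < 0.01.
_, pvals = ttest_ind(X[y == 1], X[y == 0], axis=0)
X_sel = X[:, pvals < 0.01]
print(f"{X_sel.shape[1]} markers pass screening")

# Fit one of the candidate model families on the screened markers.
rf = RandomForestClassifier(n_estimators=300, random_state=0)
print("CV accuracy:", cross_val_score(rf, X_sel, y, cv=5).mean().round(3))
```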
APA, Harvard, Vancouver, ISO, and other styles
36

Cardamone, Dario. "Support Vector Machine a Machine Learning Algorithm." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2017.

Find full text
Abstract:
This thesis examines the Support Vector Machine classification algorithm. In particular, it considers its formulation as a Mixed Integer Program optimization problem for the supervised binary classification of a data set.
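For orientation, one common way to cast SVM training as a Mixed Integer Program introduces a binary variable per training point to count margin violations (a sketch of a generic big-M formulation; the thesis's exact model may differ):

```latex
\begin{aligned}
\min_{w,\,b,\,z} \quad & \tfrac{1}{2}\lVert w \rVert^{2} + C \sum_{i=1}^{n} z_i \\
\text{s.t.} \quad & y_i \left( w^{\top} x_i + b \right) \ge 1 - M z_i, \qquad i = 1, \dots, n, \\
& z_i \in \{0, 1\}, \qquad i = 1, \dots, n,
\end{aligned}
```

where M is a large constant and C trades margin width against the number of violated margin constraints.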
APA, Harvard, Vancouver, ISO, and other styles
37

Higgins, Stephen G., and Kristian L. Wahlgren. "An analytical history of provider organization support within navy enterprise: Naval Supply Systems Command." Monterey, California. Naval Postgraduate School, 2011. http://hdl.handle.net/10945/10619.

Full text
Abstract:
MBA Professional Report. In an increasingly constrained resource environment, the enterprise approach was introduced in the U.S. Navy to empower stakeholders across multiple commands to take a holistic view of objectives and processes and become unified in achieving the required output with greater efficiency. As a member of the Navy Provider Enterprise, Naval Supply Systems Command (NAVSUP) is responsible for providing services, equipment and other resources to the Warfare Enterprises, with a focus on future readiness at minimal cost. This project focuses on enterprise practices within NAVSUP. It analyzes how the NAVSUP Enterprise was implemented and designed to function within the Navy Provider Enterprise construct. It also describes NAVSUP's execution of the organizational change process and analyzes to what extent change is occurring. The results of this thesis reveal that the NAVSUP Provider Enterprise is achieving positive organizational change through the implementation of collaborative enterprise management practices. The thesis also reveals identifiable organizational challenges and change issues that inhibit the achievement of NAVSUP Provider Enterprise goals. These findings are used to develop a series of recommendations to assist leadership in further aligning NAVSUP Provider Enterprise actions with the change objectives.
APA, Harvard, Vancouver, ISO, and other styles
38

Casadei, Enrico <1990&gt. "Analytical instrumental approaches and strategies to support the sensory assessment of virgin olive oils." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amsdottorato.unibo.it/9712/1/Casadei%20Enrico_Thesis.pdf.

Full text
Abstract:
Initially, this Ph.D. project provided an overview of the most common and emerging types of fraud, and of possible countermeasures, in the olive oil sector. Furthermore, possible weaknesses in the current conformity check system for olive oil were highlighted. Among these, although organoleptic assessment is a fundamental tool for establishing the quality grade of virgin olive oils (VOOs), the scientific community has pointed out some of its drawbacks. In particular, the application of instrumental screening methods to support the panel test could reduce the workload of sensory panels and the cost of this analysis (e.g. for industries, distributors, and public and private control laboratories), permitting an increase in the number and efficiency of controls. On this basis, a research line called "Quantitative Panel Test" is one of the main expected outcomes of the OLEUM project and is partially discussed in this doctoral dissertation. In this framework, analytical activities were carried out within this PhD project to develop and validate analytical protocols for studying the profiles of volatile compounds (VOCs) in the VOO headspace. Specifically, two chromatographic approaches to determine VOCs, one targeted and one semi-targeted, were investigated in this doctoral thesis. The results obtained will allow the possible establishment of concentration limits and ranges for selected volatile markers, related to fruitiness and defects, with the aim of supporting the panel test in the commercial categorization of VOOs. In parallel, a rapid instrumental screening method based on the analysis of VOCs was investigated to assist the panel test through a fast pre-classification of VOO samples at a known level of probability, thus increasing the efficiency of quality control.
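As an illustration of the pre-classification idea (a hypothetical sketch assuming scikit-learn; the features, model and probability cut-off are assumptions, not OLEUM protocol values), a probabilistic classifier trained on VOC concentrations could grade only the samples it classifies with high confidence and defer the rest to the sensory panel:

```python
# Hedged sketch: probabilistic pre-screening of VOO samples by VOC profile.
# Synthetic data stands in for measured VOC concentrations.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=300, n_features=12, random_state=2)  # VOC profiles
clf = LogisticRegression(max_iter=1000).fit(X, y)

proba = clf.predict_proba(X)[:, 1]
confident = (proba > 0.9) | (proba < 0.1)   # assumed probability cut-off
print(f"auto-graded: {confident.mean():.0%}; deferred to panel: {(~confident).mean():.0%}")
```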
APA, Harvard, Vancouver, ISO, and other styles
39

Kahle, Thomas. "On Boundaries of Statistical Models." Doctoral thesis, Universitätsbibliothek Leipzig, 2010. http://nbn-resolving.de/urn:nbn:de:bsz:15-qucosa-37952.

Full text
Abstract:
In the thesis "On Boundaries of Statistical Models" problems related to the description of probability distributions with zeros, lying in the boundary of a statistical model, are treated. The distributions considered are joint distributions of finite collections of finite discrete random variables. Owing to this restriction, statistical models are subsets of finite-dimensional real vector spaces. The support set problem for exponential families, the main class of models considered in the thesis, is to characterize the possible supports of distributions in the boundaries of these statistical models. It is shown that this problem is equivalent to a characterization of the face lattice of a convex polytope, called the convex support. The main tool for treating questions related to the boundary are implicit representations. Exponential families are shown to be sets of solutions of binomial equations, connected to an underlying combinatorial structure called an oriented matroid. Under an additional assumption these equations are polynomial, and one is placed in the setting of commutative algebra and algebraic geometry. In this case one recovers results from algebraic statistics. The combinatorial theory of exponential families using oriented matroids makes the established connection between an exponential family and its convex support completely natural: both are derived from the same oriented matroid.
The second part of the thesis deals with hierarchical models, which are a special class of exponential families constructed from simplicial complexes. The main technical tool for their treatment is the so-called elementary circuit. After their introduction, elementary circuits are used to derive properties of the implicit representations of hierarchical models. Each elementary circuit gives an equation holding on the hierarchical model, and these equations are shown to be the "simplest", in the sense that the smallest degree among the equations corresponding to elementary circuits gives a lower bound on the degree of all equations characterizing the model. Translating this result back to polyhedral geometry yields a neighborliness property of marginal polytopes, the convex supports of hierarchical models. Elementary circuits of small support are related to independence statements holding between the random variables whose joint distributions the hierarchical model describes. Models for which the complete set of circuits consists of elementary circuits are shown to be described by totally unimodular matrices. The thesis also contains an analysis of the case of binary random variables. In this special situation, marginal polytopes can be represented as the convex hulls of linear codes. Among the results here is a classification of full-dimensional linear code polytopes in terms of their subgroups.
If represented by polynomial equations, exponential families are the varieties of binomial prime ideals. The third part of the thesis describes tools to treat models defined by not necessarily prime binomial ideals. It follows from Eisenbud and Sturmfels' results on binomial ideals that these models are unions of exponential families, and apart from solving the support set problem for each of these, one is faced with finding the decomposition. The thesis discusses algorithms for the specialized treatment of binomial ideals, exploiting their combinatorial nature. The provided software package Binomials.m2 is shown to be able to compute very large primary decompositions, yielding a counterexample to a recent conjecture in algebraic statistics.
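For a concrete instance of the binomial equations mentioned above (a standard textbook example, not taken from the thesis itself): the independence model of two binary random variables is cut out of the probability simplex by a single binomial, and distributions with zeros lying in its boundary satisfy the same equation.

```latex
% Independence model of two binary variables as the solution set of a binomial:
p_{00}\,p_{11} - p_{01}\,p_{10} = 0,
\qquad p_{ij} \ge 0, \qquad \sum_{i,j} p_{ij} = 1 .
```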
APA, Harvard, Vancouver, ISO, and other styles
40

Janas, Marek J. "Change of support correction in mineral resource estimation." Thesis, Edith Cowan University, Research Online, Perth, Western Australia, 2001. https://ro.ecu.edu.au/theses/1051.

Full text
Abstract:
The success of any mining operation greatly, if not entirely, depends on the accuracy of prediction of recoverable mining reserves. However, prior to mining, knowledge about the distribution of the Selective Mining Unit (SMU) is limited. The SMU represents the volume on which extraction of ore takes place and on which recoverable mining reserves are based. Realistic recoverable reserve estimates can be obtained from the grade-tonnage curve that corresponds to the unknown distribution of the SMU rather than to the distribution of exploration sample data. In general, if the reserve calculation at a given cut-off grade is based upon exploration drill samples, with much smaller support than the SMU, then there is a high probability of incorrectly estimating the tonnage and grade of ore, and this can have serious implications for the economics of the mining project. Various techniques for correcting for the change of support of data, in other words the change of the volume on which the data are defined, enable more accurate estimates of the distribution of the variable of interest (that is, the grade of a precious metal). The fact that the volume (support) represented by the variable is taken into account makes the estimates more reliable and, as shown in the study, closer to reality. The distribution of the SMU is derived from the known distribution of samples by applying a correction model. Among these techniques are two recent methods: a conditional simulation method detailed by I. Glacken and a kriging method due to A. Arik. This study examines these two methods and compares them with the standard techniques. The methods are applied to real data acquired from the Boddington Gold Mine in the south-west of Western Australia. In addition to accuracy, the practicality and simplicity of implementation of each method are also discussed.
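To make the notion of change-of-support correction concrete, the classical affine correction, one of the standard techniques of the kind this study compares against, shrinks sample grades toward the mean so that the corrected variable has the smaller variance of the SMU-support distribution (a sketch of the standard geostatistical formula, not of Glacken's or Arik's methods):

```latex
Z_v = m + \sqrt{f}\,\bigl(Z - m\bigr),
\qquad
f = \frac{\operatorname{Var}(Z_v)}{\operatorname{Var}(Z)} \le 1 ,
```

where m is the deposit mean and the variance reduction factor f follows from Krige's relationship: the variance of samples within the deposit equals the variance of SMU grades within the deposit plus the variance of samples within an SMU.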
APA, Harvard, Vancouver, ISO, and other styles
41

Hoffman, Melissa. "Quantitative Proteomics to Support Translational Cancer Research." Scholar Commons, 2018. https://scholarcommons.usf.edu/etd/7303.

Full text
Abstract:
Altered signaling pathways, which are mediated by post-translational modifications and changes in protein expression levels, are key regulators of cancer initiation, progression, and therapeutic escape. Many aspects of cancer progression, including early carcinogenesis and immediate response to drug treatment, are beyond the scope of genetic profiling and non-invasive monitoring techniques. Global protein profiling of cancer cell line models, tumor tissues, and biofluids (e.g. serum or urine) using mass spectrometry-based proteomics produces novel biological insights, which support improved patient outcomes. Recent technological advances resulting in next-generation mass spectrometry instrumentation and improved bioinformatics workflows have led to unprecedented measurement reproducibility as well as increased depth and coverage of the human proteome. It is now possible to interrogate the cancer proteome with quantitative proteomics to identify prognostic cancer biomarkers, stratify patients for treatment, identify new therapeutic targets, and elucidate drug resistance mechanisms. There are, however, numerous challenges associated with protein measurements. Biological samples have a high level of complexity and wide dynamic range, which is even more pronounced in samples used for non-invasive disease monitoring, such as serum. Cancer biomarkers are generally found in low abundance compared to other serum proteins, particularly at early stages of disease where cancer detection would make the biggest impact on improving patient survival. Additionally, the large-scale datasets generally require bioinformatics expertise to produce useful biological insights. These difficulties converge to create obstacles for downstream clinical translation. This dissertation research demonstrates how proteomics is applied to develop new resources and generate novel workflows to improve protein quantification in complex biosamples, which could improve translation of cancer research to benefit patient care. The studies described in this dissertation move from assessment of quantitative mass spectrometry platforms, to analytical assay development and validation, and end with personalized biomarker development applied to patient samples. As an example, four different quantitative mass spectrometry acquisition platforms are explored, comparing their ability to quantify low-abundance peptides in a complex background. Lung cancers frequently have aberrant signaling resulting in increased kinase activity and targetable signaling hubs; kinase inhibitors have been successfully developed and implemented clinically. Therefore, the change in amounts of kinase peptides in the complex background of peptides from all ATP-utilizing enzymes in a lung cancer cell line model after kinase inhibitor treatment was selected as a model system. Traditional mass spectrometry platforms, data dependent acquisition and multiple reaction monitoring, are compared to the two newer methods, data independent acquisition and parallel reaction monitoring. Relative quantification is performed across the four methods, and analytical performance is assessed together with downstream applications, including drug target identification and elucidation of signaling changes. Liquid chromatography – multiple reaction monitoring (LC-MRM) was selected for development of multiplexed quantitative assays based on superior sensitivity and fast analysis times, allowing for larger peptide panels.
Method comparison results also provide guidelines for quantitative proteomics platform selection for translational cancer researchers. Next, a multiplexed quantitative LC-MRM assay targeting a panel of 30 RAS signaling proteins was developed and described. Over 30% of all human cancers have a RAS mutation, and these cancers are generally aggressive with limited treatment options, leading to poor patient prognosis. Many targeted inhibitors have successfully shut down RAS signaling, leading to tumor regression; however, acquired drug resistance is common. The multiplexed LC-MRM assays characterized and validated are a publicly available resource for cancer researchers to interrogate the RAS signal transduction network. Feasibility has been demonstrated in cell line models in order to identify signaling changes that confer BRAF inhibitor resistance and biomarkers of sensitivity to treatment. This analytical LC-MRM panel could support meaningful development of new therapeutic options and identification of companion biomarkers, with the end goal of improving patient outcomes. Multiplexed LC-MRM assays developed for personalized disease biomarkers using an integrated multi-omics approach are described for Multiple Myeloma, an incurable malignancy with poor patient outcomes. This disease is characterized by clonal expansion of the plasma cells in the bone marrow, which secrete a monoclonal immunoglobulin, or M-protein. Clinical treatment decisions are based on multiple semi-quantitative assays that require manual evaluation. In the clinic, minimal residual disease quantification methods, including multi-parameter flow cytometry and immunohistochemistry, are applied to bone marrow aspirates, which is a highly invasive technique that does not provide a systemic evaluation of the disease. To address these issues, we hypothesized that unique variable region peptides could be identified and LC-MRM assays developed specific to each patient's M-protein to improve specificity and sensitivity in non-invasive disease monitoring. A proteogenomics approach was used to design personalized assays for each patient to monitor their disease progression, which demonstrate improved specificity and up to a 500-fold increase in sensitivity compared to current clinical methods. Assays can be developed from marrow aspirates collected when the patient was at residual disease stage, which is useful if no sample with high disease burden is available. The patient-specific tests are also multiplexed with constant region peptide assays that monitor all immunoglobulin heavy and light chain classes, which could reduce analysis to a single test. In conclusion, highly sensitive patient-specific assays have been developed that could change the paradigm for patient evaluation and clinical decision-making, increasing the ability of clinicians to continue first line therapy in the hopes of achieving a cure, or to intervene at an earlier time point in disease recurrence. This study also provides a blueprint for future development of personalized diagnostics, which could be applied to biomarkers of other cancer types. Overall, these studies demonstrate how quantitative proteomics can be used to support translational cancer research, from the impact of different mass spectrometry platforms on elucidating signaling changes and drug targets to the characterization of multiplexed LC-MRM assays applied to cell line models for translational research purposes and in patient serum samples optimized for clinical translation.
We believe that mass spectrometry-based proteomics is poised to play a pivotal role in personalized diagnostics to support implementation of precision medicine, an effort that will improve the quality and efficiency of patient care.
APA, Harvard, Vancouver, ISO, and other styles
42

Wang, Jiong. "Analytical studies on the force-induced phase transitions in slender shape memory alloy cylinders/layers." Access full text, abstract and table of contents, 2009. http://libweb.cityu.edu.hk/cgi-bin/ezdb/thesis.pl?phd-ma-b23750546f.pdf.

Full text
Abstract:
Thesis (Ph.D.)--City University of Hong Kong, 2009. "Submitted to Department of Mathematics in partial fulfillment of the requirements for the degree of Doctor of Philosophy." Includes bibliographical references (leaves [214]-224).
APA, Harvard, Vancouver, ISO, and other styles
43

Nadeem, Mohammed. "A statistical quality control support system to facilitate acceptance sampling and control chart procedures." Ohio : Ohio University, 1994. http://www.ohiolink.edu/etd/view.cgi?ohiou1178137590.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Hechter, Trudie. "A comparison of support vector machines and traditional techniques for statistical regression and classification." Thesis, Stellenbosch : Stellenbosch University, 2004. http://hdl.handle.net/10019.1/49810.

Full text
Abstract:
Thesis (MComm)--Stellenbosch University, 2004. Since its introduction in Boser et al. (1992), the support vector machine has become a popular tool in a variety of machine learning applications. More recently, the support vector machine has also been receiving increasing attention in the statistical community as a tool for classification and regression. In this thesis support vector machines are compared to more traditional techniques for statistical classification and regression. The techniques are applied to data from a life assurance environment for a binary classification problem and a regression problem. In the classification case the problem is the prediction of policy lapses using a variety of input variables, while in the regression case the goal is to estimate the income of clients from these variables. The performance of the support vector machine is compared to that of discriminant analysis and classification trees in the case of classification, and to that of multiple linear regression and regression trees in regression; it is found that support vector machines generally perform well compared to the traditional techniques.
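A minimal sketch of this kind of comparison (assuming scikit-learn; synthetic data stands in for the life-assurance data, which is not public):

```python
# Compare an SVM against a classification tree on the same binary task,
# using cross-validated accuracy as a simple yardstick.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=3)
svm = make_pipeline(StandardScaler(), SVC(kernel="rbf"))   # scaling matters for SVMs
tree = DecisionTreeClassifier(max_depth=5, random_state=0)

for name, model in [("SVM", svm), ("tree", tree)]:
    print(name, cross_val_score(model, X, y, cv=5).mean().round(3))
```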
APA, Harvard, Vancouver, ISO, and other styles
45

Ghodsypour, Seyed Hassan. "A decision support system for supplier selection integrating analytical hierarchy process with operations research methods." Thesis, University of Nottingham, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.337182.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Lucas, Tamara J. H. "Formulation and solution of hierarchical decision support problems." Thesis, Georgia Institute of Technology, 1995. http://hdl.handle.net/1853/17291.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Kahle, Thomas. "On Boundaries of Statistical Models." Doctoral thesis, Max-Planck-Institut für Mathematik in den Naturwissenschaften, 2009. https://ul.qucosa.de/id/qucosa%3A11003.

Full text
Abstract:
In the thesis "On Boundaries of Statistical Models" problems related to the description of probability distributions with zeros, lying in the boundary of a statistical model, are treated. The distributions considered are joint distributions of finite collections of finite discrete random variables. Owing to this restriction, statistical models are subsets of finite-dimensional real vector spaces. The support set problem for exponential families, the main class of models considered in the thesis, is to characterize the possible supports of distributions in the boundaries of these statistical models. It is shown that this problem is equivalent to a characterization of the face lattice of a convex polytope, called the convex support. The main tool for treating questions related to the boundary are implicit representations. Exponential families are shown to be sets of solutions of binomial equations, connected to an underlying combinatorial structure called an oriented matroid. Under an additional assumption these equations are polynomial, and one is placed in the setting of commutative algebra and algebraic geometry. In this case one recovers results from algebraic statistics. The combinatorial theory of exponential families using oriented matroids makes the established connection between an exponential family and its convex support completely natural: both are derived from the same oriented matroid.
The second part of the thesis deals with hierarchical models, which are a special class of exponential families constructed from simplicial complexes. The main technical tool for their treatment is the so-called elementary circuit. After their introduction, elementary circuits are used to derive properties of the implicit representations of hierarchical models. Each elementary circuit gives an equation holding on the hierarchical model, and these equations are shown to be the "simplest", in the sense that the smallest degree among the equations corresponding to elementary circuits gives a lower bound on the degree of all equations characterizing the model. Translating this result back to polyhedral geometry yields a neighborliness property of marginal polytopes, the convex supports of hierarchical models. Elementary circuits of small support are related to independence statements holding between the random variables whose joint distributions the hierarchical model describes. Models for which the complete set of circuits consists of elementary circuits are shown to be described by totally unimodular matrices. The thesis also contains an analysis of the case of binary random variables. In this special situation, marginal polytopes can be represented as the convex hulls of linear codes. Among the results here is a classification of full-dimensional linear code polytopes in terms of their subgroups.
If represented by polynomial equations, exponential families are the varieties of binomial prime ideals. The third part of the thesis describes tools to treat models defined by not necessarily prime binomial ideals. It follows from Eisenbud and Sturmfels' results on binomial ideals that these models are unions of exponential families, and apart from solving the support set problem for each of these, one is faced with finding the decomposition. The thesis discusses algorithms for the specialized treatment of binomial ideals, exploiting their combinatorial nature. The provided software package Binomials.m2 is shown to be able to compute very large primary decompositions, yielding a counterexample to a recent conjecture in algebraic statistics.
APA, Harvard, Vancouver, ISO, and other styles
48

Wang, Ni. "Statistical Learning in Logistics and Manufacturing Systems." Diss., Georgia Institute of Technology, 2006. http://hdl.handle.net/1853/11457.

Full text
Abstract:
This thesis focuses on developing statistical methodology in reliability and quality engineering to assist decision-making at the enterprise, process, and product levels. In Chapter II, we propose a multi-level statistical modeling strategy to characterize data from spatial logistics systems. The model can support business decisions at different levels. The information available from higher hierarchies is incorporated into the multi-level model as constraint functions for lower hierarchies. The key contributions include proposing top-down multi-level spatial models, which improve the estimation accuracy at lower levels, and applying spatial smoothing techniques to solve facility location problems in logistics. In Chapter III, we propose methods for modeling system service reliability in a supply chain that may be disrupted by uncertain contingent events. This chapter applies an approximation technique for developing first-cut reliability analysis models. The approximation relies on multi-level spatial models to characterize patterns of store locations and demands. The key contributions in this chapter are to bring statistical spatial modeling techniques to approximate store location and demand data, and to build system reliability models covering various scenarios of DC location designs and DC capacity constraints. Chapter IV investigates the power law process, which has proved to be a useful tool in characterizing the failure process of repairable systems. This chapter presents a procedure for detecting and estimating a mixture of conforming and nonconforming systems. The key contributions in this chapter are to investigate the properties of parameter estimation in mixture repair processes, and to propose an effective way to screen out nonconforming products. The key contributions in Chapter V are to propose a new method to analyze heavily censored accelerated life testing data, and to study its asymptotic properties. This approach flexibly and rigorously incorporates distribution assumptions and regression structures into estimating equations in a nonparametric estimation framework. Derivations of the asymptotic properties of the proposed method provide an opportunity to compare its estimation quality to commonly used parametric MLE methods when the regression model is mis-specified.
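For reference, the power law process investigated in Chapter IV models the failures of a repairable system as a nonhomogeneous Poisson process with intensity (the standard form; the thesis's mixture-detection procedure builds on it):

```latex
\lambda(t) = \frac{\beta}{\theta}\left(\frac{t}{\theta}\right)^{\beta-1},
\qquad \beta, \theta > 0 ,
```

with β < 1 indicating reliability growth, β = 1 reducing to a homogeneous Poisson process, and β > 1 indicating deterioration; a mixture of conforming and nonconforming systems then corresponds to heterogeneity in these parameters across units.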
APA, Harvard, Vancouver, ISO, and other styles
49

Ozdemir, Murat, and Kadir Ozyurek. "An analysis of the integration of decision-making modeling with statistical/quantitative background for master's level analytical courses." Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2000. http://handle.dtic.mil/100.2/ADA380828.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Ozdemir, Murat, and Kadir Ozyurek. "An analysis of the integration of decision-making modeling with statistical/quantitative background for master's level analytical courses." Thesis, Monterey, California. Naval Postgraduate School, 2000. http://hdl.handle.net/10945/9304.

Full text
Abstract:
The purpose of this thesis is to integrate statistical/quantitative background material with master's level analytical courses. The thesis first identifies the requirements for management education in terms of AACSB and NASPAA standards. Then, based on a comparative analysis of the country's top master's of business administration (MBA) programs and the Naval Postgraduate School's current Systems Management curricula, together with a survey conducted among SM faculty members, it integrates decision-making modeling with statistical/quantitative background material for master's level analytical courses. The structure of the MS in Management at NPS, while satisfying the requirements of both AACSB and NASPAA, is similar to the top management schools' MBA programs in the United States. However, the top management schools' statistical/quantitative course sequence generally comprises four courses, providing more statistical/quantitative background material than the three at NPS. Additionally, the contents of these three courses are not offered in adequate depth and some topics are duplicated. A new sequence and new contents for these courses are proposed based on the faculty survey.
APA, Harvard, Vancouver, ISO, and other styles