
Dissertations / Theses on the topic 'Established methods'


Consult the top 50 dissertations / theses for your research on the topic 'Established methods.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Carlsen, Karen Marie. "Human parvovirus B19 erythrovirus: methods established for virological and diagnostic aspects." Copenhagen: Blackwell Munksgaard, 2006. http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&doc_number=014982739&line_number=0001&func_code=DB_RECORDS&service_type=MEDIA.

2

Lyman, Seth Neeley. "Investigation of atmospheric mercury concentrations and dry deposition rates using established and novel methods." abstract and full text PDF (UNR users only), 2009. http://0-gateway.proquest.com.innopac.library.unr.edu/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:3369542.

3

Cosgriff-Hernandez, Meghan-Tomasita JuRi. "Histomorphometric Estimation of Age at Death Using the Femoral Cortex: A Modification of Established Methods." The Ohio State University, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=osu1338361172.

4

Stålheim, Jessica. "Comparative study of established test methods for aggregate strength and durability of Archean rocks from Botswana." Thesis, Uppsala universitet, Institutionen för geovetenskaper, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-221250.

Abstract:
At present, river sand is used for building roads and as raw material for concrete in Botswana. River sand is a finite resource and important to preserve, as it acts as natural water purification, a groundwater aquifer and protection against soil erosion. Mining of bedrock may be a good alternative, replacing river sand with crushed rock (aggregates) in concrete and as road material. The main purpose of this thesis was to determine whether rock grain size can be used as a parameter to indicate durability and rock strength. It was also of interest to find out whether grain size correlates with established technical analyses and strength test methods. This knowledge can be used as a prospecting tool when searching for new quarry sites in the future. In this master's thesis, rock samples from the Gaborone granite complex were analysed to examine how established test methods and mineral grain size correspond with rock strength. By comparing technical properties (Los Angeles (LA) value, aggregate crushing value (ACV), aggregate impact value (AIV) and 10 percent fines aggregate crushing test (10% FACT)) with quantitative analyses (mineral grain size and mineral grain size distribution), it is possible to determine how mineral grain size corresponds to rock strength. Generally, the results show that finer-grained granites have better technical properties than coarser-grained granites. The calculated mean grain size shows a weak negative correlation with the ACV value, and positive correlations with the LA, AIV and 10% FACT values. The best correlations are seen between mean grain size and LA values (R² = 0.61) and AIV values (R² = 0.58). A low mean grain size tends to give better technical properties in the form of lower LA and AIV values. The cumulative distribution curve shows that a high concentration of very fine or fine material tends to contribute to a lower LA value. The results indicate that equigranular rocks with a low mean grain size contribute to good technical properties, but for unevenly grained rocks more factors must be taken into account to estimate technical properties.
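The grain-size-versus-strength figures quoted above (e.g. R² = 0.61 between mean grain size and LA value) are ordinary correlation statistics. As a hedged illustration only (the data below are invented, not Stålheim's measurements), the following Python sketch shows how such an R² is computed:

```python
import numpy as np

# Hypothetical data: mean grain size (mm) and Los Angeles (LA) value
# for a handful of granite samples -- NOT the thesis measurements.
grain_size = np.array([0.8, 1.2, 1.9, 2.5, 3.1, 3.8])
la_value = np.array([18.0, 21.5, 24.0, 23.5, 28.0, 30.5])

r = np.corrcoef(grain_size, la_value)[0, 1]  # Pearson correlation coefficient
print(f"r = {r:.2f}, R^2 = {r**2:.2f}")      # R^2 as reported in the abstract
```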
5

Weir, Christopher John. "Informed clinical management of acute stroke : use of established statistical methods and the development of an expert system." Thesis, University of Glasgow, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.360281.

6

Beisheim, Maja, and Charline Langner. "Lean Startup as a Tool for Digital Business Model Innovation : Enablers and Barriers for Established Companies." Thesis, Jönköping University, IHH, Företagsekonomi, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-52579.

Abstract:
Background: The rapidly changing world of digital technologies forces many companies to undertake a digital shift by transforming existing business models into digital business models to achieve sustainable value creation and value capture. Especially for established companies that were successful leaders before the dot-com bubble (1995-2000) and whose business models have been threatened by the emergence of digital technologies, there is a need for a digital shift. We refer to this digitization of business models as digital business model innovation. However, the adoption and implementation of digital technologies often require tremendous changes and can thus be challenging for established companies. Therefore, agile methods and business experimentation have become important strategic elements and are used to generate and test novel business models quickly. We introduce lean startup as an agile method for digital business model innovation, which has proven successful in digital entrepreneurship; further empirical investigation is required into how lean startup can be used in established companies for successful digital business model innovation. Purpose: The purpose of our study is to identify enablers and barriers of lean startup as a tool for digital business model innovation in established companies. We thus propose a framework showing how established companies can succeed in digital business model innovation by using lean startup. Method: We conducted exploratory, qualitative research based on grounded theory, following an abductive approach. Using a non-probability, purposive sampling strategy, we gathered our empirical data through ten semi-structured interviews with experts in lean startup and digital business model innovation, working in or with established companies shifting their business models towards digital business models. Grounded analysis gave us an in-depth understanding of how lean startup is used in practice, as well as of the barriers and enablers established companies encounter. Conclusion: We emphasize that successful use of lean startup for digital business model innovation rests on effective (1) lean startup management, appropriate (2) organizational structures, a fitting (3) culture, and dedicated (4) corporate governance, all of which require and build on solid (5) methodical competence across the entire organization. Furthermore, (6) external influences such as market conditions, the role of competition, and governance rules indirectly affect the use of lean startup as a tool for digital business model innovation.
7

Farrah, John Alfred. "Measurement of mechanical properties of the skin in lower limb chronic venous disease compared to established non-invasive methods of assessment." Thesis, University College London (University of London), 1998. http://discovery.ucl.ac.uk/1318062/.

Abstract:
Chronic venous disease (CVD) of the lower limbs is a major problem in the western world with 1% of the adult population estimated to be affected at any one time. The clinical sequelae of CVD of the lower limbs range from oedema, haemosiderosis and pigmentation, to gross lipodermatosclerosis (LDS) and venous ulceration. The site most commonly affected is the gaiter area of the lower limb. The extent and severity of venous disease can be assessed by clinical and physiological methods which include duplex ultrasonography and plethysmography. Tissue oedema can be assessed by volumetric or circumferential measurements and venous ulcers may be quantified by area measurements and response to treatment in ulcer healing studies. In the vast majority of patients a spectrum of skin changes precedes venous ulceration. At present, there is no standardised objective method of assessing the degree of skin change in these patients, so that the response to treatment can be objectively monitored. I have developed a tissue tonometer and standardised the methodology for the objective assessment and quantification of the skin changes seen in patients. The tissue tonometer is a simple non-invasive instrument which uses a sensing device that detects the movements of a loaded plunger placed on the skin. The movement of the plunger is dependent on the mechanical properties of the skin and subcutaneous tissue. The instrument is positioned on the gaiter region of the leg with the subject in the supine position. The movement of the plunger into the tissues is recorded and analysed by a computer. The data obtained from the tonometer were analysed as distance and rate constant parameters. A simple mathematical model using spring and dashpot constants was also applied to see if it fitted the data. Skin compliance was investigated in normal control subjects and patients with varying severity of skin changes due to CVD, clinically classified according to the CEAP (Clinical, (A)Etiological, Anatomical and Pathophysiological) method. There was a significant reduction in skin compliance in patients with clinically severe LDS as compared to normal controls and patients with pigmentation alone or oedema without any clinical evidence of skin change. I further investigated the correlation between the recently introduced CEAP method of classification and scoring of chronic venous disease of the lower limbs with the tissue tonometry findings and parameters obtained with duplex ultrasonography, air plethysmography and photoplethysmography. Tissue tonometry provides a standardised objective means of assessing the severity of skin change in CVD which may prove to be useful in evaluating response to a particular treatment and comparing data from different centres. The deterioration of the venous physiology shown by blood flow measuring techniques correlates poorly with the clinical sequelae of venous disease, whether assessed by a trained observer or measured by the tonometer. Patients show a wide range of sensitivity to venous valvular incompetence, suggesting that factors related to the tissue response to venous hypertension are crucial in determining which patients develop venous ulceration.
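The abstract mentions "a simple mathematical model using spring and dashpot constants" without specifying it. A standard single spring-dashpot (Kelvin-Voigt) element fitted to creep data of this kind is, as an illustrative sketch only,

\[
F_0 \;=\; k\,x(t) + \eta\,\dot{x}(t)
\quad\Longrightarrow\quad
x(t) \;=\; \frac{F_0}{k}\Bigl(1 - e^{-kt/\eta}\Bigr),
\]

where x(t) is the plunger displacement under a constant load F_0, k is the spring (elastic) constant and η the dashpot (viscous) constant. Less compliant, lipodermatosclerotic skin would then correspond to a larger k (a smaller displacement plateau F_0/k) and a different rate constant k/η.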
8

Backhans, Sandra, and Madelene Jansson. "Vedertagna metoder som främjar viktnedgång hosvuxna med fetma och övervikt : - En litteraturöversikt." Thesis, Högskolan Dalarna, Omvårdnad, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:du-25283.

Abstract:
Background: Overweight and obesity are an increasing problem in the world. Several factors contribute to the population becoming overweight, including increased access to unhealthy, high-fat food and a sedentary lifestyle. Overweight and obesity are also risk factors for several diseases, such as cardiovascular disease. Aim: To examine established methods that promote weight loss in overweight and obese adults. Method: The study was conducted as a literature review in which 15 articles were used to answer the purpose and research question of the study. The searches were made in the databases Cinahl and PubMed. Results: Weight loss is largely about changing lifestyle habits such as diet and physical activity. Other methods found to promote weight loss are stress management and technical tools that support the individual's self-efficacy. Self-monitoring, such as writing down calorie intake and minutes of physical activity every day, also helps individuals lose weight. Motivation and education about nutrition and exercise are factors that promote individuals' own ability to lose weight. Conclusion: There are several methods that can help individuals lose weight; changing diet and exercise habits is an important foundation. Even though overweight and obesity are a growing problem worldwide, there is still little research on what the nurse can do to help these individuals, and therefore more research is needed.
9

Wiedemann, Anna Maria Patricia. "The Integration of DevOps Teams in Established IT Organizations - Effective Methods and Empirical Insights" (supervisor: Helmut Krcmar; reviewers: Helmut Krcmar, Heiko Gewald, Nils Urbach). München: Universitätsbibliothek der TU München, 2020. http://d-nb.info/1226287484/34.

10

Kribel, Jacob Robert George. "Long Term Permanent Vegetation Plot Studies in the Matoaka Woods, Williamsburg, Virginia : Establishment and Initial Data Analysis of Plots Established with the North Carolina Vegetation Survey Protocol, Resampling of Single Circular Plots and a Comparison of Results from North Carolina Vegetation Survey Protocol and Single Circular Plot Methods." W&M ScholarWorks, 2003. https://scholarworks.wm.edu/etd/1539624378.

11

Hanif, R. "Microscale methods to establish scalable operations for protein impurity removal prior to packed bed steps." Thesis, University College London (University of London), 2013. http://discovery.ucl.ac.uk/1416834/.

Abstract:
The purification of monoclonal antibodies and Fab’ antibody fragments are of central importance to the pharmaceutical industry. In 2008, 29 new therapies based on such molecules were approved for the US market. Traditionally, a multistep process achieves purification with the majority of steps being packed bed chromatography. Chromatography is the major contributor to the unit operation costs in terms of initial capital expenditure for packing and recurrent replacement costs. When considering the demand for biopharmaceuticals, it becomes necessary to consider alternative process strategies to improve the economics of purification of such proteins. To address this issue, this thesis investigates precipitation to selectively isolate Fab’ or remove protein impurities to assist the initial purification process. The hypothesis tested was that the combination of two or more precipitating agents will alter the solubility profile of Fab’ or protein impurities through synergistic multimodal effects. This principle was investigated through combinations of polyethylene glycol (PEG) with ammonium sulphate, sodium citrate and sodium chloride at different ratios in a novel multimodal approach. A high throughput system utilising automated robotic handling was developed in microwells at 1 mL scale per well to enable the rapid screening of a large number of variables in parallel using a Design of Experiments (DoE) approach to statistically design studies in a two stage process, based on Quality by Design principles. In the first stage, Fab’ precipitation using PEG was investigated using a screening study in the form of a full two level factorial DoE to investigate a large design space. This was followed by a second more focused central composite face centred DoE to find optimal experimental conditions to deliver a high Fab’ yield and purification factor in the range investigated. A design space comprised of the responses percentage Fab’ yield and purification factor was created to give a robust region where Fab’ yield was ≥ 90% with a maximum purification factor of 1.7. A normal operating range (NOR) was defined within this design space for operational simplicity when working at process scale. A confirmatory run was performed within the NOR with PEG 12000 15% w/v pH 7.4, which delivered a Fab’ yield and purification factor of 93% and 1.5 respectively. In the second stage, optimum conditions from the first study were used in a central composite face centred DoE incorporating multimodal conditions combining PEG with three salts from the Hofmeister series namely, ammonium sulphate, sodium citrate and sodium chloride. It was found that 90% Fab’ yield with a purification factor of 1.9 was achievable with PEG 12000 15% w/v/0.30 M sodium citrate/0.15 M ammonium sulphate pH 7.4. This was an improvement of 26% relative to the use of 15% w/v PEG 12000 pH 7.4 in single mode. However an alternative precipitation strategy to precipitate ~20% of protein impurities whilst Fab’ remained soluble using PEG 12000 6.25% w/v/0.4 M sodium citrate pH 7.4 was proposed instead. The advantage of this approach at process scale is the potential ease of processing due to removal of a solubilisation step and the significantly reduced viscosity of the precipitating agent relative to that of high concentrations of PEG. It was shown that this system could mimic process scale, which was verified at laboratory scale (50 mL stirred tank reactor (STR)) and pilot process scale (5 L STR). 
A process run-through was performed using a 1 mL SP Sepharose HiTrap pre-packed column (GE Healthcare, Uppsala, Sweden) to capture Fab' from homogenate (control), multimodal (PEG 12000 6.25% w/v/0.4 M sodium citrate pH 7.4) and single mode (PEG 12000 15% w/v pH 7.4) feedstreams. The final process purification factors for the three feedstreams were 2.5, 4.4 and 3.5 respectively. Precipitating impurities with the multimodal system prior to the packed bed step thus improved process performance by a purification factor of 1.9. This underlines the importance of assessing the interaction of individual processing steps, and of implementing appropriate scale-down models as a means of understanding process parameter ranges, which has the further potential to improve the longevity of chromatography resins and reduce overall downstream purification cost.
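The two-stage screening-then-optimisation strategy described above rests on central composite face-centred designs. The sketch below generates the coded design points for such a design; the two factors and their ranges are placeholder assumptions, not the thesis's actual conditions:

```python
import itertools
import numpy as np

def face_centred_ccd(n_factors, n_centre=3):
    """Coded levels (-1, 0, +1) for a face-centred central composite design."""
    factorial = list(itertools.product([-1, 1], repeat=n_factors))  # corner points
    axial = []                                                      # face-centre points
    for i in range(n_factors):
        for level in (-1, 1):
            pt = [0] * n_factors
            pt[i] = level
            axial.append(tuple(pt))
    centre = [(0,) * n_factors] * n_centre                          # replicated centre points
    return np.array(factorial + axial + centre, dtype=float)

# Example: 2 factors, e.g. PEG concentration and salt molarity (hypothetical ranges)
design = face_centred_ccd(2)
peg = 10 + 5 * design[:, 0]      # maps -1..+1 to 5..15 %w/v (assumed range)
salt = 0.2 + 0.2 * design[:, 1]  # maps -1..+1 to 0.0..0.4 M (assumed range)
```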
12

Ishii, Makoto M. B. A. Massachusetts Institute of Technology. "A strategic method to establish sustainable platform businesses for next-generation home-network environments." Thesis, Massachusetts Institute of Technology, 2006. http://hdl.handle.net/1721.1/37216.

Abstract:
Thesis (M.B.A.)--Massachusetts Institute of Technology, Sloan School of Management, 2006. Includes bibliographical references (leaves 147-152).
The situation of the consumer electronics industry has become severe due to the rapid growth of digital hardware technology and sophisticated open source technology. Every product of this industry has become commoditized very rapidly due to the emergence of those technologies, and many firms have been suffering from very thin profitability. Under such severe circumstances, the firms in the high-tech industry that enjoy overwhelming market share, profitability, and sustainability are the firms doing "platform business," such as Intel and Microsoft, rather than those doing low-margin "product selling business." Looking at the great sustainability of those firms, many high-tech firms have aimed to be successful platform leaders, but to do so is not easy. In this paper, I define key success factors for consumer electronics firms to be profitable and sustainable platform leaders, focusing especially on the "home-network platform business," where many high-tech firms have tried to become the dominant design holder. I explore how to let a company's own technology and business model become a dominant design in the home-network business, how to establish a successful platform business with the dominant design, and how to maintain the sustainability and high profitability of the platform business as a platform leader. Concretely, based on the platform leadership levers defined by Cusumano and Gawer, I define the Enhanced Platform Leader Model (EPLM) as a newly redefined set of key success factors for being a successful platform leader, by analyzing past successful and unsuccessful platform business cases in new home-network businesses. In addition, through proposing an appropriate platform architecture and other key elements for being a sustainable platform leader, I propose a new business model for a high-potential next-generation home-network business that takes advantage of "intuitive operation" technology, along with appropriate strategies to make the business model successful, using EPLM. The views expressed in this paper are those of the authors and do not reflect those of Sony Corporation.
13

Shu, Jiaze, Defu Li, Haiping Ouyang, et al. "Comparison and evaluation of two different methods to establish the cigarette smoke exposure mouse model of COPD." Nature Publishing Group, 2017. http://hdl.handle.net/10150/626192.

Abstract:
The animal model of cigarette smoke (CS)-induced chronic obstructive pulmonary disease (COPD) is the primary testing methodology for drug therapies and for studies of the pathogenic mechanisms of the disease. However, researchers have rarely run simultaneous or side-by-side tests of whole-body and nose-only CS exposure in building their mouse models of COPD. We compared and evaluated these two methods of CS exposure, plus airway lipopolysaccharide (LPS) inhalation, in building our COPD mouse model. Compared with the control group, CS-exposed mice showed significantly increased inspiratory resistance, functional residual capacity, right ventricular hypertrophy index, and total cell count in BALF. Moreover, histological staining exhibited goblet cell hyperplasia, lung inflammation, thickening of the bronchial smooth muscle layer, and lung angiogenesis with both methods of CS exposure. Our data indicated that a viable mouse model of COPD can be established by combining the results from whole-body CS exposure, nose-only CS exposure, and airway LPS inhalation testing. However, we also found that, given the same particulate intake, changes in right ventricular pressure and intimal thickening of the small pulmonary arteries are somewhat more pronounced with the nose-only CS exposure method than with the whole-body CS exposure method.
14

Taylor, Mark Steven. "A novel method of nutritional assessment in school age children : its design, development and performance relative to a more established technique." Thesis, Teesside University, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.517480.

Abstract:
Background: Obesity and overweight are increasing problems for the health of Western populations, both in adults and children. Overweight or obesity in childhood is a predictor of obesity and various chronic diseases in adulthood. Constituents of the diet, in particular dietary energy intake, are at least part of the reason for this, so the measurement of childhood diet has considerable public health importance. However, the assessment of children's diets is usually costly and often inaccurate. Aims: This PhD involved the design, development and testing of a novel website method for assessing dietary energy and macronutrient intake in school-age children, with the aim of maximising participation and completion rates while keeping costs lower than those associated with paper-and-ink survey techniques, such as food diaries. The results from this new method were tested for their agreement with the results obtained from a 5-day food diary used in the same subjects. Methods: Participants were 164 children aged 9-10 years from a range of schools in the North East of England. The website's content was based on recent evidence of important sources of energy in this age group, and its design was informed by research into children's memory and reporting of foods and drinks. Colour photographs of a range of foods were provided on the website, selected from key age-specific references and refined during a pilot study, along with time-cued questions regarding the children's usual diet. A computer database automatically compiled and coded each child's responses in order to produce an estimate of mean daily energy and macronutrient intakes. The food diary was adapted from an existing tool which had been used previously in similar participants, and mean daily nutrient intake calculations were made using the same database as the website. The two methods were compared on their calculated mean daily energy and macronutrient intakes, and on their relative cost of use. Results: 154 children completed both the website and the diary in full. Significant correlations were found between the two methods' rankings of mean daily energy and macronutrient intakes (r = 0.193, p < 0.05, to r = 0.230, p < 0.01). However, this did not reach the levels of correlation identified as adequate to make the new tool useful in public health research, i.e. correlation above r = 0.5, more than 50% agreement on tertiles of intake, and weighted kappa greater than k = 0.4. The mean daily energy intakes reported to the website and the diary were 14.46 MJ and 6.04 MJ respectively. Bland-Altman (difference) plots and limits-of-agreement modelling revealed systematic under-reporting to the website at lower levels of intake of all nutrients, and over-reporting at high levels, when compared with the diary. Analysis of sources of energy showed closer correlation between the two techniques (r = 0.245 to 0.351, all p < 0.01), but again this fell short of the specified threshold. A simple economic analysis showed substantial cost savings of the website over the diary technique. Discussion: The analysis of the novel website method of dietary assessment showed some strengths, such as a high participation rate and low but significant correlation with the more established technique. However, important limitations, including a lack of agreement between the website and the diary and apparent over-reporting in the website method, mean this tool cannot be recommended for measuring energy and macronutrient intake in children in its current form. Possible extensions and modifications to the website are suggested, including some which would aim to address the over-reporting and hence improve its usefulness as a tool in public health research.
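Bland-Altman limits of agreement, used above to compare the website with the diary, follow directly from the paired differences. A minimal Python sketch with invented paired intakes, not the study data:

```python
import numpy as np

# Paired mean daily energy intakes (MJ) from two methods -- invented values.
website = np.array([12.1, 14.8, 9.5, 16.2, 13.0])
diary = np.array([6.2, 5.8, 5.1, 7.0, 6.3])

diff = website - diary
bias = diff.mean()               # mean difference (systematic bias)
half = 1.96 * diff.std(ddof=1)   # half-width of the 95% limits of agreement
print(f"bias = {bias:.2f} MJ, limits of agreement = "
      f"[{bias - half:.2f}, {bias + half:.2f}] MJ")
```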
15

Thaggard, Michael. "Use of a Monte Carlo algorithm to establish an improved method for estimating the average unavailability of nuclear power plant components." Master's thesis, Virginia Polytechnic Institute and State University, 1992. http://scholar.lib.vt.edu/theses/available/etd-12232009-020142/.

16

Karjalainen, M. (Mikael). "Survey of current high-throughput screening methods on living organisms and developing bioprocesses to establish a long term culture of nephron stem cells." Master's thesis, University of Oulu, 2015. http://urn.fi/URN:NBN:fi:oulu-201511042102.

Abstract:
The knowledge of kidney development is limited; for example, the formation of nephrons is not fully understood. Kidney research is ongoing constantly, and advances in different fields of technology have allowed a deeper understanding of this subject. Molecular biology is a good example of how the development of technology can change a whole branch of science. The study of regulatory networks in kidney development has helped kidney researchers to understand nephrogenesis better, although the cultivation of different kidney cells is challenging. High-throughput screening has become available to academic research; it is no longer available only to the pharmaceutical industry. This powerful tool has been harnessed for kidney research and its usefulness has been proven. High-throughput screening could be used to study how nephrons are formed as the kidney develops. High-throughput assays need cells in large quantities, but the study of nephrogenesis is impaired by poor cell culture systems which rely heavily on primary cells. Presumably, nephrons are formed from stem or progenitor cells, but these nephron stem cells cannot currently be maintained as long term cultures. The nephron stem cells can be isolated in many different ways for cultivation, but establishing a long term culture is not possible. The problem is how to cultivate nephron stem cells in large quantities for high-throughput screening assays. Protein coatings are a well-known technique in cell culture, and the aim here was to test protein coatings on renal cells. In addition, a survey of high-throughput screening was done, focused largely on high-throughput screening of living organisms. The kidney experiments are presented here as a case study. A clear trend was identified in high-throughput screening: the development of technology has led to a new way of doing it, in which it is now customary to measure multiple quantities at once. Today, technology allows multiple different measurements from one experiment, which increases the productivity of high-throughput screening tremendously. This "new high-throughput screening" is termed "high-content screening". There is also ongoing research in microfluidics, cell arrays and flow cytometry which aims to improve high-throughput and high-content screening assays. These technologies have not changed screening dramatically, but they can improve throughput. The protein coating experiments were done by cultivating and imaging dissociated mouse metanephric mesenchyme cells. The cells were cultivated in microtitre plates with different protein coatings in different wells. Imaging was done with an automated IncuCyte ZOOM microscope. This showed that only commercially available BME significantly promoted cell proliferation. The experiments did not reveal which cells were proliferating. For stem cell cultures, it was proposed that a long term culture of nephron stem cells could be established by cultivating the nephron stem cells as organoid niches, where the stem cells could live in a suitable environment and remain undifferentiated for a long time. Here, BME could be useful, because it promotes the proliferation of metanephric mesenchyme cells.
17

Evans, Amanda M. "A Pilot Study of Small-Scale Spatial Variability in Aldehyde Concentrations in Hillsborough County, Florida, to Establish and Evaluate Passive Sampling and Analysis Methods." Scholar Commons, 2010. https://scholarcommons.usf.edu/etd/1627.

Abstract:
Formaldehyde and acetaldehyde are listed by the United States Environmental Protection Agency (U.S. EPA) as urban air toxics. Health effects of significant exposure to these air toxics include an increased incidence of nasopharyngeal cancer and myeloid leukemia, and exacerbation of asthma. Determining the spatial variation of air toxics such as acetaldehyde and formaldehyde is important for improving health risk assessment and for evaluating the effectiveness of source control and reduction programs. Here, a pilot study was designed and performed to investigate small-scale spatial variability in concentrations of aldehydes using passive samplers. A literature review was first completed to select and evaluate current passive sampling and analysis methods. Radiello Aldehyde Samplers and high performance liquid chromatography (HPLC) were selected for sampling and analysis, respectively. An HPLC instrument was then set up for separation with an Allure AK (aldehyde-ketone) column and for detection of aldehyde derivatives via an ultraviolet-visible (UV-Vis) spectrometer at 365 nm. Samplers were deployed in an (approximately) 0.7 km resolution grid pattern for one week in January 2010. Collected samples and blanks were eluted with acetonitrile and analysed by HPLC. Aldehyde samples were quantified using calibration standards. Mean aldehyde concentrations were 3.1 and 1.2 µg/m³ for formaldehyde and acetaldehyde, respectively, and the mean acetaldehyde/formaldehyde concentration ratio was 0.4. The concentration ratios showed very little variation between sites, and the correlation of aldehyde concentrations by site was high (r = 0.7). Therefore, it is likely that both aldehydes have similar sources. Spatial variation of aldehyde concentrations was small within the sampling area, as displayed by low coefficients of variation (13% and 23% for formaldehyde and acetaldehyde, respectively) and small concentration differences between sites (an average for both aldehydes of less than 0.5 µg/m³). Thus, one sampler may be representative of this sampling area and possibly of other areas at the same spatial scale. Methods established during this pilot study will be used in a larger field campaign to characterize the spatial distribution of concentrations throughout the county, for analysis of environmental equity and health impacts.
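The headline statistics above (site-to-site coefficients of variation of 13% and 23%, and the acetaldehyde/formaldehyde ratio) are simple to reproduce. A minimal sketch with invented site concentrations, not the Hillsborough County data:

```python
import numpy as np

# Hypothetical per-site concentrations (ug/m^3) -- placeholder values.
formaldehyde = np.array([2.8, 3.0, 3.3, 3.1, 3.4])
acetaldehyde = np.array([1.0, 1.3, 1.5, 1.1, 1.2])

for name, c in [("formaldehyde", formaldehyde), ("acetaldehyde", acetaldehyde)]:
    cv = 100 * c.std(ddof=1) / c.mean()  # coefficient of variation, %
    print(f"{name}: mean = {c.mean():.1f} ug/m^3, CV = {cv:.0f}%")

ratio = acetaldehyde / formaldehyde      # per-site acetaldehyde/formaldehyde ratio
print(f"mean ratio = {ratio.mean():.1f}")
```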
18

Derrick, Jade. "The development of viral capture, concentration and molecular detection method for norovirus in foods to establish the risk to public health." Thesis, University of Liverpool, 2018. http://livrepository.liverpool.ac.uk/3022232/.

Abstract:
Norovirus has been identified as a common cause of gastroenteritis worldwide, and food as a transmission vehicle has been well documented. Standardised detection methods exist for norovirus in fresh produce and molluscan bivalves, whilst detection methods for the wider range of food matrices that may be implicated in norovirus transmission do not currently exist. The detection of norovirus in foods suspected to be implicated in transmission is paramount for appropriate outbreak investigation. The contamination of foods other than shellfish and fresh produce often occurs via food handlers. The proportion of norovirus that is typically transferred from food handlers to food also remains unknown. Understanding this is necessary in order to estimate the risk of infection and the burden of gastroenteritis caused by norovirus that is attributable to food contaminated by food handlers. These questions were addressed by the development of a combined capture, concentration and quantitative detection protocol with the aim of enhancing norovirus recovery from a range of food types. A food surface wash and norovirus capture method that was sensitive, reduced processing time and increased throughput capacity was applied to a range of ready-to-eat foods. An automated nucleic acid extraction method which further reduced processing time and increased throughput was validated. Finally, the validated method demonstrated that two real-time RT-PCR assays currently used for the detection of norovirus in shellfish and fresh produce or in faecal samples were comparable overall, and hence either could be used in combination with the norovirus capture, concentration and extraction protocol described in this thesis. The protocol was applied to a range of food matrices and resulted in < 1% to 55% recovery of norovirus GI and < 1% to 25% recovery of norovirus GII. The optimised protocol was then used to quantify virus transfer between food handlers' hands and food, in simulation experiments where food handlers' gloved hands were artificially contaminated prior to preparation of a sandwich. This enabled norovirus transfer to food items and to other food handlers to be measured at each stage. Quantitative data demonstrated that a norovirus GII inoculum of 5.9 (SD ± 0.1) log10 cDNA copies/µl resulted in a percentage recovery of between 3.0% and 0.02% from food handlers, and a norovirus GI inoculum of 7.8 (SD ± 0.1) log10 cDNA copies/µl resulted in a percentage recovery of between 9.6% and 0.004% from food handlers. The average percentage recovered from sandwich pieces over six replicates was 0.2% for norovirus GII and 1.2% for norovirus GI. The method and protocols developed could be rolled out to official control laboratories and aid foodborne outbreak investigation by allowing testing of food categories that are not currently investigated. Furthermore, this work demonstrated the extent of norovirus transfer from hands to food ingredients and the environment, and could be used in risk assessment models. Further work applying these protocols to quantify transfer from contaminated hands using a range of viral loads will be useful for determining risk more accurately, and for monitoring and investigating food premises by introducing this as an additional food and hand hygiene marker.
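As a worked arithmetic note (an editorial illustration, not a calculation from the thesis): percentage recoveries relate to the log10 quantities above via

\[
\log_{10}(\text{recovered}) \;=\; \log_{10}(\text{input}) + \log_{10}\!\left(\frac{\text{recovery \%}}{100}\right).
\]

Taking the GII figures, a 3.0% recovery from a 5.9 log10 cDNA copies/µl inoculum corresponds to roughly 5.9 + log10(0.030) ≈ 5.9 − 1.52 ≈ 4.4 log10 copies/µl recovered.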
19

Wang, Mingchuan. "A covariant 4D formalism to establish constitutive models : from thermodynamics to numerical applications." Thesis, Troyes, 2016. http://www.theses.fr/2016TROY0025/document.

Abstract:
The objective of this work is to establish mechanical constitutive models for materials undergoing large deformations. Instead of the classical 3D approaches, in which the notion of objectivity is ambiguous and different objective transports may be used arbitrarily, the four-dimensional formalism derived from the theories of Relativity is applied. Within a 4D formalism, the two aspects of the notion of objectivity, frame-indifference (or covariance) and invariance to the superposition of rigid body motions, can now be distinguished. Moreover, the use of this 4D formalism ensures the covariance of the models. For rate-form models, the Lie derivative is chosen as a total time derivative, which is both covariant and invariant to the superposition of rigid body motions. Within the 4D formalism, we also propose a framework using 4D thermodynamics to develop 4D constitutive models for hyperelasticity, anisotropic elasticity, hypoelasticity and elastoplasticity. Then, 3D models are derived from the 4D models and studied by applying them in numerical simulations with finite element methods using the software Zset.
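As a pointer to the kind of object involved (a standard differential-geometry identity, not a formula quoted from the thesis), the Lie derivative of a contravariant second-order tensor field T along a four-velocity field u reads

\[
\mathcal{L}_{u} T^{ab} \;=\; u^{c}\,\partial_{c}T^{ab} \;-\; T^{cb}\,\partial_{c}u^{a} \;-\; T^{ac}\,\partial_{c}u^{b},
\]

an expression that is independent of the connection and transforms covariantly, which is why it can serve as the objective total time derivative in rate-form constitutive models.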
20

Raymond, Bonita. "Seeking arm’s length: An evaluation of formulary apportionment and predetermined margins as alternative or supplementary methods to establish proxy arm’s length transfer prices for multinational intercompany transactions in South Africa." Master's thesis, Faculty of Commerce, 2019. http://hdl.handle.net/11427/30808.

Abstract:
Since the inception of democracy in South Africa and the subsequent lifting of sanctions and trade embargoes placed upon South Africa, the country's economy has evolved from a much protected, inward-looking economy into an internationally robust and competitive environment. Multinational enterprises (MNEs) which seek to invest in a geographical region often choose certain countries as a base from which they can expand their investments to the other countries in the region. With its sizable economy, political stability relative to the rest of Africa and overall strength in financial services, South Africa should be the ideal location from which foreign investors can extend their investments into the rest of Africa (Ogutta, 2011). However, foreign investment in South Africa has reduced to the extent that local companies are now more invested in international markets than international investors are in South Africa (Development, 2018). In monetary terms, at the end of 2017 South Africa had invested R3.3 trillion in foreign markets while foreign markets had invested only R1.8 trillion in South Africa (Development, 2018). With the current global economic challenges, developing countries like South Africa have become increasingly aware of the importance of tax revenue and of the effects of base erosion and profit shifting on the financial well-being of the state (OECD:G20 Working group, 2014); (Economic Commissions for Africa, 2018). Section 31 of the South African Income Tax Act is the main section in the Act relating to transfer pricing in South Africa. Transfer pricing is one of the most important issues in international tax. It is estimated that more than 60% of international trade happens across borders but within the same corporate groups (Cobnam & Mcnair, n.d.). The transfer pricing rules of South Africa are closely aligned with the wording of the Organisation for Economic Co-operation and Development (OECD) and the United Nations (UN) Model Tax Conventions and are in line with tax treaties and other international tax principles (SARS, 2010). The cornerstone of the transfer pricing model is the use of the arm's length price. In terms of the arm's length principle, in order to test the reasonability of pricing within MNEs, tax authorities should use a similar but unrelated open market transaction as the benchmark to determine whether there was any profit shifting between an MNE's establishments in different tax jurisdictions to avoid tax. The biggest challenge in South Africa and other countries when applying the arm's length principle is the lack of local comparable data available to evaluate transfer prices (intercompany transactions) within MNEs (OECD Transfer Pricing Guidelines, 2018). There is a lack of publicly available company financial data that may be used to calculate comparative benchmarks, and the information which is available is not necessarily sufficient or adequate for comparability purposes (Tax Justice Network, 2013). Information which is accessible may be incomplete and difficult to interpret. In other cases information may be difficult to obtain for reasons of its geographical location and, in some instances, it may simply not be possible to obtain information from independent enterprises due to enterprise competitiveness and confidentiality concerns (OECD Transfer Pricing Guidelines, 2018).
Despite all of these limiting factors, the arm's length principle, as recommended by the OECD and UN tax models, remains the globally accepted guiding principle for calculating acceptable transfer prices. This is evident in the fact that almost all bilateral treaties in the world are based on these tax models (Steenkamp, 2017). For the last decade in South Africa, corporate tax has been the third largest contributor to total revenue collection by National Treasury (National Treasury, 2017). It is therefore important that domestic tax laws should be able to protect the country's tax base through legislation that discourages base erosion and profit shifting. The objective of this dissertation is to consider whether South Africa should continue to apply the arm's length principle exclusively, relying on comparable data, when determining transfer prices for goods within MNEs. In testing this position, two alternative methods, namely formulary apportionment and predetermined margins, will be considered to evaluate whether these additional or complementary methods should be applied in determining arm's length prices for goods where comparable data is not available or requires significant adjustment.
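For concreteness, the equal-weighted three-factor ("Massachusetts") formula often cited in the formulary apportionment literature allocates group profit Π to jurisdiction i as

\[
\Pi_i \;=\; \Pi \cdot \frac{1}{3}\left(\frac{S_i}{S} + \frac{P_i}{P} + \frac{A_i}{A}\right),
\]

where S, P and A are the group's total sales, payroll and assets, the subscripted quantities are the amounts located in jurisdiction i, and each jurisdiction then taxes its share Π_i at its own rate. This is a textbook illustration of the general approach, not a formula taken from the dissertation.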
21

Bredenhann, Hester Maria (Esme). "A study to establish a simple, reliable and economical method of evaluating food and nutritional intake of male mineworkers residing in a single accommodation residence on a platinum mine in the North West Province." Thesis, Stellenbosch : Stellenbosch University, 2012. http://hdl.handle.net/10019.1/71887.

Abstract:
Thesis (MNutr)--Stellenbosch University, 2012.
Introduction: The study investigated the development of a simple, cost-effective method to monitor the food and nutritional intake of mineworkers residing in a Single Accommodation Village (SAV) by using food inventory data. Objectives: The aim of the study was to calculate average food and nutrient intake per mineworker using household data, assess actual food intake (individual data), determine food wastage, and compare food and nutritional intake between group and individual data. Methodology: The study design was a cross-sectional, observational study with an analytical component. The study population consisted of male mineworkers residing in a SAV on a platinum mine in the North West Province and included mineworkers performing mainly underground tasks. A census sampling method was used to select the mineworkers participating in the study, and a pilot study was done to test the proposed study process. The study was conducted over five days, which included one weekend day. Food inventory data were recorded by capturing all food quantities (weight measured in kilograms) used for food preparation on the study days. The yield of the prepared food and the expected meal participation were used to calculate an average intake per mineworker according to the household record method. An observational study was done to establish the food record data. Meal as well as food item participation was recorded. Food wastage was determined by weighing the production wastage as well as the plate wastage, and these data were used to ascertain average food intake per mineworker. Results: Approximately 700 mineworkers participated in the study. The study recorded 96% meal participation, measured against the planned participation figures, during the main meal, with 74% participating in all menu items. The values for breakfast and dinner were 95% meal participation for both meals, with 87% menu item participation during breakfast and 82% during dinner. Using the t-distribution test, only a limited number of values measured between the food inventory data and the food record data fell within the 95% confidence intervals, even after correction for food wastage. However, when the planned participation used to calculate the household data was incorporated into an equation using actual participation data, the values fell within the 95% confidence interval, demonstrating with 95% certainty that the planned values (when calculated according to the suggested equation) were within those values observed during the study. Conclusion: Household data can be used as a tool to monitor the average individual food and nutritional intake of mineworkers; however, both planned and actual menu item participation figures should be considered, together with the total wastage per food item. This tool can be adapted for use in industrial catering units to monitor food and nutritional intake, which will enable identification of food or nutrient deficiencies and timeous implementation of intervention strategies.
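The household record calculation at the core of this design can be written out. The abstract does not reproduce the thesis's participation-corrected equation, so the display below is the generic textbook form only: the mean intake per mineworker of a food item is

\[
\bar{x} \;=\; \frac{Q_{\text{prepared}} - W_{\text{production}} - W_{\text{plate}}}{n},
\]

where Q_prepared is the prepared quantity (kg), the W terms are production and plate wastage (kg), and n is the number of workers taking that item; the study's correction then amounts to replacing the planned n with the actually observed participation.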
22

von Wenckstern, Michael. "Web applications using the Google Web Toolkit." Master's thesis, Technische Universitaet Bergakademie Freiberg Universitaetsbibliothek "Georgius Agricola", 2013. http://nbn-resolving.de/urn:nbn:de:bsz:105-qucosa-115009.

Abstract:
This diploma thesis describes how to create desktop-like rich internet applications with the Google Web Toolkit, or convert traditional Java programs into them. The Google Web Toolkit is an open source development environment which translates Java code into browser- and device-independent HTML and JavaScript. Most parts of the GWT framework, including the Java-to-JavaScript compiler as well as important security issues of websites, are introduced. The famous Agricola board game is implemented in the Model-View-Presenter pattern to show that complex user interfaces can be created with the Google Web Toolkit. The Google Web Toolkit framework is then compared with the JavaServer Faces framework to find out which toolkit is the right one for the next web project.
APA, Harvard, Vancouver, ISO, and other styles
23

Yang, Ching-Wen, and 楊清文. "Analysis Technique Established for Power Quality by Optimal Searching Methods." Thesis, 2004. http://ndltd.ncl.edu.tw/handle/84960644758415463448.

Full text
Abstract:
Master's thesis, I-Shou University, Department of Electrical Engineering, academic year 92. A complete method to find the exact power quality parameters is proposed in this thesis; the parameters found include frequencies, amplitudes, and phases. The method is built on optimal searching methods: the common approaches, including the gradient method, the conjugate gradient method, and the second-order differentiation method, are all used. The optimal searching methods need initial values to start the iteration of the non-linear equations, so a general FFT interpolation method is also established to provide initial values. Because the estimates from the general FFT interpolation are quite close to the real values, the iteration converges quickly and avoids divergence. As optimization methods, the optimal searching methods are inherently robust against noise. Compared with existing analysis technologies for power quality, including FFT, the window method, FFT interpolation, the group power method, the zero-crossing method, and the quasi-Newton method, the accuracy, speed, and noise immunity of this method are demonstrated. The realization of this method in a practical measurement system is shown, and the results of its application are illustrated: the analysis results are very close to the readings of a real instrument, which confirms the method's practicability.
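A minimal sketch of the two-step idea in this abstract: an FFT peak supplies initial values, and a non-linear least-squares refinement (standing in here for the thesis's gradient, conjugate gradient, and second-order searches) recovers frequency, amplitude, and phase. The test signal and all values are invented.

```python
import numpy as np
from scipy.optimize import least_squares

fs, n = 1024.0, 1024
t = np.arange(n) / fs
y = 2.3 * np.cos(2 * np.pi * 60.4 * t + 0.7) + 0.05 * np.random.randn(n)

# Step 1: coarse estimates from the windowed FFT peak, playing the role of
# the "FFT interpolation": good starting values for the iteration.
w = np.hanning(n)
Y = np.fft.rfft(y * w)
k = int(np.argmax(np.abs(Y)))
f0 = k * fs / n
a0 = 2 * np.abs(Y[k]) / w.sum()
p0 = float(np.angle(Y[k]))

# Step 2: the optimal search sharpens the estimates on the non-linear model.
def residual(x):
    f, a, p = x
    return a * np.cos(2 * np.pi * f * t + p) - y

sol = least_squares(residual, x0=[f0, a0, p0])
print("frequency, amplitude, phase:", sol.x)
```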
APA, Harvard, Vancouver, ISO, and other styles
24

Wu, Tzu-I., and 吳姿頤. "Comparing the Superiority of Early Prediction Model for Treatment Response of Schizophrenia Established by Different Statistical Methods." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/59755023287532900780.

Full text
Abstract:
Master's thesis, Tamkang University, Department of Mathematics, academic year 95. The therapeutic period of schizophrenia needs to last at least 4 to 8 weeks. Because of the complexity of the disease and the diversity of patients' responses to medication, it is hard for doctors to evaluate the outcome of treatment from clinical symptoms and individual characteristics before the treatment. It is therefore extremely important to establish a reliable prediction model so that patients can avoid ineffective treatments. The evaluation of therapeutic effects in schizophrenia is based on the score variation of certain rating scales, for example the Brief Psychiatric Rating Scale and the Positive and Negative Syndrome Scale. In the present study, we compared two statistical methods for establishing the prediction model. The first method used multiple linear regression to establish the prediction model directly from the scale scores. The second method first determined effectiveness: if the scale score after treatment decreased by 20% or more compared with the baseline score, the treatment was considered effective; otherwise it was ineffective. We then used multiple logistic regression to establish the prediction model and applied the method proposed by Chang et al. [2006] to predict the effectiveness of treatment for patients. We compared the diagnostic accuracy of the two prediction methods under various circumstances. The results show that when multiple linear regression was used, the predicted scale scores tended to be underestimated, giving higher sensitivity but lower specificity. The predictions from logistic regression showed higher specificity but lower sensitivity than those from multiple linear regression. Comparing the areas under the ROC curves of the two methods, the area is larger when multiple logistic regression is used to establish the prediction model.
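A hedged re-creation of the comparison on synthetic data (the data generator, cut-offs, and variable names are invented; the thesis used real clinical covariates): a linear regression on the follow-up score, thresholded at a 20% reduction, versus a logistic regression on the binary response, with AUCs compared.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 400
baseline = rng.normal(80, 10, n)               # baseline scale score
early = baseline - rng.normal(8, 6, n)         # early-week score
follow = baseline - 0.9 * (baseline - early) - rng.normal(8, 8, n)
X = np.column_stack([baseline, early])
responder = (follow <= 0.8 * baseline).astype(int)   # >= 20% reduction

pred_score = LinearRegression().fit(X, follow).predict(X)
prob_resp = LogisticRegression(max_iter=1000).fit(X, responder).predict_proba(X)[:, 1]

# A larger predicted relative reduction means a more likely responder.
lin_score = (baseline - pred_score) / baseline
print("AUC, linear-regression route:  ", roc_auc_score(responder, lin_score))
print("AUC, logistic-regression route:", roc_auc_score(responder, prob_resp))
```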
APA, Harvard, Vancouver, ISO, and other styles
25

Hsiung, Chen-Wu, and 熊振武. "The Study of Established Criteria of Performance Evaluation for Military Auditing Department by Fuzzy Delphi and Fuzzy Analytic Hierarchy Process Methods." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/11021449912122050846.

Full text
Abstract:
Master's thesis, Chung Yuan Christian University, Graduate Institute of Business Administration, academic year 95. The strength of defence capability concerns national security and is closely tied to the size of the national defence budget. National resources are limited, however, so increasing the defence budget squeezes other budgets, which makes the auditing role of the army's analytic accounting units all the more important. These units are responsible for budget compilation and execution, internal auditing, accounting, statistics, and financial administration, and their work underpins the army's combat readiness, so the performance of every analytic accounting unit matters greatly. This research targets the performance assessment criteria of the current analytic accounting units. The Fuzzy Delphi method is applied to collect and consolidate expert opinions and to screen appropriate assessment indicators; Fuzzy AHP is then applied to construct the performance assessment structure and to calculate the weight of each assessment item. The results are compared with the current assessment system, and suggestions are proposed as a reference for policymakers when formulating, revising, and implementing the performance assessment of analytic accounting units. Keywords: Analytic Accounting Unit; Performance assessment; Fuzzy Delphi; Fuzzy AHP
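A minimal fuzzy-AHP weighting sketch in the spirit of this abstract, using triangular fuzzy numbers (l, m, u), the fuzzy geometric mean, and centroid defuzzification; the pairwise judgements for three hypothetical criteria are invented.

```python
import numpy as np

# Pairwise comparison matrix for 3 criteria; each entry is a TFN (l, m, u).
A = np.array([
    [[1, 1, 1],       [2, 3, 4],     [4, 5, 6]],
    [[1/4, 1/3, 1/2], [1, 1, 1],     [1, 2, 3]],
    [[1/6, 1/5, 1/4], [1/3, 1/2, 1], [1, 1, 1]],
])

g = np.prod(A, axis=1) ** (1 / A.shape[0])   # fuzzy geometric mean per row
total = g.sum(axis=0)                        # fuzzy column sums (l, m, u)
# Fuzzy weight w_i = g_i * total^(-1); the reciprocal TFN swaps l and u.
w = g * np.array([1 / total[2], 1 / total[1], 1 / total[0]])
crisp = w.mean(axis=1)                       # centroid defuzzification
print("criterion weights:", crisp / crisp.sum())
```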
APA, Harvard, Vancouver, ISO, and other styles
26

Zhan, Jun-Yan, and 詹俊彥. "Displacement Basis Vectors Established by Numerical Method." Thesis, 2005. http://ndltd.ncl.edu.tw/handle/55284552260554014803.

Full text
Abstract:
Master's thesis, National Taiwan University of Science and Technology, Department of Construction Engineering, academic year 93. The main purpose of this research is to establish displacement basis vectors for static analysis. A numerical procedure is suggested in this study. A set of displacement results is used as the original database, and the relative values between the displacement variables are searched by the suggested procedure; from these, a set of basis vectors can be found. Approximate results can then be obtained using these basis vectors. The calculated results were compared with the approximations obtained from eigenvalue analysis, and the comparisons show that the suggested method performs better than the eigenvalue analysis.
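The thesis derives its basis by a bespoke search over displacement data; a common alternative, assumed here purely for illustration, extracts basis vectors from displacement snapshots with the SVD and approximates a new response by projection:

```python
import numpy as np

rng = np.random.default_rng(1)
# Columns = displacement vectors from previous static analyses (synthetic,
# deliberately low-rank so that a small basis suffices).
snapshots = rng.normal(size=(100, 2)) @ rng.normal(size=(2, 30))

U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
k = int(np.searchsorted(energy, 0.999)) + 1
basis = U[:, :k]                              # displacement basis vectors

new_disp = snapshots @ rng.normal(size=30)    # a new displacement result
approx = basis @ (basis.T @ new_disp)         # projection onto the basis
print("basis size:", k, " relative error:",
      np.linalg.norm(new_disp - approx) / np.linalg.norm(new_disp))
```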
APA, Harvard, Vancouver, ISO, and other styles
27

CHENG, FU-HUI, and 程馥慧. "A Study on the Establish Methods, Establish Resources and Related Factors of e-Learning of University." Thesis, 2005. http://ndltd.ncl.edu.tw/handle/40144843296827293488.

Full text
Abstract:
Master's thesis, Shih Hsin University, Graduate Institute of Communications Management, academic year 93. Distance education delivered over the network can compensate for the shortcomings of traditional teaching, and it has even become a field with better functions and facilities of its own. Moreover, through e-Learning a school can not only reuse its existing educational resources but also enable more learners to share information; the e-Learning platform is the basis of all this. Every related factor has to be evaluated when a university wants to find the best way to build e-Learning. This study focuses on the work of establishing university e-Learning platforms. Its purpose is to find out the establishment methods, establishment resources, and related factors of e-Learning. Questionnaires were sent by e-mail to the chiefs or managers of e-Learning at each school. From the analysis of the responses, the situation of university e-Learning can be assessed, the factors that affect establishment methods and resources can be identified, and the resources used under different establishment methods can be compared. The final conclusions provide schools and companies with useful references when they set up an e-Learning platform.
APA, Harvard, Vancouver, ISO, and other styles
28

CHANG, PAO-HUI, and 張寶慧. "Using Systematic Decision-making Methods to Establish Bidding Decision System." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/zgmq65.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Nathoo, Kirti. "Establish a generic railway electronic interlocking solution using software engineering methods." Thesis, 2015. http://hdl.handle.net/10539/17639.

Full text
Abstract:
A research investigation has been undertaken to establish a generic software interlocking solution for electronic railway systems. The system is intended to be independent of the physical station layout and easily adaptable in any country of application. Railway signalling principles and regulated safety standards are incorporated into the system design. A literature review has been performed to investigate existing interlocking methods and to identify common aspects amongst these methods. Existing methods for the development of electronic interlocking systems are evaluated. The application of software engineering techniques to interlocking systems is also considered. Thereafter a model of the generic solution is provided. The solution is designed following an agile life cycle development process. The structure of the interlocking is based on an MVC (Model-View-Controller) architecture which provides a modular foundation upon which the system is developed. The interlocking system is modelled using Boolean interlocking functions and UML (Unified Modelling Language) statecharts. Statecharts are used to graphically represent the procedures of interlocking operations. The Boolean interlocking functions and statechart models collectively represent a proof of concept for a generic interlocking software solution. The theoretical system models are used to simulate the interlocking software in TIA (Totally Integrated Automation) Portal. The behaviour of the interlocking during element faults and safety–critical events is validated through graphical software simulations. Test cases are derived based on software engineering test techniques to validate the behaviour and completeness of the software. The software simulations indicate that the general algorithms defined for the system model can easily be determined for a specific station layout. The model is not dependent on the physical signalling elements. The generic algorithms defined for determining the availability of the signalling element types and the standard interlocking functions are easily adaptable to a physical layout. The generic solution encompasses interlocking principles and rail safety standards which enables the interlocking to respond in a fail-safe manner during hazardous events. The incorporation of formal software engineering methods assists in guaranteeing the safety of the system as safety components are built into the system at various stages. The use of development life cycle models and design patterns supports the development of a modular and flexible system architecture. This allows new additions or amendments to easily be incorporated into the system. The application of software engineering techniques assists in developing a generic maintainable interlocking solution for railways.
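A toy Boolean interlocking function in the spirit of this abstract: a route may be set only when its track sections are clear, its points lie in the required position, and no conflicting route is locked. The layout, names, and rules are invented for illustration and are far simpler than the thesis's model.

```python
from dataclasses import dataclass, field

@dataclass
class Route:
    sections: list                 # track sections the route occupies
    points: dict                   # required point positions, e.g. {"P1": "normal"}
    conflicts: set = field(default_factory=set)

def route_available(route, occupied, point_pos, locked_routes):
    clear = all(s not in occupied for s in route.sections)
    points_ok = all(point_pos.get(p) == pos for p, pos in route.points.items())
    no_conflict = locked_routes.isdisjoint(route.conflicts)
    return clear and points_ok and no_conflict   # fail-safe: defaults to False

r1 = Route(sections=["T1", "T2"], points={"P1": "normal"}, conflicts={"R2"})
print(route_available(r1, occupied={"T5"},
                      point_pos={"P1": "normal"}, locked_routes=set()))  # True
```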
APA, Harvard, Vancouver, ISO, and other styles
30

Pan, Bo-Jhong, and 潘柏仲. "Using Neural Network Methods to Establish a Flight Safety Prediction Model." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/34513287018013159949.

Full text
Abstract:
Master's thesis, Chung Yuan Christian University, Graduate Institute of Industrial and Systems Engineering, academic year 105. The airline industry works hard to prevent aviation accidents, which cost enormous social resources, and in recent years airlines all over the world have been actively improving their flight safety. Most flight safety events are related to human factors, so reducing the incidence of human error has become the most important issue in flight safety management. This study is based on the Department of Civil Aviation's flight safety inspection data. Previous studies built a flight safety performance prediction model with the Back-Propagation Neural Network (BPN) method. In this study, we revise the formula for quantifying human factors and then use the Radial Basis Function (RBF) neural network method to find the best parameter configuration for the prediction model. We re-establish the human-factor quantification formula, find the best parameters, and combine the model with sensitivity analysis to identify the factors that affect flight safety, as a reference for improving performance.
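A compact sketch of the RBF network named in this abstract: Gaussian units on centres sampled from the data, with the output layer solved by linear least squares. The inputs stand in for quantified human-factor scores and are synthetic.

```python
import numpy as np

def rbf_features(X, centres, sigma):
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2))

rng = np.random.default_rng(2)
X = rng.uniform(size=(200, 4))                      # synthetic indicators
y = X @ np.array([0.5, -0.3, 0.8, 0.1]) + 0.05 * rng.normal(size=200)

centres = X[rng.choice(len(X), 20, replace=False)]  # crude centre selection
Phi = rbf_features(X, centres, sigma=0.5)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)         # linear output layer
print("training RMSE:", np.sqrt(np.mean((Phi @ w - y) ** 2)))
```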
APA, Harvard, Vancouver, ISO, and other styles
31

Lee, Chih-Lin, and 李智霖. "Establish the financial crisis prediction model by comparison of Classification methods." Thesis, 2005. http://ndltd.ncl.edu.tw/handle/9d9at4.

Full text
Abstract:
Master's thesis, Ming Chuan University, Department of Information Management, academic year 93. Most early research on financial crisis prediction used statistical methods, and these studies show some problems, including too few factors and low predictive precision. This paper aims to remedy those defects by using more constructs and variables to improve the completeness and predictive precision of the prediction model. The training samples were collected from the first quarter of 2004 to the third quarter of 2004 and include 204 Taiwanese corporations: 102 in financial crisis and 102 normal. The testing samples were collected in the first quarter of 2004 and include 60 Taiwanese corporations: 30 in financial crisis and 30 normal. This research used several classification methods, including a statistical method (discriminant analysis) and directed data mining methods (decision tree, neural network), and compared the resulting classification models. The results are as follows. First, the prediction rate of the neural network is the best among all the classification models. Second, the decision tree CART analysis yields the lowest Type II classification error rate and produces rules that are easy to understand, but it has the lowest overall prediction rate. Third, discriminant analysis helps the government effectively identify a key list of financial crisis corporations; according to this list, the government can give guidance to these corporations.
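A hedged re-run of the three-way comparison on synthetic firm data (ratios, labels, and settings invented; scored in-sample purely for illustration): discriminant analysis, a CART-style tree, and a small neural network, reporting overall accuracy and the Type II rate (missing a crisis firm).

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score, confusion_matrix

rng = np.random.default_rng(3)
n = 204
X = rng.normal(size=(n, 6))                    # synthetic financial ratios
y = (X[:, 0] - 0.8 * X[:, 1] + rng.normal(size=n) > 0).astype(int)

models = {
    "discriminant": LinearDiscriminantAnalysis(),
    "CART tree":    DecisionTreeClassifier(max_depth=4, random_state=0),
    "neural net":   MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000,
                                  random_state=0),
}
for name, m in models.items():
    pred = m.fit(X, y).predict(X)
    tn, fp, fn, tp = confusion_matrix(y, pred).ravel()
    print(f"{name:12s} accuracy={accuracy_score(y, pred):.3f} "
          f"type-II rate={fn / (fn + tp):.3f}")
```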
APA, Harvard, Vancouver, ISO, and other styles
32

Pan, Chia-Ming, and 潘家銘. "To Establish Better Evaluating Method in Pedestrian Wind Environment." Thesis, 2004. http://ndltd.ncl.edu.tw/handle/56912861679713420257.

Full text
Abstract:
Master's thesis, Tamkang University, Department of Civil Engineering, academic year 92. The purpose of this thesis is to find a better method to evaluate the pedestrian-level wind environment from computational fluid dynamics simulations, wind tunnel tests, and field surveys. Previous studies of wind environment problems were done with wind tunnel tests or simple numerical models that considered only simple geometric shapes, such as cubes; both methods have limitations that are discussed in this thesis. On the other hand, buildings are not the only obstacles that influence the wind environment; plants do as well, and they should be taken into account to decrease the error of the numerical models. Adding plants to the numerical models makes the CFD calculation domain very complicated because of meshing problems. For this reason, the built-in "porous media condition" in FLUENT was used to add plants to the numerical model without increasing the mesh. The numerical and wind tunnel results agreed well when the parameters of the porous media condition were properly adjusted. By conducting several comfort surveys in the wind tunnel laboratory and in the field under different wind conditions, we established new comfort criteria for the pedestrian wind environment suitable for people in Taiwan. Because of the restrictions and shortcomings of wind tunnel facilities, the site data were taken as the judging foundation. Finally, the pedestrian wind environment can be evaluated by applying the modelling results, with the plant effect included, to the criteria through the same method.
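For reference, a porous-media condition adds a momentum sink to the cells occupied by vegetation; a commonly used isotropic form is S = -(mu/alpha)v - C2 (rho/2)|v|v, with permeability alpha and inertial coefficient C2 tuned until the simulated velocity deficit matches the wind tunnel data. The parameter values in this sketch are invented, not the thesis's calibration.

```python
rho, mu = 1.225, 1.8e-5        # air density (kg/m3) and viscosity (Pa s)
alpha, C2 = 1e-4, 2.5          # assumed porous-zone parameters

def momentum_sink(v):
    """Momentum sink per unit volume (N/m3) at velocity magnitude v (m/s)."""
    return (mu / alpha) * v + C2 * 0.5 * rho * v * v

# Equivalent pressure drop across a 2 m deep tree belt at 5 m/s:
print(momentum_sink(5.0) * 2.0, "Pa")
```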
APA, Harvard, Vancouver, ISO, and other styles
33

Li, Jiun-Yi, and 李俊儀. "Using Fuzzy Tree Method to Establish a Business Diagnosing Process." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/60846685892276947835.

Full text
Abstract:
碩士<br>國立成功大學<br>工業與資訊管理學系碩博士班<br>95<br>Due to external factors as well as influences from within a business, maintaining a competitive edge over time requires a great deal of focus on continual improvement. These improvements are best handled through the employment of a management consultant who can impartially evaluate the various elements that impact a given business. Moreover, in cases where a business has not properly taken care of its internal operations it is sometimes too late to hire a management consultant to establish reforms. Previous evaluation methods have not addressed the need for proper comparisons between the status of a company both before and after professional evaluations are made by a management consultant. Therefore, in this study, an emphasis is placed not only on taking into account information from each department within a company, but also on those important items essential to the evaluation process. In addition, the fuzzy tree method is employed in order to establish a new evaluation process that can be used during consultations by those in a decision-making capacity. An example is used to demonstrate the approach.
APA, Harvard, Vancouver, ISO, and other styles
34

Strasheim, Catharina. "Simultaneous normalisation as an approach to establish equivalence in cross-cultural marketing research." Thesis, 2008. http://hdl.handle.net/10539/5592.

Full text
Abstract:
Since bias threatens the validity of a study, it should be avoided where possible. Bias can be introduced in any phase of a research project, and in most situations the researcher has reasonable control over the processes that may be its source. However, within a quantitative research context in the social sciences, where the opinions, attitudes and intentions of people are often sought, response style patterns due, for example, to cultural background are not within the researcher's control. Typical response style patterns include acquiescence bias, a tendency to agree with statements, which may be more prevalent in certain cultural groups than in others. Another response style pattern is extremity rating, where respondents tend to avoid the middle categories and mark the scale extremes. When practitioners sample respondents from different cultural groups, it is difficult, and depending on the research design sometimes impossible, to know whether significant differences are an artefact of substantive differences or of differences in response styles. Adjusting scores for bias has a significant effect on the interpretation of research findings. To correct for bias, the method most commonly used to adjust scores within each cultural group is standardisation. In this research, SIMNORM, a target distribution estimation approach, was used for the simultaneous estimation of a class of non-linear transformation functions that transform the composite scores within each cultural group to a standard normal distribution. SIMNORM was found to perform better than standardisation in obtaining equivalence across cultural groups when composite scores are used. In addition, SIMITNORM, an item normalisation approach, was developed; it is a simultaneous non-linear transformation of item scores to a standard normal target distribution. The results of seven nested SIMITNORM models were compared to raw item scores and standardised scores using a multi-group confirmatory factor analysis approach, a method suitable for testing construct equivalence, metric equivalence and scalar equivalence. SIMITNORM had significant advantages over standardisation as an approach to obtaining equivalence over items in a set of data where bias is present.
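SIMNORM estimates parametric non-linear transforms simultaneously across groups; the sketch below shows only the underlying idea, mapping each group's composite scores onto a standard normal target with simple rank-based normal scores (not the thesis's estimator). The group data are synthetic caricatures of acquiescent and extreme response styles.

```python
import numpy as np
from scipy.stats import norm, rankdata

def to_standard_normal(scores):
    """Map scores to N(0,1) via normal (van der Waerden) scores."""
    return norm.ppf(rankdata(scores) / (len(scores) + 1))

rng = np.random.default_rng(4)
group_a = 4.2 + 0.5 * rng.normal(size=300)   # shifted-up, compressed scores
group_b = 3.0 + 1.2 * rng.normal(size=300)   # wider, extreme-leaning scores

za, zb = to_standard_normal(group_a), to_standard_normal(group_b)
print("means after transform:", za.mean().round(3), zb.mean().round(3))
print("sds after transform:  ", za.std().round(3), zb.std().round(3))
```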
APA, Harvard, Vancouver, ISO, and other styles
35

Chen, Kuei-Yuan, and 陳奎元. "Application of Space-Time Method Establish Dynamic Traffic Accidents Warning System." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/d98977.

Full text
Abstract:
Master's thesis, National Taiwan University, Graduate Institute of Geography, academic year 105. By 2020, casualties from traffic accidents are expected to be an even greater problem than tuberculosis, cancer, and respiratory diseases; traffic accidents are regarded as the world's biggest public hazard today. In the past, accident prevention tended to be passive, relying on propaganda, engineering, and stronger regulations, yet people still did not know where the dangerous places were. This study used spatio-temporal exploratory methods to confirm that car accidents are not uniformly distributed in time and space. We searched for accident hotspots with the space-time scan statistic because hotspots are not stationary: the time dimension deeply affects how accidents change, and different times yield different hotspots. The space-time scan reveals each hotspot's spatial extent, starting time, ending time, duration, and risk value, and the results confirm that accidents appear at different locations as time flows. Car accidents show significant temporal regularity by day of week and hour of day, so early warning cannot be static as in the past; it is a dynamically changing result that reflects both changing times and changing user positions. Compared with past practice, this new early-warning approach is different and creative, and the results are more meaningful. The accident alert system delivers information over the internet and provides a new channel for active alerting; its main function is dynamic early warning. Wireless internet and mobile devices are now mature and drive Location-Based Services (LBS). When the accident alert app runs, it immediately and actively delivers dynamic accident hotspots and real-time traffic data by text, map, and voice to remind users to avoid dangerous intersections and sudden road events, in the hope of moving toward a safer urban road environment.
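The space-time scan statistic evaluates cylinders (a circle in space extended over a time interval) with Kulldorff's Poisson likelihood ratio; the cylinder with the largest ratio is the most likely hotspot, and its significance is usually judged by Monte Carlo replication. The counts below are invented.

```python
import numpy as np

def poisson_llr(c, e, C):
    """Log likelihood ratio for c observed vs e expected accidents in a
    cylinder, with C accidents in total; zero unless the zone is elevated."""
    if c <= e:
        return 0.0
    return c * np.log(c / e) + (C - c) * np.log((C - c) / (C - e))

C = 500                                           # accidents in whole region
cylinders = [(60, 35.0), (20, 22.0), (90, 60.0)]  # (observed, expected)
scores = [poisson_llr(c, e, C) for c, e in cylinders]
best = int(np.argmax(scores))
print("most likely hotspot:", cylinders[best], "LLR:", round(scores[best], 2))
```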
APA, Harvard, Vancouver, ISO, and other styles
36

Hsu, Chih-Chen, and 許致誠. "The Home Networking Technology Research Analysis and Testing for Establish Method." Thesis, 2003. http://ndltd.ncl.edu.tw/handle/2a39vb.

Full text
Abstract:
Master's thesis, Chung Yuan Christian University, Graduate Institute of Electrical Engineering, academic year 91. To explain home networking technology completely, this thesis focuses on analyzing home network technologies and testing related products, and then, based on the features of the different technologies, maps out a scheme as a reference for constructing a home network. The thesis gives an overall analysis of backbone broadband and client-end interconnection across all kinds of home networking environments, presents the application fields and functional characteristics of the various home networking technologies, and discusses their advantages and disadvantages from different viewpoints. Although real environmental effects cause differences between theoretical and practical performance, the thesis looks into the factors behind these effects and studies detailed solutions for the deployment, configuration, and operation of several internet devices. Finally, the analysis is used to simulate home networking deployment in a local building and apartment; the simulation optimizes the deployment according to the characteristics and demands of various internet appliances.
APA, Harvard, Vancouver, ISO, and other styles
37

Hsu, Ya-Hui, and 許雅惠. "Application of Entropy Method and VIKOR to Establish Logistics Supplier Selection Model." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/vt34t7.

Full text
Abstract:
Master's thesis, Shu-Te University, Master's Program of Exhibition Management and Trade Marketing, academic year 107. "Logistics" is the facade of a company. In the era of e-commerce, logistics represents the contact between an enterprise's products and its customers. Enterprise logistics comprises "raw material logistics" (suppliers distributing raw materials or semi-finished products to the enterprise), "main logistics" (the enterprise distributing products to customers), and "reverse logistics" (handling products after customer use), which explains why product quality and service level are affected by logistics. This study constructs a linguistic assessment model that integrates "dual semantic (2-tuple linguistic) variables", VIKOR, and the entropy method to evaluate the performance of logistics providers for a food company in Kaohsiung, Taiwan. After considering assessment criteria such as enterprise scale, distribution speed, distribution flexibility, distribution equipment, distribution price, and delivery success rate, the model can select suitable logistics providers for individual companies.
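A compact sketch of the entropy-weight plus VIKOR pipeline described here; the 4x3 decision matrix (suppliers by benefit criteria) and the strategy weight v are invented.

```python
import numpy as np

X = np.array([[7., 9., 6.],    # rows: candidate logistics providers
              [8., 6., 7.],    # cols: e.g. speed, flexibility, success rate
              [6., 8., 9.],
              [9., 7., 5.]])

# Entropy weighting: criteria with more dispersion get larger weights.
P = X / X.sum(axis=0)
E = -(P * np.log(P)).sum(axis=0) / np.log(len(X))
w = (1 - E) / (1 - E).sum()

# VIKOR: group utility S, individual regret R, compromise index Q.
f_best, f_worst = X.max(axis=0), X.min(axis=0)   # benefit criteria
D = w * (f_best - X) / (f_best - f_worst)
S, R, v = D.sum(axis=1), D.max(axis=1), 0.5
Q = (v * (S - S.min()) / (S.max() - S.min())
     + (1 - v) * (R - R.min()) / (R.max() - R.min()))
print("ranking, best first:", np.argsort(Q))
```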
APA, Harvard, Vancouver, ISO, and other styles
38

Ching, Chang-Hsiao, and 張曉青. "Establish an Intelligent Piecewise Linear Representation method on stock trading points’ prediction." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/71363096976849382894.

Full text
Abstract:
Master's thesis, Yuan Ze University, Department of Industrial Engineering and Management, academic year 95. The stock market has recently become the main outlet for investment in Taiwan, with futures, investment funds, and foreign capital offering diverse choices for investors. Establishing an efficient decision support system for stock market prediction is difficult but necessary: with a good forecasting system, investors may earn more money from the stock market. This research combines Genetic Algorithms (GA), a Back-Propagation Neural Network (BPN), and Piecewise Linear Representation (PLR) into an Intelligent Piecewise Linear Representation (IPLR) system that serves as a stock trading decision support system. The system gives investors an investment reference indicator and decreases investment risk. The GA is applied to adjust the threshold value of the PLR, and the BPN is trained to learn the connection weights between the input and output variables; in this way, better trading opportunities (trading points) can be found. The research takes the Taiwan Stock Exchange as its object and computes the actual return on investment using the IPLR prediction model. The experimental results show that the IPLR model predicts trading points more precisely than experts' judgments based on technical indexes or other prediction models.
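A top-down piecewise linear representation sketch: split the series wherever the linear-fit error exceeds a threshold. In the IPLR system the GA tunes this threshold and the segment endpoints become candidate trading points fed to the BPN; the series and threshold here are synthetic.

```python
import numpy as np

def plr(prices, lo, hi, threshold, breaks):
    x = np.arange(lo, hi + 1)
    coef = np.polyfit(x, prices[lo:hi + 1], 1)          # linear fit of segment
    err = np.abs(np.polyval(coef, x) - prices[lo:hi + 1])
    if hi - lo > 2 and err.max() > threshold:
        split = int(np.clip(lo + err.argmax(), lo + 1, hi - 1))
        plr(prices, lo, split, threshold, breaks)
        plr(prices, split, hi, threshold, breaks)
    else:
        breaks.update((lo, hi))

rng = np.random.default_rng(5)
series = np.cumsum(rng.normal(size=200)) + 50           # synthetic prices
breaks = set()
plr(series, 0, len(series) - 1, threshold=2.0, breaks=breaks)
print("segment endpoints (candidate trading points):", sorted(breaks))
```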
APA, Harvard, Vancouver, ISO, and other styles
39

Wang, Mao-kung, and 王茂恭. "Application of 3D Digital Image Correlation Method to establish Forming Limit Diagram." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/77347476067845111541.

Full text
Abstract:
Master's thesis, National University of Kaohsiung, Department of Civil and Environmental Engineering, academic year 97. This study explored the application of the 3D digital image correlation (3D-DIC) method to establish the forming limit diagram (FLD). The FLD can be established by the ball punch deformation test, in which the grid method is usually used to measure strain; however, the precision of the grid method is not good enough and its process is complicated. The DIC method is a non-contact measurement technique that provides accurate full-field strain distributions with a simple procedure. In this study, a 3D-DIC method using only one camera is applied to the ball punch deformation test, and the advantages and disadvantages of this method and the grid method are compared. According to the experimental results, the 3D-DIC method successfully measures the major and minor strains in the ball punch deformation test, and it is easier and more accurate than the grid method. A numerical simulation was also carried out. The major and minor strains obtained from the simulation differ from those measured by the DIC and grid methods, but the tendency of the strain path is very similar to that from DIC, and this strain path cannot be measured by the grid method. As a result, the 3D-DIC method has great potential in the ball punch deformation test.
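Digital image correlation at its core tracks a speckle subset between images by maximising the zero-normalised cross-correlation (ZNCC); a real 3D-DIC system adds camera calibration and out-of-plane reconstruction on top of this matching step. The images below are synthetic with a known rigid shift.

```python
import numpy as np

def zncc(a, b):
    a, b = a - a.mean(), b - b.mean()
    return (a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum())

rng = np.random.default_rng(6)
ref = rng.random((60, 60))                         # reference speckle image
deformed = np.roll(ref, shift=(2, 3), axis=(0, 1)) # rigid 2 px / 3 px shift

subset = ref[20:36, 20:36]
best = max(((dy, dx) for dy in range(-5, 6) for dx in range(-5, 6)),
           key=lambda s: zncc(subset, deformed[20 + s[0]:36 + s[0],
                                               20 + s[1]:36 + s[1]]))
print("recovered displacement (dy, dx):", best)    # expect (2, 3)
```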
APA, Harvard, Vancouver, ISO, and other styles
40

Hong, Chia-Hung, and 洪嘉宏. "Establish Taguchi Methods and Response Surface Methodology Integration Application Model-The Offset Printing Industry as Example." Thesis, 2005. http://ndltd.ncl.edu.tw/handle/n2kaw7.

Full text
Abstract:
Master's thesis, Chung Yuan Christian University, Graduate Institute of Industrial Engineering, academic year 93. "Taguchi Methods (TM)" and "Response Surface Methodology (RSM)" are popular approaches to the design of experiments. Taguchi Methods search for the optimal parameters of a process or system, using orthogonal arrays to plan experiments for quality improvement and the signal-to-noise (S/N) ratio as the evaluation index. RSM is a helpful approach for building and analysing problems as mathematical models by merging mathematical methods, design of experiments, regression analysis, and related tools. Based on the research gap and the practical needs of improvement projects using TM or RSM, this research proposes an integration model to serve as a reference when planning and executing experiments. The model is named the "Preparation-Experiment-Completion (PEC) Application Model"; it involves the "Taguchi Methods and Response Surface Methodology Integration Application Model" and the "Taguchi-Response Surface Methods" operating collaboratively with the PEC Model. The PEC Model is verified through a quality improvement project in an offset printing factory.
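For reference, the three standard S/N ratios used in Taguchi analysis, computed for one orthogonal-array run with invented replicate measurements:

```python
import numpy as np

y = np.array([4.8, 5.1, 4.9, 5.0])   # replicate quality measurements

sn_larger  = -10 * np.log10(np.mean(1.0 / y**2))          # larger-the-better
sn_smaller = -10 * np.log10(np.mean(y**2))                # smaller-the-better
sn_nominal =  10 * np.log10(y.mean()**2 / y.var(ddof=1))  # nominal-the-best
print(sn_larger, sn_smaller, sn_nominal)
```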
APA, Harvard, Vancouver, ISO, and other styles
41

陳皖鐘. "Establish the Product Cost Model by Simulation and the Activity-Based Costing Method." Thesis, 1997. http://ndltd.ncl.edu.tw/handle/21285481962261459477.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Huang, Cheng-Kai, and 黃政凱. "Applying Fuzzy Evaluation method Establish the 3D Discrimination Matrix of Nation Infrastructure Competitive." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/43333600014319398569.

Full text
Abstract:
Master's thesis, National Chiao Tung University, Department of Civil Engineering, academic year 102. A country's economic growth rests on the competitiveness of its national infrastructure, and formulating a policy for strengthening infrastructure is complicated. Based on national performance on single indicators, and referring to the World Competitiveness Yearbook (WCY) published by the Institute for Management Development (IMD), this research applies cluster analysis and a fuzzy evaluation method to establish a 3D discrimination matrix of national infrastructure competitiveness. The matrix assigns indicators to investment strategies: absolute strength, relative strength, general observation, emphasis observation, general improvement, emphasis improvement, absolute weakness, and relative weakness. The infrastructure competitiveness of Taiwan is then compared with that of Singapore, Korea, and Mexico.
APA, Harvard, Vancouver, ISO, and other styles
43

Min, Namhong. "A method to establish non-informative prior probabilities for risk-based decision analysis." Thesis, 2008. http://hdl.handle.net/2152/24330.

Full text
Abstract:
In Bayesian decision analysis, uncertainty and risk are accounted for with probabilities for the possible states, or states of nature, that affect the outcome of a decision. Application of Bayes' theorem requires non-informative prior probabilities, which represent the probabilities of states of nature for a decision maker under complete ignorance. These prior probabilities are then subsequently updated with any and all available information in assessing probabilities for making decisions. The conventional approach for the non-informative probability distribution is based on Bernoulli's principle of insufficient reason. This principle assigns a uniform distribution to uncertain states when a decision maker has no information about the states of nature. The principle of insufficient reason has three difficulties: it may inadvertently provide a biased starting point for decision making, it does not provide a consistent set of probabilities, and it violates reasonable axioms of decision theory. The first objective of this study is to propose and describe a new method to establish non-informative prior probabilities for decision making under uncertainty. The proposed decision-based method focuses on decision outcomes, which include preferences among decision alternatives and decision consequences. The second objective is to evaluate the logic and rationality of the proposed decision-based method. The decision-based method overcomes the three weaknesses associated with the principle of insufficient reason and provides an unbiased starting point for decision making. It also produces consistent non-informative probabilities. Finally, the decision-based method satisfies axioms of decision theory that characterize the case of no information (or complete ignorance). The third and final objective is to demonstrate the application of the decision-based method to practical decision-making problems in engineering. Four major practical implications are illustrated and discussed with these examples. First, the method is practical because it is feasible in decisions with a large number of decision alternatives and states of nature, and it is applicable to both continuous and discrete random variables of finite and infinite ranges. Second, the method provides an objective way to establish non-informative prior probabilities that capture a highly nonlinear relationship between states of nature. Third, we can include any available information through Bayes' theorem by updating the non-informative probabilities without the need to assume more than is actually contained in the information. Lastly, two different decision-making problems with the same states of nature may have different non-informative probabilities.
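A small numerical illustration of why the starting distribution matters (the states, priors, and data are invented; the thesis derives its priors from decision alternatives and consequences rather than assuming them): the same binomial data updated from two different "ignorance" priors yield different posteriors.

```python
import numpy as np

states = np.array([0.2, 0.5, 0.8])         # three states of nature
uniform_prior = np.array([1/3, 1/3, 1/3])  # principle of insufficient reason
other_prior = np.array([0.5, 0.3, 0.2])    # an alternative starting point

k, n = 7, 10                               # observed successes in n trials
like = states**k * (1 - states)**(n - k)   # binomial likelihood (no constant)

for prior in (uniform_prior, other_prior):
    post = prior * like
    print("posterior:", (post / post.sum()).round(3))
```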
APA, Harvard, Vancouver, ISO, and other styles
44

KUO, CHIN-CHIA, and 郭晉嘉. "Application of empirical Bayesian method to establish clinical blood laboratory test control chart." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/sc8434.

Full text
Abstract:
Master's thesis, Ming Chuan University, Department of Applied Statistics and Data Science, academic year 107. The Levey-Jennings control chart has been used for quality control in clinical medical laboratories for decades. In routine blood testing, quality control reagents must be replaced monthly to avoid decay. However, a fault may occur when the concentration or dosage of the new reagent differs from the original because it comes from a different batch; if such deviations are noticed only when results fall outside the original control range, the control limits are not strict enough. In this study, blood quality control data for 2017 were collected from a medical examination laboratory of a hospital in Taoyuan. Quality control data for white blood cells (WBC), red blood cells (RBC), hemoglobin (Hb), and platelets (PLT), recorded by a laboratory blood analyzer, were organized and analyzed. Using the empirical Bayesian method, we estimated the variation in concentration between the previous and current batches to establish a novel control chart with adjusted upper and lower limits for the current batch, and compared the results with the traditional Shewhart method. The average run length (ARL) and sensitivity of the empirical Bayesian method were also explored. The study found that the ARL showed adequate capability for all four blood routine tests under the empirical Bayesian method. Compared with the Levey-Jennings control chart, the novel chart raises an alert earlier when a deviation occurs and raises false alerts later when there is no deviation. Parallel tests showed that the longer the chart runs, the better its capability becomes. We conclude that the empirical Bayesian method can be applied effectively and can improve the capability of quality control in blood examination.
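A minimal empirical-Bayes sketch of the batch adjustment described here, under an assumed normal-normal model with invented numbers: the new lot's mean is shrunk toward a prior estimated from historical lots, and control limits are centred on the shrunken value.

```python
import numpy as np

hist_lot_means = np.array([13.9, 14.2, 14.0, 14.3, 14.1])  # older QC lots
mu0, tau2 = hist_lot_means.mean(), hist_lot_means.var(ddof=1)

new_lot = np.array([14.6, 14.8, 14.5, 14.7])  # first runs on the new lot
sigma2, n = 0.04, len(new_lot)                # assumed within-lot variance

# Posterior (shrinkage) mean of the new lot's true level.
w = (n / sigma2) / (n / sigma2 + 1 / tau2)
mu_post = w * new_lot.mean() + (1 - w) * mu0

ucl, lcl = mu_post + 3 * np.sqrt(sigma2), mu_post - 3 * np.sqrt(sigma2)
print("centre:", round(mu_post, 3), "limits:", round(lcl, 3), round(ucl, 3))
```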
APA, Harvard, Vancouver, ISO, and other styles
45

Ye, Zhe-yu, and 葉哲宇. "Establish the anisotropy friction model of ring upsetting and verification by experimental method." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/62731845205398470006.

Full text
Abstract:
Master's thesis, National Central University, Graduate Institute of Mechanical Engineering, academic year 99. In the upsetting process, friction is one of the main causes of deformation. To understand the anisotropic deformation caused by friction, specimens and molds with different surface morphologies were prepared by turning, milling, and grinding. Ring upsetting experiments were conducted under lubricated conditions to study how the specimen and mold surfaces affect the anisotropic deformation, and an anisotropic friction model was established from the experiments. Combined with finite element analysis, the model was verified on a different geometry (a cylinder); the specimen surface profile and the bulged profile were compared, and the anisotropic deformation due to friction was predicted.
APA, Harvard, Vancouver, ISO, and other styles
46

Hsu, Wei-Chiang, and 徐偉強. "Establish a Method to Assess the Atherosclerosis Using Non-invasive Blood Pressure Monitor." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/56109014914518300458.

Full text
Abstract:
Master's thesis, Chung Yuan Christian University, Graduate Institute of Biomedical Engineering, academic year 104. Cardiovascular diseases are always among the top ten causes of death in our country, and their fatality rate is increasing year by year. Since many cardiovascular diseases result from arteriosclerosis, the severity of arteriosclerosis is widely used as a reference index for these diseases. The aim of this study is to evaluate the severity of arteriosclerosis with a non-invasive blood pressure monitor, providing an economical approach for prevention and for tracking the progress of cardiovascular diseases. The study evaluates arteriosclerosis via the area of the envelope of the blood pressure waveform. Against the reference of baPWV higher than 1400 cm/s or ABI smaller than 0.9, the detection achieved a hit rate of 86% and an overall accuracy of 90%. In conclusion, the area difference of the blood pressure wave used in this study can effectively detect patients at high risk of arteriosclerosis, and the evaluation can be realized in a non-invasive blood pressure monitor, providing a convenient and low-cost method for evaluating the severity of arteriosclerosis.
APA, Harvard, Vancouver, ISO, and other styles
47

Lai, Shan-Hu, and 賴珊湖. "Applications of random amplified polymorphic DNA method to establish DNA biomarkers in fish." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/56178217265209106058.

Full text
Abstract:
Doctoral dissertation, National Chung Hsing University, Department of Animal Science, academic year 100. In the present study, novel family- and genus-specific DNA markers were established in Mugilidae fish and the golden threadfin bream (Nemipterus virgatus). Genomic DNA was isolated from the blood of fish of 15 families, and eighty random primers were used for random amplified polymorphic DNA (RAPD) fingerprinting. When the primer OPAV04 was employed, a novel specific PCR product was observed in the Mugilidae family; another novel specific PCR product was observed in the Liza genus when primer OPAV10 was used. Sequencing analysis revealed that the novel family- and genus-specific DNA fragments were 856 bp and 418 bp, respectively, and no similar sequences were found in GenBank. Two primer sets were designed based on the family- and genus-specific sequences to confirm the RAPD results, and the predicted 569 bp and 186 bp bands were successfully amplified by PCR. Intriguingly, the Mugilidae family-specific DNA markers were also effective for discriminating between terrestrial and aquatic animals. A species-specific band amplified from Nemipterus virgatus using the OPAA11 random primer was selected for sequencing analysis, yielding a 535 bp sequence with no similarity to any other in the nucleotide database. A primer set was designed based on this Nemipterus virgatus-specific sequence to validate the RAPD-PCR results, and the predicted 372 bp amplicon was clearly observed by PCR. The novel family- and genus-specific DNA markers identified in this study can therefore be used as an effective tool for rapid and accurate fish identification, and even for cross-species identification.
APA, Harvard, Vancouver, ISO, and other styles
48

Chen, Ying-Jen, and 陳映任. "Using the dilute procedures to establish the method for measured COD concentration in water." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/87843410414172434871.

Full text
Abstract:
Master's thesis, National Central University, Graduate Institute of Environmental Engineering, academic year 100. To automate wastewater treatment and improve water quality and stability in the future, we must be able to collect the water quality and quantity characteristics of the wastewater in the treatment system immediately. This study focuses on the concentration of chemical oxygen demand (COD). For water quality, the effluent concentration of the wastewater treatment system must comply with the regulated emission standards; for process control, the operator must be able to monitor concentration information so as to control the treatment process appropriately. Measuring the COD concentration is therefore very important for the environment and for water treatment control. In this study, optical measurement techniques were used to establish the COD concentration measurement method, with a water sample pre-treatment program to reduce interference. A spectrophotometer scan yields characteristics of the wastewater, and a relationship between the absorbance of the organic matter and the COD concentration is built to quantify the COD concentration. Overall, of the three water sample pre-treatment processes, the dilution procedure is the best when analysis time and economic cost are considered. The validation results of this method for the COD concentrations of two PCB companies are as follows: for company T, the mean differences between the estimated COD concentration and the standard method for inflow and outflow were 15.91% and 7.00%; for company S, the mean differences for inflow and outflow were 29.48% and 22.77%. Both companies showed good validation results, and this COD measurement method will be built case by case for wastewater with different water quality characteristics. In the future, it will be combined with automatic monitoring technology to obtain water quality conditions immediately and to provide warning messages when the COD concentration is unusual.
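A linear calibration of COD against absorbance, with the dilution factor folded back in, mirrors the dilution-based procedure; the calibration points and sample reading below are invented.

```python
import numpy as np

absorbance = np.array([0.05, 0.11, 0.22, 0.41, 0.79])   # standards
cod_std = np.array([25., 50., 100., 200., 400.])        # mg/L

slope, intercept = np.polyfit(absorbance, cod_std, 1)

def cod_from_absorbance(a, dilution_factor=1.0):
    return (slope * a + intercept) * dilution_factor

# A sample diluted 1:10 that reads A = 0.30:
print(round(cod_from_absorbance(0.30, dilution_factor=10), 1), "mg/L")
```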
APA, Harvard, Vancouver, ISO, and other styles
49

Chen, Jiang-Chen, and 陳建辰. "Two Stage Method to Establish an Early Prediction Model for Antipsychotic Response in Schizophrenia." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/52186130088632432795.

Full text
Abstract:
Master's thesis, Tamkang University, Department of Mathematics, academic year 101. The therapeutic period of schizophrenia needs to last at least 6 weeks, so clinicians have pursued early prediction of the treatment response in schizophrenia for many years. In earlier research, the logistic regression models used to predict the curative effect took the CGI as an independent variable; however, the CGI, like the PANSS used to judge effectiveness, is a clinical rating form that assesses the curative effect, meaning both are outcome variables. Treating the CGI as an independent variable is therefore quite unreasonable in clinical practice. This thesis adopts a two-stage method to build the prediction model: in the first stage, the CGI score (or whether it improves) is predicted; the predicted result is then treated as an independent variable in the second stage to predict the PANSS total score (or whether the treatment is effective). To compare the prediction accuracy of the different methods described above, sensitivity, specificity, positive predicted value (PPV), negative predicted value (NPV), and prediction power (PP) are used as the indicators for judging the quality of the models.
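The diagnostic indices used to judge the models, computed from a 2x2 table of predicted versus actual response (counts invented; "prediction power" is read here as overall accuracy):

```python
tp, fp, fn, tn = 42, 8, 10, 40   # invented confusion-table counts

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
ppv = tp / (tp + fp)
npv = tn / (tn + fn)
pp = (tp + tn) / (tp + fp + fn + tn)   # prediction power as accuracy

print(f"Se={sensitivity:.2f} Sp={specificity:.2f} "
      f"PPV={ppv:.2f} NPV={npv:.2f} PP={pp:.2f}")
```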
APA, Harvard, Vancouver, ISO, and other styles
50

Wu, Yun-Chen, and 吳芸臻. "Establish of the urinary PAH metabolite analysis method for assessing general population PAH exposure." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/99539597239287788034.

Full text
Abstract:
Master's thesis, Kaohsiung Medical University, Graduate Institute of Occupational Safety and Health, academic year 97. Polycyclic aromatic hydrocarbons (PAHs) are a group of chemicals with two or more aromatic rings, widely distributed in air, food, soil, and many occupational environments. PAHs form during incomplete combustion, for example from smoking and cooking oil fumes. Some PAHs are considered carcinogens or suspected carcinogens, such as benzo[a]pyrene and chrysene. Because of their ubiquitous presence and health effects, PAHs draw public attention. Many studies have investigated external and internal PAH exposure in occupational environments, but few examine the PAH exposure of the general population. This study therefore establishes a urinary PAH metabolite analysis method for assessing the internal PAH exposure of the general population. Four hydroxylated PAHs, namely 1-naphthol, 2-naphthol, 9-phenanthrol, and 1-hydroxypyrene, representing the metabolites of naphthalene, phenanthrene, and pyrene, respectively, are analyzed by high-performance liquid chromatography with a fluorescence detector. The linearity (expressed as R), limit of detection (LOD), and reproducibility (expressed as COV) are 0.996, 46 ~ 348 ng/L, and 83.0% ~ 107.9%, respectively. Spot urine was collected from the test subjects; before analysis, 1.5 mL of thawed urine was pretreated, purified, and condensed, and the condensed extracts were quantitatively determined by the established method. The detection percentages of 1-naphthol, 2-naphthol, 9-phenanthrol, and 1-hydroxypyrene are 67.5%, 5%, 5%, and 90%, respectively. The method is good for detecting 1-naphthol and 1-hydroxypyrene; detection of 2-naphthol and 9-phenanthrol is only fair, and more advanced analysis techniques, such as high-resolution gas chromatography/mass spectrometry, are suggested to improve their detection percentages.
APA, Harvard, Vancouver, ISO, and other styles