
Dissertations / Theses on the topic 'Systematic Testing'


Consult the top 50 dissertations / theses for your research on the topic 'Systematic Testing.'


1

Fu, Xiaoying. "System modelling and systematic testing." Thesis, University of Southampton, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.295649.

2

Bruening, Derek L. (Derek Lane) 1976. "Systematic testing of multithreaded Java programs." Thesis, Massachusetts Institute of Technology, 1999. http://hdl.handle.net/1721.1/80050.

Abstract:
Thesis (M.Eng.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1999. Includes bibliographical references (p. 149-150).
3

Simsa, Jiri. "Systematic and Scalable Testing of Concurrent Programs." Research Showcase @ CMU, 2013. http://repository.cmu.edu/dissertations/285.

Abstract:
The challenge this thesis addresses is to speed up the development of concurrent programs by increasing the efficiency with which concurrent programs can be tested and consequently evolved. The goal of this thesis is to generate methods and tools that help software engineers increase confidence in the correct operation of their programs. To achieve this goal, this thesis advocates testing of concurrent software using a systematic approach capable of enumerating possible executions of a concurrent program. The practicality of the systematic testing approach is demonstrated by presenting a novel software infrastructure that repeatedly executes a program test, controlling the order in which concurrent events happen so that different behaviors can be explored across different test executions. By doing so, systematic testing circumvents the limitations of traditional ad-hoc testing, which relies on chance to discover concurrency errors. However, the idea of systematic testing alone does not quite solve the problem of concurrent software testing. The combinatorial nature of the number of ways in which concurrent events of a program can execute causes an explosion of the number of possible interleavings of these events, a problem referred to as state space explosion. To address the state space explosion problem, this thesis studies techniques for quantifying the extent of state space explosion and explores several directions for mitigating state space explosion: parallel state space exploration, restricted runtime scheduling, and abstraction reduction. In the course of its research exploration, this thesis pushes the practical limits of systematic testing by orders of magnitude, scaling systematic testing to real-world programs of unprecedented complexity.
4

Muneer, Imran. "Systematic Review on Automated Testing (Types, Effort and ROI)." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-105779.

Abstract:
Software organizations want to build software while minimizing resources, to reduce overall cost, and while maintaining high quality, to produce reliable software. Software testing helps achieve these goals. Software testing can be manual or automated. Manual testing is a very expensive activity: it takes much time to write test cases and run them one by one, and it can be error-prone due to the heavy human involvement throughout the process. Automated testing reduces testing time, which reduces overall software cost, and it provides other benefits such as earlier time to market and improved quality. Organizations are willing to invest in test automation, but before investing they want to know the expected cost and benefits of automated software testing (AST). Effort is the main factor that increases the cost of testing.

In this thesis, a systematic review has been conducted that identifies and summarizes the retrieved research concerning automated testing types, effort estimation, and return on investment (ROI) / cost-benefit analysis for automated testing. To conduct the systematic review, the author developed a comprehensive plan following the procedure presented in [15]. This plan provides guidance for identifying relevant research articles from a defined period. After the identification of research articles, all retrieved data about automated testing types, effort estimation, and ROI were collected, evaluated, and interpreted. The results are presented in statistical and descriptive form.

The statistical results are presented with tables and graphs showing different aspects of the data, such as gaps in research on automated testing and the number of articles for each testing type. The answers to the research questions are presented in descriptive form. The descriptive results show 22 automated testing types, 17 industrial case studies out of 60 studies, the benefits of automated testing, and effort estimation models. The discussion highlights some important facts about the retrieved data and provides practical implications for conducting systematic reviews. Finally, it is concluded that systematic reviews are a good means of finding and analyzing research data about a topic, phenomenon, or area of interest, and that they support researchers in conducting and investigating further research.
5

Thomson, Paul. "Practical systematic concurrency testing for concurrent and distributed software." Thesis, Imperial College London, 2016. http://hdl.handle.net/10044/1/55908.

Abstract:
Systematic concurrency testing (SCT) is a promising solution for finding and reproducing concurrency bugs. The program under test is repeatedly executed such that a particular schedule is explored on each execution. Numerous techniques have been proposed to make SCT scalable. Despite this, we have identified the following open problems: (1) there is a major lack of comparison and empirical evaluation of SCT techniques; (2) there is a need for better reduction techniques that go beyond the current theoretical limits; (3) the feasibility of applying SCT in practice is unclear, particularly for distributed systems. This thesis makes the following contributions to the field of SCT:

1. An independent, reproducible empirical study of existing SCT techniques over 49 buggy concurrent software benchmarks. Surprisingly, we found that the "naive" controlled random scheduler performs well, finding more bugs than preemption bounding. We report the results for all techniques and discuss the benchmarks and the challenges faced in applying SCT.
2. The lazy happens-before relation (lazy HBR), which provides reduction beyond partial-order reduction for programs that use mutexes. Our evaluation over 79 publicly available benchmarks shows both a large potential and a large practical improvement from exploiting the lazy HBR.
3. A description of how to create an SCT tool in practice, with a focus on subtle-yet-important details that are typically not discussed in prior work.
4. A case study applying SCT in the context of distributed systems written for Azure Service Fabric (Fabric). We introduce our Adara actors framework for writing portable, statically-typed actors. We describe our model of Fabric and evaluate it on a system containing 15 bugs, showing that our Fabric model includes enough behaviours/asynchrony to expose these subtle pitfalls.
6

Gilmore, John Y. "Testing for systematic ESG fund construction and independence measures." Thesis, Massachusetts Institute of Technology, 2018. http://hdl.handle.net/1721.1/118551.

Abstract:
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, System Design and Management Program, 2018. Cataloged from PDF version of thesis. Includes bibliographical references (pages 39-40).

There has been significant research concerning the investment case for Environmental, Social and Governance (ESG) funds; however, how these funds are constructed has been less studied. The purpose of this study is not to investigate the risk-return case for ESG funds. Instead, it focuses on the uniqueness of construction and the underlying assets of ESG-specific funds. The majority of ESG-classified investing is done through fund firms that voluntarily vet their existing funds against ESG guidelines; this is more a matter of declaration than a focused construction methodology. The hypothesis of this study is that funds created specifically for ESG investing are built on this same methodology and are adapted from an existing fund very similar to the S&P 500. To test for uniqueness, large-cap US equity ESG funds were compared against the S&P 500 in terms of how many of the underlying assets were shared. The signals showed heavy overlap. However, when looking at how the underlying assets are weighted in each fund versus the S&P 500, differences become more pronounced. Interestingly, in the aggregate, the portion of the ESG funds dedicated to stocks not included in the S&P 500 was not significant. Several funds are constructed with very different underlying assets than the S&P 500 Index, and others are very similar. This study then investigated how much the underlying assets of each fund differed from the S&P 500 by adjusting the weights of just the underlying assets shared with each fund, to measure the effect of dilution from the removed "non-ESG-compliant" stocks. The resulting increase in overlap was significant for several individual funds, but modest across all funds.

The study then sampled for overlap with other common index funds. Interestingly, there was often higher overlap with the S&P 500 than with a fund's stated benchmark, such as the Russell 1000 or Russell 1000 Value Index. Finally, the study looked for correlations between similarity and the 3-month, 1-year, 3-year, and Morningstar ESG peer performance percentiles. Modest correlations were found, slightly favoring funds more similar to the S&P 500. Correlations between each fund's management fee and similarity in underlying assets were also tested: there is evidence that the more unique the fund, the higher the management fee. However, there is no evidence of correlation between a fund's management fee and its Morningstar ESG score. The takeaway from this study is that some funds are very similar to index funds like the S&P 500, while other funds have very little in common with standard index funds. There was significant overlap in underlying assets with the S&P 500, but there were also significant differences in how those assets were weighted. There was not a one-to-one exchange of a non-ESG-compliant underlying asset with another asset of similar characteristics that was ESG-compliant.
7

Perumal, Kumaresen Pavithra. "Agent Based Systems in Software Testing – A Systematic Mapping Study." Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-48672.

8

Shabbir, Kashif, and Muhammad Amar. "Systematic Review on Testing Aspect-oriented Programs : Challenges, Techniques and Their Effectiveness." Thesis, Blekinge Tekniska Högskola, Avdelningen för programvarusystem, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-3109.

Abstract:
Aspect-oriented programming is a relatively new programming paradigm that builds on the object-oriented programming paradigm. It deals with concerns that cut across the modularity of traditional programming mechanisms, and it aims at reducing code and providing higher cohesion. As with any new technology, aspect-oriented programming provides some benefits, and there are also some costs associated with it. In this thesis we have done a systematic review on aspect-oriented software testing in the context of testing challenges. A detailed analysis has been made to show how effective the structural test techniques are at handling these challenges. Based on the research literature, we give an analysis of the effectiveness of aspect-oriented test techniques.
9

Mbah, Rowland. "Using reliability growth testing to reveal systematic faults in safety-instrumented systems." Thesis, Norges teknisk-naturvitenskapelige universitet, Institutt for produksjons- og kvalitetsteknikk, 2014. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-25525.

Abstract:
This master thesis studies the effects of systematic faults in the development phase of a safety-instrumented system, especially the relation between systematic faults and operational common-cause failures. Safety-instrumented systems are used widely in many industry sectors to detect the onset of hazardous events and mitigate the consequences to humans, the environment, and material assets. Systematic faults are non-physical faults introduced by design errors or mistakes. Unidentified systematic faults represent a serious problem, as their safety effects are unpredictable and, unlike random faults, are not normally susceptible to statistical analysis. In addition to safety effects, there can also be economic losses through product recalls, high warranty costs, customer dissatisfaction, and loss of market share. Reliability growth testing is the same as TAAF (test-analyze-and-fix) testing of a product early in the design and development phases of the product life cycle, when design changes can be made readily in response to observed failures. Applying reliability growth testing in the development phase of a safety-instrumented system avoids the disadvantages of testing in later phases, where it can be costly, highly inconvenient, and time-consuming. The main focus of the thesis is to study, evaluate, and discuss to what extent reliability growth testing of safety-instrumented systems is a suitable approach for identifying and avoiding systematic faults, and to develop guidelines for reliability growth testing to achieve this purpose. The thesis builds on concepts, methods, and definitions adopted from two major standards for safety-instrumented applications, IEC 61508 and IEC 61511, and from IEC 61014: Programmes for reliability growth.

The procedures for identifying and correcting systematic faults by reliability growth testing are inspired by these three standards and by other relevant literature found during the course of the master thesis project. The main contributions of this thesis are:

1. Illustrative examples of fire and gas detection and mitigation systems, a car airbag, and a mobile phone, used to develop procedures for how reliability growth testing can identify and correct systematic faults.
2. A detailed discussion of systematic faults, common-cause failures, and the relationship between them. It has been established that systematic faults give rise to common-cause failures, which dominate the reliability of safety-instrumented systems.
3. A detailed discussion of reliability growth testing, its models and methods, and the strengths and weaknesses of those models and methods. Both continuous and discrete models are studied. The Duane model, an example of a continuous model, is commonly used because of its simplicity and graphical presentation.
4. A discussion of the challenges and pitfalls of reliability growth testing in relation to systematic faults. The major challenge is the introduction of new failure modes, especially in the case of software testing.
5. Measures to handle systematic faults revealed during the test, including the use of diverse and redundant channels, design reviews, simple designs, competent designers, training and re-training of designers, and reliability analysis to identify the causes of faults.
10

Wanner, Svenja. "Systematic approach on conducting fatigue testing of unidirectional continuous carbon fibre composites." Thesis, KTH, Lättkonstruktioner, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-261694.

Abstract:
High fuel-saving potential, increased load-carrying capacity, and the resulting competitive advantages force the heavy goods vehicle industry to increase its efforts towards comprehensive lightweight designs. Facing this challenge, material evaluation in terms of simulation and physical testing of composite materials is required for design against fatigue failure due to road-induced vibrations. To eliminate fatigue-testing issues and obtain acceptable and reproducible results, a future-oriented systematic approach to conducting constant-amplitude tension-tension fatigue testing on a unidirectional composite material is presented. Following the material characterisation of the carbon/epoxy material in terms of tensile and shear properties as well as fibre volume fraction, several combinations of tab configurations and specimen geometries were tested with regard to their suitability for fatigue testing. Finally, the unidirectional material was successfully tested under tension-tension fatigue, and the first elaborated test data were assessed. In conclusion, the recommended test procedure uses straight aluminium tabs, completely clamped inside the grips and bonded to the straight-sided specimen with 3M DP420 adhesive, with ventilation during the test to avoid a temperature increase in the specimen.
11

He, Rui. "Systematic Tire Testing and Model Parameterization for Tire Traction on Soft Soil." Diss., Virginia Tech, 2020. http://hdl.handle.net/10919/104386.

Abstract:
Tire performance over soft soil influences the performance of off-road vehicles, as the tire is the only force-transmitting element between an off-road vehicle and the soil during operation. One aspect of tire performance over soft soil is tractive performance, which attracts the attention of both vehicle and geotechnical engineers. The vehicle engineer is interested in tire tractive performance on soft soil because it is related to vehicle mobility and energy efficiency; the geotechnical engineer is concerned about the soil compaction brought about by tire traffic, which accompanies the tire's tractive performance. In order to improve vehicle mobility and energy efficiency over soft soil and mitigate soil compaction, it is essential to develop an in-depth understanding of tire tractive performance on soft soil. This study has enhanced that understanding and promoted the development of terramechanics and tire model parameterization methods through experimental tests. The experimental tests consisted of static tire deflection tests, static tire-soil tests, soil properties tests, and dynamic tire-soil tests. The series of tests (test program) presented herein produced parameterization and validation data that can be used in tire off-road traction dynamics modeling and terramechanics modeling. The 225/60R16 97S Uniroyal (Michelin) Standard Reference Test Tire (SRTT) and loamy sand were chosen for the test program. The tests included the quantification and/or measurement of the soil properties of the test soil, the pre-traffic soil condition, the pressure distribution in the tire contact patch, tire off-road tractive performance, and post-traffic soil compaction.

The influence of operational parameters, e.g., tire inflation pressure, tire normal load, tire slip ratio, initial soil compaction, and number of passes, on the measured tire performance parameters and soil response parameters was also analyzed. New methods for estimating the rolling radius of a tire on soft soil and for 3-D rut reconstruction were developed. A multi-pass effect phenomenon, different from any previously observed in the existing literature, was discovered. The test data were fed into optimization programs for the parameterization of Bekker's model, a modified Bekker's model, the Magic Formula tire model, and a bulk density estimation model. The modified Bekker's model accounts for the slip-sinkage effect, which the original Bekker pressure-sinkage model does not. The Magic Formula tire model was adapted to account for the combined influence of tire inflation pressure and initial soil compaction on tractive performance and was validated against the test data. The parameterization methods presented herein are new, effective terramechanics model parameterization methods; they can capture tire-soil interaction that conventional parameterization methods, such as the plate-sinkage test and shear test (not using a tire as the shear tool), cannot capture sufficiently, and hence can be used to develop tire off-road dynamics models that are heavily based on terramechanics models. This study has been partially supported by the U.S. Army Engineer Research and Development Center (ERDC) and by the Terramechanics, Multibody, and Vehicle (TMVS) Laboratory at Virginia Tech.

General audience abstract: Big differences exist between a tire moving in on-road conditions, such as asphalt lanes, and a tire moving in off-road conditions, such as soft soil. For example, for passenger cars commonly driven on asphalt lanes, the tire inflation pressure is normally suggested to be between 30 and 35 psi, and very low inflation pressure is not advised. By contrast, for off-road vehicles operated on soft soil, low inflation pressure is recommended; the inflation pressure of a tractor tire can be as low as 12 psi, for the sake of low post-traffic soil compaction and better traction. Besides, unlike research on tire on-road dynamics, research on off-road dynamics is still immature, while the physics behind off-road dynamics can be more complex than on-road dynamics. In this dissertation, experimental tests were completed to study the factors influencing tire tractive performance and soil behavior, and model parameterization methods were developed for better prediction by tire off-road dynamics models. Tire or vehicle manufacturers can use the results or methods presented in this dissertation to offer suggestions for tire or vehicle operation on soft soil in order to maximize tractive performance and minimize post-traffic soil compaction.
12

Grottke, Michael [Verfasser]. "Modeling Software Failures during Systematic Testing : The Influence of Environmental Factors / Michael Grottke." Aachen : Shaker, 2003. http://d-nb.info/1170541119/34.

13

Choi, Ka-man, and 蔡嘉敏. "Cost-effectiveness of primary HPV testing for cervical cancer screening : a systematic review." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2013. http://hdl.handle.net/10722/193758.

Abstract:
Background: The human papillomavirus (HPV) DNA test is more sensitive and can detect more high-grade cervical intraepithelial lesions than the cytology test in cervical cancer screening. Studies have confirmed that the HPV test is more effective in cervical cancer screening because it detects the persistent HPV infection that can lead to cancer. However, the cost of an HPV test is higher than that of a cytology test. Moreover, the HPV test is less specific, which could subject more women to further triage tests or unnecessary invasive diagnostic procedures. Therefore, healthcare costs could increase if primary HPV screening were adopted. Study objective: The aim of the study is to systematically review the cost-effectiveness of primary HPV testing in cervical cancer screening. Method: An electronic search was performed in three biomedical databases (PubMed, Medline, Cochrane Library) and one economic evaluation database to identify relevant studies. Studies were selected according to explicitly defined inclusion and exclusion criteria. Only studies carried out in high-income countries were included, so that the results could be better applied to Hong Kong. Results: A total of 19 studies were included in this systematic review. The cytology-only method is generally not cost-effective; to become cost-effective, it has to be performed at a longer screening interval, which reduces not only the screening costs but also the health outcomes. Among the different options for HPV-based primary screening, HPV testing with cytology triage is the most cost-effective strategy in many of the studies. Combined HPV/cytology co-screening could achieve the biggest health benefit but is also the most costly. HPV-based screening is more cost-effective for women over 30 years of age and is usually less cost-effective when applied to young women. In the sensitivity analyses, HPV-based screening is sensitive to an increase in the cost of the HPV test, a low HPV test sensitivity, and a low screening compliance rate. Conclusion: Primary HPV screening is cost-effective and generally performs better than cytology screening. The result of this systematic review guides the future direction of developing an optimal cervical screening strategy in Hong Kong. The local context has to be considered when examining the cost-effectiveness of primary HPV testing for cervical screening. Good-quality local epidemiological data on HPV infection, cervical cancer, and screening will be required to aid future research on the application of the HPV test for cervical cancer screening in Hong Kong.
14

Mamun, Md Abdullah Al, and Aklima Khanam. "Concurrent Software Testing : A Systematic Review and an Evaluation of Static Analysis Tools." Thesis, Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-4310.

Abstract:
Verification and validation is one of the most important concerns in software engineering for more reliable software development; hence it is important to overcome the challenges of testing concurrent programs. The extensive use of concurrent systems warrants more attention to concurrent software testing, and the development of automatic tools for testing concurrent software is receiving increased focus. The first part of this study presents a systematic review that aims to explore the state of the art of concurrent software testing. The systematic review reports on several issues: concurrent software characteristics, bugs, testing techniques and tools, test case generation techniques and tools, and benchmarks developed for the tools. The second part presents an evaluation of four commercial and open-source static analysis tools that detect Java multithreaded bugs. An empirical evaluation of the tools helps industry as well as academia learn more about the effectiveness of static analysis tools for concurrency bugs.
15

Khan, M. Shahan Ali, and Ahmad ElMadi. "Data Warehouse Testing : An Exploratory Study." Thesis, Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-4767.

Abstract:
Context. The use of data warehouses, a specialized class of information systems, by organizations all over the globe, has recently experienced dramatic increase. A Data Warehouse (DW) serves organiza-tions for various important purposes such as reporting uses, strategic decision making purposes, etc. Maintaining the quality of such systems is a difficult task as DWs are much more complex than ordi-nary operational software applications. Therefore, conventional methods of software testing cannot be applied on DW systems. Objectives. The objectives of this thesis study was to investigate the current state of the art in DW testing, to explore various DW testing tools and techniques and the challenges in DW testing and, to identify the improvement opportunities for DW testing process. Methods. This study consists of an exploratory and a confirmatory part. In the exploratory part, a Systematic Literature Review (SLR) followed by Snowball Sampling Technique (SST), a case study at a Swedish government organization and interviews were conducted. For the SLR, a number of article sources were used, including Compendex, Inspec, IEEE Explore, ACM Digital Library, Springer Link, Science Direct, Scopus etc. References in selected studies and citation databases were used for performing backward and forward SST, respectively. 44 primary studies were identified as a result of the SLR and SST. For the case study, interviews with 6 practitioners were conducted. Case study was followed by conducting 9 additional interviews, with practitioners from different organizations in Sweden and from other countries. Exploratory phase was followed by confirmatory phase, where the challenges, identified during the exploratory phase, were validated by conducting 3 more interviews with industry practitioners. Results. In this study we identified various challenges that are faced by the industry practitioners as well as various tools and testing techniques that are used for testing the DW systems. 
In total, 47 challenges and a number of testing tools and techniques were identified in the study. The challenges were classified, and improvement suggestions were made to address them in order to reduce their impact. Only 8 of the challenges were found to be common to the industry and the literature studies. Conclusions. Most of the identified challenges were related to test data creation and to the need for tools for various purposes of DW testing. The rising trend of DW systems requires a standardized testing approach and tools that can help to save time by automating the testing process. While tools for operational software testing are available commercially as well as from the open source community, there is a lack of such tools for DW testing. It was also found that a number of challenges relate to management activities, such as lack of communication and difficulties in estimating the DW testing budget. We also identified a need for a comprehensive framework for testing data warehouse systems and tools that can help to automate the testing tasks. Moreover, it was found that the impact of management factors on the quality of DW systems should be measured.
APA, Harvard, Vancouver, ISO, and other styles
16

Abdeen, Waleed, and Xingru Chen. "Model-Based Testing for Performance Requirements : A Systematic Mapping Study and A Sample Study." Thesis, Blekinge Tekniska Högskola, Institutionen för programvaruteknik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-18778.

Full text
Abstract:
Model-Based Testing (MBT) is a method that supports automated test design by using a model. Although it is adopted in industry, it is still an open area within performance requirements. We aim to look into MBT for performance requirements and find a framework that can model such requirements. We conducted a systematic mapping study, followed by a sample study on software requirements specifications; we then introduced the Performance Requirements Verification and Validation (PRVV) model and, finally, completed another sample study to see how the model works in practice. We found that many models can be used for performance requirements, although their maturity is not yet sufficient. MBT can be implemented in the context of performance, and it has been gaining momentum in recent years compared to earlier. The PRVV model we developed can verify performance requirements and help to generate test cases.
APA, Harvard, Vancouver, ISO, and other styles
17

Kurmaku, Ted, and Musa Kumrija. "A SYSTEMATIC LITERATURE REVIEW AND META-ANALYSIS COMPARING AUTOMATED TEST GENERATION AND MANUAL TESTING." Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-48815.

Full text
Abstract:
Software testing is among the most critical parts of the software development process. The creation of tests plays a substantial role in the evaluation of software quality, yet it is one of the most expensive tasks in software development. This process typically involves intensive manual effort and is one of the most labor-intensive steps during software testing. To reduce manual effort, automated test generation has been proposed as a method of creating tests more efficiently. In recent decades, several approaches and tools have been proposed in the scientific literature to automate test generation. Yet how these automated approaches and tools compare to or complement manually written tests is still an open research question, which has been tackled by some software researchers in different experiments. In light of the potential benefits of automated test generation in practice, its long history, and the apparent lack of summative evidence supporting its use, the present study aimed to systematically review the current body of peer-reviewed publications comparing automated test generation and manual test design. We conducted a systematic literature review and meta-analysis, collecting data from studies that compare manually written tests with automatically generated ones in terms of reported test efficiency and effectiveness metrics. We used a set of primary studies to collect the necessary evidence for analyzing the gathered experimental data. The overall results of the literature review suggest that automated test generation outperforms manual testing in terms of testing time, test coverage, and the number of tests generated and executed. Nevertheless, manually written tests achieve a higher mutation score and prove to be highly effective in terms of fault detection.
Moreover, manual tests are more readable than automatically generated tests and, being created by human subjects, can detect more special test scenarios. Our results suggest that just a few studies report the specific statistics (e.g., effect sizes) needed for a proper meta-analysis. The results of this subset of studies suggest rather different conclusions than the ones obtained from our literature review, with manual tests being better in terms of mutation score, branch coverage, and the number of tests executed. The results of this meta-analysis are inconclusive due to the lack of sufficient statistical data and power for this comparison. More primary studies are needed to bring more evidence on the advantages and disadvantages of using automated test generation over manual testing.
APA, Harvard, Vancouver, ISO, and other styles
18

Nosek, Jakub. "Testování metody Precise Point Positioning." Master's thesis, Vysoké učení technické v Brně. Fakulta stavební, 2020. http://www.nusl.cz/ntk/nusl-414313.

Full text
Abstract:
This diploma thesis deals with the Precise Point Positioning (PPP) method in its various variants. The thesis describes the theoretical foundations of the PPP method and the most important systematic errors that affect its accuracy. The accuracy of the PPP method was evaluated using data from the permanent GNSS station CADM, which is part of the AdMaS research center. Data from the period 2018-2019 were processed. The results of combinations of different GNSS and the results of different observation periods were compared. Finally, the accuracy was verified at 299 IGS GNSS stations.
APA, Harvard, Vancouver, ISO, and other styles
19

Singh, Inderjeet. "A Mapping Study of Automation Support Tools for Unit Testing." Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-15192.

Full text
Abstract:
Unit testing is defined as a test activity usually performed by a developer for the purpose of demonstrating program functionality and meeting the requirements specification of a module. Nowadays, unit testing is considered an integral part of the software development cycle. However, performing unit testing remains a major concern for developers because of the time and cost involved. Automation support for unit testing, in the form of various automation tools, could significantly lower the cost of the unit testing phase as well as decrease the time developers spend on actual testing. The problem is how to choose the most appropriate tool to suit the developer's requirements in terms of cost, effort needed, level of automation provided, language support, etc. This research work presents results from a systematic literature review with the aim of finding all unit testing tools with automation support. In the systematic literature review, we initially identified 1957 studies. After several removal stages, 112 primary studies were listed and 24 tools identified in total. Along with the list of tools, we also provide a categorization of all the tools found, based on programming language support, availability (licensed, open source, free), testing technique, level of effort required by the developer to use the tool, and target domain, which we consider good properties for a developer deciding which tool to use. Additionally, we categorized the type(s) of error found by some tools, which could be beneficial for a developer assessing a tool's effectiveness. The main intent of this report is to aid developers in choosing an appropriate unit testing tool; the categorization table of available tools with automated unit testing support eases this process significantly.
This work could also benefit researchers who wish to evaluate the efficiency and effectiveness of each tool and use this information to eventually build a new tool with the same properties as several others.
APA, Harvard, Vancouver, ISO, and other styles
20

Liu, Yu. "The development of a systematic experimental method for damage identification." Thesis, This resource online, 1993. http://scholar.lib.vt.edu/theses/available/etd-06112009-063906/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Stürmer, Ingo. "Systematic testing of code generation tools a test suite oriented approach for safeguarding model based code generation." Berlin Pro Business, 2006. http://deposit.ddb.de/cgi-bin/dokserv?id=2788859&prov=M&dok_var=1&dok_ext=htm.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Mikolajczak, Jochen. "Promoting HIV-testing among MSM in the Netherlands the systematic development of an online HIV-prevention intervention /." Maastricht : Maastricht : Universitaire Pers Maastricht ; University Library, Universiteit Maastricht [host], 2008. http://arno.unimaas.nl/show.cgi?fid=12820.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Bianda, Nkembi Lydie. "The Role of Prenatal Care and Systematic HIV Testing in Preventing Perinatal Transmission in Tanzania, 2011-2012." ScholarWorks, 2017. https://scholarworks.waldenu.edu/dissertations/3486.

Full text
Abstract:
In 2012, the prevalence of HIV infection among Tanzanian women was 6.3%; that same year, 18% of Tanzanian children were born already infected with HIV. The purpose of this study was to determine the importance of prenatal care attendance on comprehensive knowledge of HIV mother-to-child transmission (MTCT), HIV testing and counseling, and awareness of HIV testing coverage services in Tanzania. The study population was Tanzanian women of childbearing age, 15 to 49 years old. Guided by the health belief model, this cross-sectional survey design used secondary data from the 2011-2012 Tanzania Demographic Health Survey. Independent variables were comprehensive knowledge of HIV MTCT, HIV testing and counseling, and awareness of HIV testing coverage services; the dependent variable was prenatal care visit (PNCV) attendance. Findings showed that 69% of women had their first PNCV in the second trimester, meaning that they attended fewer than 4 visits. Multinomial logistic regression modeling assessed the association between independent variables and PNCV attendance after controlling for sociodemographic factors. Findings denoted that comprehensive knowledge of HIV MTCT, after controlling for marital status (married vs. never married), maternal age, and wealth, was associated with PNCV attendance. HIV testing and post-test counseling, and awareness of HIV testing coverage services, were also significant for women who attended their first prenatal visit in the 2nd trimester. These findings have positive social change implications by informing efforts to identify at-risk pregnant women through systematic HIV testing and counseling for early medical intervention; such efforts may reduce MTCT and encourage women to start their PNCV in the first trimester.
APA, Harvard, Vancouver, ISO, and other styles
24

Pickering, Philip. "Towards a systematic methodology for the design, testing and manufacture of high brightness light emitting diode lighting luminaires." Thesis, University of Manchester, 2013. https://www.research.manchester.ac.uk/portal/en/theses/towards-a-systematic-methodology-for-the-design-testing-and-manufacture-of-high-brightness-light-emitting-diode-lighting-luminaires(e1573c06-9abf-402a-bfa3-13597d60079b).html.

Full text
Abstract:
Reducing the consumption of electricity is one of the principal areas of current research into energy saving technologies. Within this area is the effort to reduce the demand for electricity for lighting purposes. Considering just the domestic requirements: during 2011, domestic electricity consumption in the UK was 112TWh, 30% of a total electricity demand of 374TWh, and of this figure some 13TWh were used for lighting alone. This thesis describes research made in this area, in particular the manufacture of lighting luminaires making use of High Brightness Light Emitting Diodes (HBLEDs). The thesis outlines and demonstrates a methodology for the design, testing and subsequent manufacture of complete luminaires which make suitable, low energy consumption alternatives to conventional lighting using filament lamps and fluorescent fittings. Work has been done in the areas of: thermal management; power supply design; luminaire design; performance simulation in software; 'remote phosphor' luminaires, in which LEDs with blue light outputs are used together with phosphor-laden acrylic plates employed for both wavelength conversion of the blue light to usable white light and light diffusion; and luminaire performance measurement. It is shown that substantial savings in energy (over 50%) can be made by using HBLEDs in lighting luminaires whilst producing satisfactory lighting for a variety of purposes.
APA, Harvard, Vancouver, ISO, and other styles
25

Bhatti, Khurram, and Ahmad Nauman Ghazi. "Effectiveness of Exploratory Testing, An empirical scrutiny of the challenges and factors affecting the defect detection efficiency." Thesis, Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-5456.

Full text
Abstract:
Context: Software testing is an integral part of the software development life cycle. To improve the quality of software, different testing approaches have been practiced over the years. Traditionally, software testing is carried out by following an approach focused on prior test design. Exploratory testing, in contrast, is an approach in which the tester is not required to follow a specific test design; rather, exploratory testing should facilitate the tester in testing the complete system comprehensively. Exploratory testing is seen by some as a way to conduct learning, test design and test execution simultaneously, while others point to exploratory testing enabling constant evolution of tests in an easy manner. Objectives: In this study we investigated the field of exploratory testing in the literature and in industry to understand its perception and application. Further, among the claims stated by practitioners, we selected the defect detection efficiency and effectiveness claim for empirical validation through an experiment and a survey. Methods: In this study, a systematic literature review, interviews, an experiment and a survey were conducted. In the systematic review a number of article sources were used, including IEEE Xplore, ACM Digital Library, Engineering Village, SpringerLink, Google Scholar and the Books database. The systematic review also included the gray literature published by practitioners. The selection of studies was done using a two-phase and tollgate approach. A total of 47 references were selected as primary studies. Eight semi-structured interviews were conducted with industry practitioners. The experiment had 4 iterations and 70 subjects in total. The subjects were selected from industry and academia. The experimental design used was one factor with two interventions and one response variable.
Results: Based on our findings from the literature review and interviews, the understanding of exploratory testing has improved over the period but still lacks empirical investigation. The results drawn from the experimental and survey data show that exploratory testing proved effective and efficient in finding more critical bugs in limited time. Conclusions: We conclude that exploratory testing has a lot of potential and much more to offer to the testing industry, but more empirical investigation and true facts and figures are required to motivate the testing industry to adopt it. We have reported a number of advantages, disadvantages, challenges and factors in this study. We further investigated the claims stated by ET practitioners through an experiment and survey. Statistical tests were conducted on the collected data to draw meaningful results. We found a statistically significant difference in the number of true defects found: using the exploratory testing approach, testers found far more defects than with test case based testing. However, there was no statistically significant difference between the two approaches for false defects.
APA, Harvard, Vancouver, ISO, and other styles
26

Orzeszyna, Wojciech. "Solutions to the equivalent mutants problem : A systematic review and comparative experiment." Thesis, Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-3727.

Full text
Abstract:
Context: Mutation testing is a fault-based technique for measuring the effectiveness of a test set in terms of its ability to detect faults. Mutation testing seeds artificial faults into an application and checks whether a test suite can detect them. If these faults are not found, the test suite is not yet considered 'good enough'. However, there are also mutations which keep the program semantics unchanged and thus cannot be detected by any test suite. Finding a way to assess these mutations is known as the equivalent mutant problem (EMP). Objectives: The main objective of this thesis is to conduct a systematic literature review in the field of mutation testing, to identify and classify existing methods for equivalent mutant detection. In addition, the other objectives are: to analyze possibilities to improve existing methods for equivalent mutant detection, to implement a new or improved method, and to compare it with existing ones. Methods: Based on the systematic literature review method, we went over publications from six electronic databases and one conference proceedings. The standard method was extended by scanning lists of references and some alternative sources: searching in Google Scholar, checking personal websites of relevant authors and contacting all of them. We performed all the systematic literature review steps, such as protocol development, initial selection, final selection, quality assessment, data extraction and data synthesis. In the second part of this thesis, an experiment, we implemented four second order mutation testing strategies and compared them with each other from four different perspectives: mutant reduction, equivalent mutant reduction, fault detection loss, and mutation testing process time reduction. Results: The search identified 17 relevant techniques in 22 articles. Three categories of techniques can be distinguished: detecting (DEM), suggesting (SEM) and avoiding equivalent mutant generation (AEMG).
Furthermore, for each technique the current state of development and some ideas on how to improve it are provided. The experiment showed that the DifferentOperators strategy gives the best results in all four investigated areas. In addition, the time for manual classification of mutants as equivalent was measured: assessing one first order mutant takes 11 minutes 49 seconds, while for second order mutants the classification time is 9 minutes 36 seconds on average. Conclusions: After three decades of studies, the results obtained for techniques from the DEM group are still far from perfect (the best one detects 47.63% of equivalent mutants). Thus, new paths towards a solution have been developed: the SEM and AEMG groups. Methods from both categories help in dealing with the EMP; however, SEM methods provide only mutants likely to be equivalent, while AEMG methods cause some loss of test effectiveness. The conclusion from the experiment is that the DifferentOperators strategy gives the best results among all proposed.
APA, Harvard, Vancouver, ISO, and other styles
27

Shabbir, Ali M. Eng Massachusetts Institute of Technology. "Scale-up of a high-technology manufacturing startup : improving product reliability through systematic failure analysis and accelerated life testing." Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/101334.

Full text
Abstract:
Thesis: M. Eng. in Manufacturing, Massachusetts Institute of Technology, Department of Mechanical Engineering, 2015.<br>Cataloged from PDF version of thesis.<br>Includes bibliographical references (pages 112-115).<br>Ensuring product reliability is a key driver of success during the scale-up of a high-technology manufacturing startup. Reliability impacts the company image and its financial health; however, most manufacturing startups do not have a solid understanding of their product's reliability. The purpose of this thesis is to introduce systematic failure analysis to the engineering design process and to establish a framework for testing and analyzing product life so that imperative business decisions and design improvements can be made with regard to reliability. A detailed study and implementation of these process improvements to address reliability issues was conducted at New Valence Robotics Corporation (NVBOTS) in Boston, Massachusetts. Systematic failure analysis was achieved through the creation and implementation of Failure Modes and Effects Analysis (FMEA) procedures. A single FMEA iteration was performed on the NVPro printer to identify the top risk component, linear ball bushings, for detailed life analysis. Following an in-depth investigation of potential failure modes of the linear bushings, an Accelerated Life Test (ALT) was designed using Design of Experiments (DOE) principles. An accompanying test apparatus with mechatronic control was also designed. The ALT was not actually executed, but representative data was analyzed for illustrative purposes using the General Log-Linear (GLL) life-stress relationship and a 2-parameter Weibull distribution for the accelerating stresses of mechanical load and lubrication.
The work performed provides NVBOTS and similar high-technology manufacturing startups a complete starting point for systematically analyzing their product's reliability and quantitatively evaluating its life in a resource efficient way.<br>by Ali Shabbir.<br>M. Eng. in Manufacturing
APA, Harvard, Vancouver, ISO, and other styles
28

Dulal, Nabin Raj, and Sabindra Maharjan. "A Comparative Study of Component Based Regression Testing Approaches without Source Code." Thesis, Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-4357.

Full text
Abstract:
Context: Today, most software products are built with COTS components. When a new version of these components becomes available, it is difficult to perform testing, as the vendors of the components do not usually provide source code. Various regression testing techniques have been developed, but most rely on source code for change identification, so testers face different challenges in performing effective testing. Objectives: The goal of this research is to find out the different approaches that are used to identify changes in a modified COTS component, analyze the main characteristics of those approaches, and investigate how these characteristics can be used in the selection and development of a CBRT approach. Methods: To fulfill the aims of the research, we conducted a systematic literature review of different CBRT approaches from the years 1993-2010, from which we found 32 papers relevant to our study. Data related to our research were extracted from those papers and conclusions were drawn. The relevant articles were searched in six scientific databases: IEEE Xplore, ACM Digital Library, SpringerLink, Science Direct, Scopus, and Engineering Village. Furthermore, an online survey was conducted based on the characteristics of CBRT approaches, to validate the SLR results. Results: From the systematic literature review we identified 8 different characteristics of CBRT approaches: applicability, automation, complexity, behavior model used, coverage criteria, strength and weakness, theory used, and input. We observe that these are the most important characteristics of CBRT approaches and should be considered when selecting or developing a new CBRT approach. The results from the survey also validate our findings, and some additional factors were identified through the survey. Conclusion: The research develops the state of the art of CBRT approaches towards future research.
The result of this thesis will be helpful for researchers as well as practitioners who are working on CBRT, and it can be considered a basis for further study. Based on the result of this thesis, further study can be done on building a framework from these characteristics to support component based regression testing.
APA, Harvard, Vancouver, ISO, and other styles
29

Zeneli, Mirjan. "Developing, testing and interpreting a cross age peer tutoring intervention for mathematics : social interdependence, systematic reviews and an empirical study." Thesis, Durham University, 2015. http://etheses.dur.ac.uk/11367/.

Full text
Abstract:
Cross-age peer tutoring is a peer learning strategy which has been shown to improve both social and academic learning-process factors as well as attainment in various subjects. There is, however, still room for the intervention to be developed, which was the aim of this work. This was done by applying important social interdependence aspects, such as resource, interpersonal and goal interdependence, to a cross-age peer tutoring intervention in mathematics. Prior to developing the method, the researcher engaged with the theoretical literature and provides two forms of systematic review. The newly informed cross-age peer tutoring method was then tested in three schools, two of which adopted a pre-post-test quasi-experimental design while one took a single group pre-post-test design. All the schools applied an Interdependent Cross-Age Tutoring (ICAT) format for a period of 6 weeks, on the basis of a 30 minute session once a week. Mathematics head-teachers, facilitators, teachers and students were all trained in various aspects of ICAT. To capture and interpret the impact of the intervention, performance instruments were devised for each school, together with various previously established attitude sub-scales. In order to measure implementation fidelity, ICAT lesson materials were collected for most of the topics, and each school received general as well as structured pair observations from the researcher. Also, in order to explore how different groups learned under ICAT, the lesson materials of the higher performing tutees were compared to those of the lower performing tutees on various aspects. The findings were mixed, with one of the quasi-experimental design schools showing a highest effect size of 0.81, favoring the ICAT group. The impact of ICAT on important and broader process-of-learning attitude variables, social as well as academic, is also discussed.
Comparisons of lesson materials between higher performing and lower performing tutees revealed that the highest performing tutees showed better implementation of an essential social interdependence aspect: setting a shared academic goal.
APA, Harvard, Vancouver, ISO, and other styles
30

Hulbert-Williams, Nicholas James. "Systematic review and empirical investigation of adjustment to cancer diagnosis : predicting clinically relevant psychosocial outcomes and testing Lazarus's Transactional Model of stress." Thesis, Cardiff University, 2009. http://orca.cf.ac.uk/55823/.

Full text
Abstract:
Cancer is one of the leading causes of death in the UK. The Cancer Reform Strategy (2007) highlighted the need for integration of psychological services into routine cancer care. Previous research into psychosocial aspects of adjustment is, however, inconsistent. This thesis opens with a background on cancer epidemiology and policy, the psychological impact of cancer, and the shortcomings of previous intervention-based research. The Transactional Model is introduced as a potential framework for modelling adjustment. The thesis aimed to test this model for cancer patients in order to provide evidence to better inform the provision of psychological services for cancer patients. A systematic review summarised the literature exploring the extent to which personality, appraisals and emotions were associated with psychosocial outcome. 68 studies were included. A number of small meta-analyses were performed using the Hunter and Schmidt method. Findings demonstrated a lack of consistency and a number of research questions still unanswered. A methodological critique was provided based on systematic quality assessment. The empirical study had two purposes: prediction of clinical outcome and theory development. 160 recently diagnosed colorectal, breast, lung and prostate cancer patients were recruited. Measures of personality, appraisal, emotion, coping and outcome (anxiety, depression and quality of life) were collected at baseline, three- and six-month follow-up. Analyses demonstrated that the data generally fitted the model, but adaptations were proposed. Clinically, between 47 and 74% of variance in psychosocial outcome was explained by these predictor variables, with cognitive appraisals the most predictive of all Transactional Model components. Statistical theory testing of cognition-emotion processes did not confirm the Transactional Model (Lazarus, 1999).
These findings question the prescriptive nature of the theory, and further testing is suggested, particularly in response to chronic stressors. Guidelines for methodological improvements are provided. The thesis concludes with proposals for further research, including suggestions for theory-informed interventions.
APA, Harvard, Vancouver, ISO, and other styles
31

Gao, Shenjian, and Yanwen Tan. "Paving the Way for Self-driving Cars - Software Testing for Safety-critical Systems Based on Machine Learning : A Systematic Mapping Study and a Survey." Thesis, Blekinge Tekniska Högskola, Institutionen för programvaruteknik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-15681.

Full text
Abstract:
Context: With the development of artificial intelligence, autonomous vehicles are becoming more and more feasible and the safety of Automated Driving (AD) system should be assured. This creates a need to analyze the feasibility of verification and validation approaches when testing safety-critical system that contains machine learning (ML) elements. There are many studies published in the context of verification and validation (V&amp;V) research area related to safety-critical components. However, there are still blind spots of research to identify which test methods can be used to test components with deep learning elements for AD system. Therefore, research should focus on researching the relation of test methods and safety-critical components, also need to find more feasible V&amp;V testing methods for AD system with deep learning structure. Objectives: The main objectives of this thesis is to understand the challenges and solution proposals related to V&amp;V of safety-critical systems that rely on machine learning and provide recommendations for future V&amp;V of AD based on deep learning, both for research and practice. Methods: We performed a Systematic Literature Review (SLR) through a snowballing method, based on the guidelines from Wohlin [1], to identify research on V&amp;V methods development for machine learning. A web-based survey was used to complement the result of literature review and evaluate the V&amp;V challenge and methods for machine learning system. We identified 64 peer-reviewed papers and analysed the methods and challenges of V&amp;V for testing machine learning components. We conducted an industrial survey that was answered by 63 subjects. We analyzed the survey results with the help of descriptive statistics and Chi-squared tests. Result: Through the SLR we identified two peaks for research on V&amp;V of machine learning. 
Early research focused on the aerospace field, and in recent years the research has been more active in other fields like automotive and robotics. 21 challenges in V&V of safety-critical systems have been described, and 32 solution proposals addressing these challenges have been identified. To find the relationship between challenges and methods, a classification was made, identifying seven different types of challenges and five different types of solution proposals. The classification and mapping of challenges and solution methods are included in the survey questionnaire. From the survey, it was observed that some solution proposals which have attracted much research are not considered particularly promising by practitioners. On the other hand, some new solution methods, like simulated test cases, are extremely promising for supporting V&V of safety-critical systems. Six suggestions are provided to both researchers and practitioners. Conclusion: To conclude the thesis, our study presented a classification of challenges and solution methods for V&V of safety-critical ML-based systems. We also provide a mapping to help practitioners understand which kinds of challenges the respective solution methods address. Based on our findings, we provide suggestions to both researchers and practitioners. Thus, through the analysis, we have focused on the types of challenges and solution proposals for AD systems that use deep learning, which helps in designing processes for V&V of safety-critical ML-based systems in the future.
APA, Harvard, Vancouver, ISO, and other styles
32

Shao, Jing. "Glycated haemoglobin A1c compared to fasting plasma glucose and oral glucose tolerance testing for diagnosing type 2 diabetes and pre-diabetes : a meta-analysis." Diss., University of Pretoria, 2014. http://hdl.handle.net/2263/43240.

Full text
Abstract:
BACKGROUND In 2010, glycated haemoglobin A1c (HbA1c) was officially recommended as a screening tool to diagnose type 2 diabetes mellitus (T2DM) and pre-diabetes, with cut-off points of 6.5% and 5.7% to 6.4%, respectively. The implications of using the HbA1c criterion, compared to the general diagnostic criteria, the fasting plasma glucose test (FPG) and the oral glucose tolerance test (OGTT), are, however, still being debated. OBJECTIVES The objectives of this study were to evaluate and compare the pooled prevalence of type 2 diabetes mellitus (T2DM) and pre-diabetes, as measured by the Haemoglobin A1c (HbA1c) test, or the fasting plasma glucose (FPG) and oral glucose tolerance test (OGTT); secondly, to determine and compare the diagnostic test characteristics (sensitivity, specificity) of these tests. METHODS Published papers with a cross-sectional study design were selected for a systematic review and meta-analysis. The search strategy was an electronic review of journal articles listed on MEDLINE, PubMed and Google Scholar between 1996 and 2012. Reference lists were checked, journals were hand searched and experts were contacted when necessary. Initially, all studies related to the validation of HbA1c as a tool to detect pre-diabetes or T2DM in humans, published in English, were examined. Studies were excluded if they did not meet the above-mentioned criteria and/or were conducted with pregnant women. Further analysis was done if FPG or OGTT was compared to HbA1c. The diagnosis of diabetes had to have been based on ADA or WHO criteria. These criteria are: HbA1c 5.7%-6.4% for pre-diabetes and >=6.5% for T2DM; FPG 5.6mmol-7mmol/l for pre-diabetes and >=7mmol/l for T2DM; OGTT 7.8mmol-11.1mmol/l for pre-diabetes and >=11.1mmol/l for T2DM. The OGTT and FPG tests were used as the reference tests and the prevalence was reflected as a positive or negative proportion.
The sensitivity and specificity of HbA1c >=6.5% among cases defined by OGTT or FPG should have been reported, or it had to be possible to calculate these from the data provided. Study results relating to diagnostic accuracy were extracted and synthesized using multivariate random-effects meta-analysis methods. This study focused on patients who were suspected of having T2DM, from two sub-groups (a community-based group and a high-risk group), to compare the detection rate of HbA1c with FPG and OGTT.
Dissertation (MSc)--University of Pretoria, 2014. School of Health Systems and Public Health (SHSPH).
APA, Harvard, Vancouver, ISO, and other styles
33

Nha, Vi Tran Ngoc. "Identification and Analysis of Combined Quality Assurance Approaches." Thesis, Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-4007.

Full text
Abstract:
Context: Due to the increasing size and complexity of software today, the amount of effort for software quality assurance (QA) is growing and getting more and more expensive. There are many techniques that lead to improvements in software QA. Static analysis can obtain very good coverage while analyzing a program without executing it, but it has the weakness of imprecision due to false errors. In contrast, dynamic analysis can obtain only partial coverage due to the large number of possible test cases, but the reported errors are more precise. Static and dynamic analyses can complement each other by providing valuable information that would be missed by using either analysis technique in isolation. Although many studies investigate QA approaches that combine static and dynamic QA techniques, it is unclear what we have learned from these studies, because no systematic synthesis exists to date. Method: This thesis is intended to provide the basic key concepts for combined QA approaches. A major part of this thesis presents the systematic review, which brings a detailed discussion of the state of the art of approaches that combine static and dynamic QA techniques. The systematic review aims to identify existing combined QA approaches, classify them, and describe their purposes and inputs, as well as which combinations are available. Result: The results show that there are two kinds of relations in the combination of static and dynamic techniques: integration and separation. Besides, the objectives of combined QA approaches were categorized according to QA process quality and product quality. The most common inputs for combined approaches were also discussed. Moreover, we identified which combinations of static and dynamic techniques should or should not be used, as well as potential combinations for further research.
APA, Harvard, Vancouver, ISO, and other styles
34

Salman, I. (Iflaah). "The effects of confirmation bias and time pressure in software testing." Doctoral thesis, Oulun yliopisto, 2019. http://urn.fi/urn:isbn:9789526224442.

Full text
Abstract:
Abstract Background: Confirmation bias is the tendency to search for evidence that confirms a person’s preconceptions. Confirmation bias among software testers is their tendency to validate the correct functioning of the program rather than testing it to reveal errors. Psychology literature suggests that time pressure may promote confirmation bias because time pressure impedes analytical processing of the task at hand. Time pressure is perceived negatively for its effects in software engineering (SE); therefore, its effect on confirmation bias may degrade software quality. Objective: We aim to examine confirmation bias among software testers. Additionally, we examine the effect of time pressure on confirmation bias and how time pressure affects the testers’ perception of their performance. We also ask what other antecedents to confirmation bias exist in software testing and how they lead to it. Method: We first examined the state-of-the-art research on cognitive biases in SE using systematic mapping. Then, we empirically examined the feasibility of using students in further experiments. An experiment with 42 students (novice professionals) investigated the manifestation of confirmation bias and whether time pressure promotes it. Another experiment with 87 novice professionals examined the perception of the performance of software testers under time pressure. A grounded theory study based on the interview data of 12 practitioners explored other antecedents to confirmation bias in software testing and how they lead to it. Results: Time pressure emerged as a major antecedent to confirmation bias in the grounded theory. Testers prefer to validate the correct functioning of the program under time pressure. However, time pressure did not significantly promote confirmation bias among testers. Software testers significantly manifest confirmation bias irrespective of time pressure. The perception of performance is also sustained irrespective of time pressure.
Conclusion: Testers should develop self-awareness of confirmation bias and improve their perception of performance in order to improve their actual testing. In industry, automated testing may alleviate confirmation bias due to time pressure by rapidly executing the test suites.
APA, Harvard, Vancouver, ISO, and other styles
35

Raby, Carlotta. "Identifying risks for male street gang affiliation : a systematic review and design and validation of the gang affiliation risk measure (GARM)." Thesis, Canterbury Christ Church University, 2016. http://create.canterbury.ac.uk/14700/.

Full text
Abstract:
This study aimed to create the first measure of risk for UK gang affiliation. A pilot stage invited gang-affiliated and non-gang-affiliated participants between the ages of 16 and 25 to retrospectively self-report on 58 items of risk exposure at the age of 11. Based on the performance of these items, a 26-item measure was developed and administered to a main study sample (n=185) of gang-affiliated and non-gang-affiliated participants. Categorical Principal Component Analysis was applied to the data, yielding a single-factor solution (historic lack of safety and current perception of threat). A 15-item gang-affiliation risk measure (GARM) was subsequently created. The GARM demonstrated good internal consistency, construct validity and discriminative ability. Items from the GARM were then transformed to read prospectively, resulting in a test measure for predictive purposes (T-GARM). However, the T-GARM requires further validation regarding its predictive utility and generalisability.
APA, Harvard, Vancouver, ISO, and other styles
36

Hensen, Bernadette. "Increasing men's uptake of HIV-testing in sub-Saharan Africa : a systematic review of interventions and analyses of population-based data from rural Zambia." Thesis, London School of Hygiene and Tropical Medicine (University of London), 2016. http://researchonline.lshtm.ac.uk/2531234/.

Full text
Abstract:
Men's uptake of HIV-testing and counselling services across sub-Saharan Africa is inadequate relative to universal access targets. A better understanding of the effectiveness of available interventions to increase men's HIV-testing, and of men's HIV-testing behaviours, is required to inform the development of strategies to increase men's levels and frequency of HIV-testing. My thesis aims to fill this gap. To achieve this, I combine a systematic review of randomised trials of interventions to increase men's uptake of HIV-testing in sub-Saharan Africa with analyses of two population-based surveys from Zambia, through which I investigate the levels of, and factors associated with, HIV-testing behaviours. I also conduct an integrated analysis to explore whether the scale-up of voluntary medical male circumcision (VMMC) services between 2009 and 2013 contributed to increasing men's population levels of HIV-testing. In the systematic review I find that strategies to increase men's HIV-testing are available. Health facility-based strategies, including reaching men through their pregnant partners, reach a high proportion of men attending facilities; however, they have a low reach overall. Community-based mobile HIV-testing is effective at reaching a high proportion of men, reaching 44% of men in Tanzania and 53% in Zimbabwe compared to 9% and 5% in clinic-based communities, respectively. In the population-based surveys, HIV-testing increased with time: 52% of men ever-tested in 2011/12 compared to 61% in 2013. Less than one-third of men reported a recent test in both surveys and 35% multiple lifetime HIV-tests. Having a spouse who ever-tested and markers of socioeconomic position were associated with HIV-testing outcomes, and a history of TB with ever-testing. The scale-up of VMMC provided men who opt for circumcision with access to HIV-testing services: 86% of circumcised men ever-tested for HIV compared to 59% of uncircumcised men.
However, there was little evidence that VMMC services contributed to increasing HIV-testing among men in this rural Zambian setting. Existing strategies to increase men’s uptake of HIV-testing are effective. Over half the men in two population-based surveys reported ever-testing for HIV in rural Zambia. Nonetheless, some 40% of men never-tested. Men’s frequency of HIV-testing was low relative to recommendations that individuals with continued risk of HIV-infection retest annually for HIV. Innovative strategies are required to provide never-testers with access to available services and to increase men’s frequency of HIV-testing.
APA, Harvard, Vancouver, ISO, and other styles
37

De, Silva Jayasekera Varthula Janya. "Systematic Generation of Lack-of-Fusion Defects for Effects of Defects Studies in Laser Powder Bed Fusion AlSi10Mg." Youngstown State University / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=ysu1598531488781737.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Chapman, Andrew R. "Improving the risk stratification, diagnosis and classification of patients with suspected myocardial infarction." Thesis, University of Edinburgh, 2018. http://hdl.handle.net/1842/33163.

Full text
Abstract:
Myocardial infarction is a leading cause of morbidity and mortality worldwide. The purpose of this thesis was to develop strategies for the assessment of patients with suspected myocardial infarction using a high-sensitivity cardiac troponin I assay, and to evaluate the relationship between the aetiology of myocardial infarction and long term clinical outcomes to identify opportunities to modify outcomes. In the United Kingdom, approximately 1 million patients present to hospital with chest pain each year and are assessed for suspected myocardial infarction, yet fewer than 20% of patients receive this diagnosis. Prior clinical standards mandated the admission of patients for serial cardiac troponin testing to identify myocardial necrosis and determine if myocardial infarction had occurred. However, new high-sensitivity assays offer a marked improvement in diagnostic precision, and as such provide a novel approach to diagnose or exclude myocardial infarction at an earlier stage. In the first study, I evaluate the performance of a high-sensitivity cardiac troponin I assay as a risk stratification tool in patients with suspected acute coronary syndrome. A systematic review and individual patient-level data meta-analysis was performed, including prospective studies measuring high-sensitivity cardiac troponin I in patients with suspected acute coronary syndrome, where the diagnosis was adjudicated according to the universal definition of myocardial infarction. The primary outcome was myocardial infarction or cardiac death during the index hospitalization or at 30 days. Meta-estimates for primary and secondary outcomes were derived using a binomial-normal random effects model. Performance was evaluated in subgroups and across a range of troponin concentrations (2-16 ng/L) using individual patient data.
A total of 22,457 patients were included in the meta-analysis (age 62 [15.5] years; n=9,329 (41.5%) women), of whom 2,786 (12.4%) experienced myocardial infarction or cardiac death at 30 days. Cardiac troponin I concentrations were < 5 ng/L at presentation in 11,012 (49%) patients, with a negative predictive value of 99.5% (95% confidence interval [CI] 99.3-99.6) for myocardial infarction or cardiac death at 30 days. Lower thresholds did not improve safety, but did significantly reduce the proportion identified as low risk. This threshold of 5 ng/L formed the basis for the development of a diagnostic pathway for patients with suspected acute coronary syndrome. In a cohort study of 1,218 patients with suspected acute coronary syndrome who underwent high-sensitivity cardiac troponin I measurement at presentation, 3 and 6 or 12 hours, I derived and validated a novel pathway (rule out myocardial infarction if < 5 ng/L at presentation, or change < 3 ng/L and < 99th centile at 3 hours), and compared this with the established European Society of Cardiology 3-hour pathway (rule out myocardial infarction if < 99th centile at presentation, or at 3 hours if symptoms < 6 hours). The primary outcome was a comparison of the negative predictive value (NPV) of both pathways for myocardial infarction or cardiac death at 30 days. The primary outcome was evaluated in pre-specified subgroups stratified by age, gender, time of symptom onset and known ischaemic heart disease. In those < 99th centile at presentation, the ESC pathway ruled out myocardial infarction in 28.1% (342/1,218) and 78.9% (961/1,218) at presentation and 3 hours respectively, missing 18 index and two 30-day events (NPV 97.9%, 95% confidence intervals [CI] 96.9-98.7%). The novel pathway ruled out 40.7% (496/1,218) and 74.2% (904/1,218) at presentation and 3 hours, missing two index and two 30-day events (NPV 99.5%, 95% CI 99.0-99.9%; P < 0.001 for comparison). 
The NPV of the novel pathway was greater than the ESC pathway overall (P < 0.001), and in all subgroups including those presenting early or known to have ischaemic heart disease. There are a number of additional approaches for the rule out of myocardial infarction. Clinical risk scores apply conventional risk factors to estimate the probability of myocardial infarction. The most widely implemented scores, HEART, EDACS, GRACE and TIMI, have been extensively validated when used alongside contemporary troponin assays, however, their impact on pathways applying high-sensitivity cardiac troponin testing is less clear. In 1,935 patients with suspected acute coronary syndrome, I evaluated the safety and efficacy of our novel pathway or the European Society of Cardiology 3-hour pathway alone, or in conjunction with low-risk TIMI (0 or 1), GRACE (≤108), EDACS (< 16) or HEART (≤3) scores. Myocardial infarction or cardiac death at 30-days occurred in 14.3% (276/1,935). The ESC pathway ruled out 70% with 27 missed events giving a negative predictive value (NPV) of 97.9% (95% confidence interval [CI], 97.1 to 98.6%). Addition of a HEART score ≤3 reduced the proportion ruled out by the ESC pathway to 25%, but improved the NPV to 99.7% (95%CI 99.0 to 100%, P < 0.001). The novel pathway ruled out 65% with three missed events for a NPV of 99.7% (95%CI 99.4 to 99.9%). No risk score improved the NPV, but all reduced the proportion ruled out (24-47%, P < 0.001 for all). Whilst myocardial infarction due to atherosclerotic plaque rupture and thrombosis (type 1) is well described, the natural disease course of myocardial infarction due to oxygen supply-demand imbalance without atherothrombosis (type 2) is poorly understood. I aimed to define long-term outcomes and explore risk stratification in patients with type 2 myocardial infarction and myocardial injury. 
Consecutive patients (n=2,122) with elevated cardiac troponin I concentrations (≥0.05 μg/L) were identified at a tertiary cardiac centre. All diagnoses were adjudicated as per the Universal Definition of Myocardial Infarction. The primary outcome was all-cause death. Secondary outcomes included major adverse cardiovascular events (MACE; non-fatal myocardial infarction or cardiovascular death) and non-cardiovascular death. To explore competing risks, cause-specific hazard ratios were obtained using Cox regression models. The adjudicated index diagnosis was type 1 or type 2 myocardial infarction or myocardial injury in 1,171 (55.2%), 429 (20.2%) and 522 (24.6%) patients, respectively. At five years, all-cause death rates were higher in those with type 2 myocardial infarction (62.5%) or myocardial injury (72.4%) compared with type 1 myocardial infarction (36.7%). The majority of excess deaths in those with type 2 myocardial infarction or myocardial injury were due to non-cardiovascular causes (HR 2.32, 95%CI 1.92-2.81, versus type 1 myocardial infarction). Despite this, the observed crude MACE rates were similar between groups (30.6% versus 32.6%), with differences apparent after adjustment for co-variates (HR 0.82, 95%CI 0.69-0.96). Coronary heart disease was an independent predictor of MACE in those with type 2 myocardial infarction or myocardial injury (HR 1.71, 95%CI 1.31-2.24). Patients with type 2 myocardial infarction were less likely to receive secondary prevention therapy, suggesting a treatment gap may exist and there may be potential to modify clinical outcomes. A risk stratification threshold has been defined using high-sensitivity cardiac troponin I which identifies patients at very low risk of myocardial infarction or cardiac death. A diagnostic pathway incorporating this risk stratification threshold appears safer than established guidelines which apply the 99th centile alone. 
The use of clinical risk scores does not appear to improve the safety of this approach, however, does significantly reduce efficacy. Overall, these findings demonstrate the potential of high-sensitivity cardiac troponin testing to improve the efficiency of the assessment of patients with suspected acute coronary syndrome without compromising patient safety. The observations in those with myocardial injury and infarction have identified a phenotype of patients with type 2 myocardial infarction and coronary artery disease who are at increased cardiovascular risk, and who may benefit from targeted secondary prevention. The studies presented will inform the design of future clinical trials, and may inform international guidelines for the assessment of patients with suspected acute coronary syndrome.
APA, Harvard, Vancouver, ISO, and other styles
39

Ross-Davie, Mary C. "Measuring the quantity and quality of midwifery support of women during labour and childbirth : the development and testing of the 'Supportive Midwifery in Labour Instrument'." Thesis, University of Stirling, 2012. http://hdl.handle.net/1893/9796.

Full text
Abstract:
The thesis describes the development and testing of a new computer based systematic observation instrument designed to facilitate the recording and measurement of the quantity and quality of midwifery intrapartum support. The content of the systematic observation instrument, the ‘SMILI’ (Supportive Midwifery in Labour Instrument), was based on a comprehensive review of the literature. The instrument was found to be valid and reliable in a series of studies. The feasibility and usability of the SMILI was extensively tested in the clinical setting in four maternity units in Scotland, UK. One hundred and five hours of direct observation of forty nine labour episodes were undertaken by four trained midwife observers. The clinical study demonstrated that the study and the instrument were feasible, usable and successful in measuring the quantity and quality of midwifery intrapartum support. The data collected has provided significant new information about the support given by midwives in the National Health Service of Scotland, UK. Continuous one to one support was the norm, with 92% of the observed midwives in the room for more than 80% of the observation period. Emotional support, including rapport building, encouragement and praise, was the most frequently recorded category of support.
APA, Harvard, Vancouver, ISO, and other styles
40

Daza, Rojas Juan Manuel. "BIOGEOGRAPHY AND DIVERSIFICATION IN THE NEOTROPICS: TESTING MACROEVOLUTIONARY HYPOTHESES USING MOLECULAR PHYLOGENETIC DATA." Doctoral diss., University of Central Florida, 2010. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/2665.

Full text
Abstract:
Lineage diversification in the Neotropics is an interesting topic in evolutionary biology and one of the least understood. The complexity of the region precludes generalizations regarding the historical and evolutionary processes responsible for the observed high diversity. Here, I use molecular data to infer evolutionary relationships and test hypotheses of current taxonomy, species boundaries, speciation and biogeographic history in several lineages of Neotropical snakes. I comprehensively sampled a widely distributed Neotropical colubrid snake and Middle American pitvipers and combined my data with published sequences. Within the colubrid genus Leptodeira, mitochondrial and nuclear markers revealed a phylogeographic structure that disagrees with the taxonomy based only on morphology. Instead, the phylogenetic structure corresponds to specific biogeographic regions within the Neotropics. Molecular evidence combined with explicit divergence time estimates reject the hypothesis that highland pitvipers in Middle America originated during the climatic changes of the Pleistocene. My data, instead, show that pitviper diversification occurred mainly during the Miocene, a period of active orogenic activity. Using multiple lineages of Neotropical snakes in a single phylogenetic tree, I describe how the closure of the Isthmus of Panama generated several episodes of diversification, as opposed to the Motagua-Polochic fault in Guatemala, where a single vicariant event may have led to diversification of snakes with different ecological requirements. This finding has implications for future biogeographic studies in the region, as explicit temporal information can be readily incorporated in molecular clock analyses.
Bridging the gap between the traditional goals of historical biogeography (i.e., area relationships) and robust statistical methods, my research can be applied to multiple levels of the biological hierarchy (i.e., above the species level), other regional systems and other sub-disciplines in biology such as medical research, evolutionary ecology, taxonomy and conservation.
Ph.D. Department of Biology, Sciences; Conservation Biology PhD.
APA, Harvard, Vancouver, ISO, and other styles
41

Karhapää, P. (Pertti). "Alignment of requirements engineering and software testing: a systematic mapping study." Master's thesis, University of Oulu, 2016. http://urn.fi/URN:NBN:fi:oulu-201611103000.

Full text
Abstract:
Context: As a result of the separation of concerns of a software development project into different phases, the requirements engineering (RE) and software testing (ST) activities have drifted far apart. RE and ST are two activities of a software development process that supplement each other. The requirements dictate how the software to be developed should operate, and testing should verify that the software operates exactly as required. Thus the development process could benefit from linking the RE and ST activities for closer collaboration. This is highly important in industry today, where systems are oftentimes very complex with thousands of requirements, particularly in safety-critical domains. Objective: The objective of this thesis is to identify, aggregate, classify and structure all existing research regarding alignment of RE and ST published by the end of 2015, through a systematic mapping study. The contributions are analysed in terms of publication venues, publication year, contribution and research types, benefits and challenges, and how alignment is supported in the studies, both from an academic and a practitioner viewpoint. Method: The method applied in this thesis is a systematic mapping study, which is very similar to a systematic literature review. The research question can be much less specific and more open in a systematic mapping study compared to that of a systematic literature review, since the aim is not to find an applicable solution to a certain problem, but to structure research in a certain area. Results: The intensity of research shows that there has been increased interest in the topic in the previous decade, and the number of journal or magazine publications has increased during recent years. Most of the studies contribute with evaluations of frameworks, methods and techniques in case studies, together with a few studies presenting practices to support alignment.
Tool support encompassing the whole development process and metrics of alignment are concerns requiring more research. The arguments for the benefits of alignment are very convincing, but evidence of these benefits is scarce. Conclusion: The importance of aligning RE and ST for an optimized development process has been recognized by both researchers and industrial practitioners. RE is as important as ever in the development process to be able to meet the needs of the users; however, RE alone cannot guarantee the success of a development project, and testing has to be taken into account early on. The main benefits of aligning RE and ST, together with the right tool support for automation, are a decreased burden on engineers, shorter time to market, reduced cost of the development process, and more satisfied customers. This thesis provides an inventory of studies relevant to the topic that are otherwise scattered around in many different journals, workshops and conferences.
APA, Harvard, Vancouver, ISO, and other styles
42

Breinholt, Jesse W. "Testing Crayfish Evolutionary Hypotheses with Phylogenetic Methods." BYU ScholarsArchive, 2012. https://scholarsarchive.byu.edu/etd/3563.

Full text
Abstract:
This dissertation focuses on increasing the understanding of the evolutionary processes that have contributed to the diversification of freshwater crayfish. Chapter one estimates the divergence time of the three crayfish families and tests the hypothesis that diversification is tied to the break-up of Pangaea, Gondwana, and Laurasia. I find that the families of crayfish diverged prior to or in association with the break-up of the three supercontinents. Chapter two addresses the evolutionary history of the genus Cambarus, using molecular data to test hypotheses of relationships based on chela and carapace morphology. The results provide evidence that the morphology used to determine Cambarus relationships does not reflect evolutionary history and that convergent evolution of morphological traits is common in crayfish. Chapter three addresses evolution at the population level and tests for differences in the genetic population structure of two crayfish with different physiological needs. I find that the physiological requirements of these crayfish have influenced their population genetic structure. The last chapter addresses a molecular-based hypothesis that rates of mitochondrial evolution are reduced in cave crayfish that have increased longevity, reduced metabolism, and restricted diets compared to surface crayfish. I find that cave crayfish rates of mitochondrial evolution do not significantly differ from those of surface crayfish. Therefore, increased longevity, reduced metabolism, and restricted diets do not slow the rate of mitochondrial evolution as predicted in this group of cave crayfish.
APA, Harvard, Vancouver, ISO, and other styles
43

Lu, Bin. "Energy Usage Evaluation and Condition Monitoring for Electric Machines using Wireless Sensor Networks." Diss., Georgia Institute of Technology, 2006. http://hdl.handle.net/1853/14152.

Full text
Abstract:
Energy usage evaluation and condition monitoring for electric machines are important in industry for overall energy savings. Traditionally these functions are realized only for large motors in wired systems formed by communication cables and various types of sensors. The unique characteristics of the wireless sensor networks (WSN) make them the ideal wireless structure for low-cost energy management in industrial plants. This work focuses on developing nonintrusive motor-efficiency-estimation methods, which are essential in the wireless motor-energy-management systems in a WSN architecture that is capable of improving overall energy savings in U.S. industry. This work starts with an investigation of existing motor-efficiency-evaluation methods. Based on the findings, a general approach of developing nonintrusive efficiency-estimation methods is proposed, incorporating sensorless rotor-speed detection, stator-resistance estimation, and loss estimation techniques. Following this approach, two new methods are proposed for estimating the efficiencies of in-service induction motors, using air-gap torque estimation and a modified induction motor equivalent circuit, respectively. The experimental results show that both methods achieve accurate efficiency estimates within ±2-3% errors under normal load conditions, using only a few cycles of input voltages and currents. The analytical results obtained from error analysis agree well with the experimental results. Using the proposed efficiency-estimation methods, a closed-loop motor-energy-management scheme for industrial plants with a WSN architecture is proposed. Besides the energy-usage-evaluation algorithms, this scheme also incorporates various sensorless current-based motor-condition-monitoring algorithms. A uniform data interface is defined to seamlessly integrate these energy-evaluation and condition-monitoring algorithms.
Prototype wireless sensor devices are designed and implemented to satisfy the specific needs of motor energy management. A WSN test bed is implemented. The applicability of the proposed scheme is validated from the experimental results using multiple motors with different physical configurations under various load conditions. To demonstrate the validity of the measured and estimated motor efficiencies in the experiments presented in this work, an in-depth error analysis on motor efficiency measurement and estimation is conducted, using maximum error estimation, worst-case error estimation, and realistic error estimation techniques. The conclusions, contributions, and recommendations are summarized at the end.
APA, Harvard, Vancouver, ISO, and other styles
44

Zaib, Shah, and Pavan Kumar Lakshmisetty. "A systematical literature review and industrial survey in addressing the possible impacts with the continuous testing and delivery during DevOps transformation." Thesis, Blekinge Tekniska Högskola, Institutionen för programvaruteknik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-21107.

Full text
Abstract:
Context: Digital transformation poses new challenges to organizations. Market needs and competition in business have changed the continuous testing environment and continuous delivery in software organizations. There is a great possibility of conflict between the development and testing operations because they are separate entities in large organizations. Organizations where testers, developers, and operation teams do not communicate well and lack collaboration can have their productivity affected negatively. This leads to defects and errors at the early stages of the development process. The DevOps approach enhances the integration, delivery, performance, and communication between developers, testers, and operational members. Organizations are reluctant to apply DevOps practices because there is a lack of agreement on DevOps characteristics. The most difficult part for a large organization is DevOps adaptation and its implementation because of its legacy structure. It is necessary to gain an understanding of DevOps implementation in organizations before they start transforming. Objectives: The thesis aims to identify the challenges organizations face towards continuous delivery and provide a list of techniques or strategies to overcome continuous testing and DevOps challenges. This thesis also identifies the communication challenges between continuous testing and delivery teams during the COVID-19 pandemic and the effect of software architecture on testing in a DevOps environment. Methods: To achieve the research goal, multiple research method techniques are applied. A systematic literature review is conducted to identify the literature and to meet the research goal. A survey is conducted for the verification of the data from the SLR. Interviews are used as the data collection method in the survey to explore the actual process of continuous testing and delivery in large DevOps companies.
Results: A list of challenges large organizations face towards continuous delivery is generated. A list of strategies and solutions for the challenges of continuous testing and DevOps is generated. A list of post-COVID-19 communication challenges between testing and delivery groups in DevOps is created. A list of software architecture and production environment effects on testing is also generated. After analyzing the SLR results, a survey is conducted to validate the results with software practitioners. Thematic analysis is performed on the results. Finally, the findings from the SLR and the survey are compared. Conclusions: This research's findings can help researchers, industry practitioners, and anyone who wants to investigate further the possible effects of continuous testing and delivery during DevOps transformation. We observed that industry practitioners could enhance their communication channels by reviewing the post-COVID-19 communication challenges between testing and delivery teams. We also observed that more research is required to continue on this topic.
APA, Harvard, Vancouver, ISO, and other styles
45

Small, Nicola. "Patient empowerment in long-term conditions : development and validation of a new measure." Thesis, University of Manchester, 2012. https://www.research.manchester.ac.uk/portal/en/theses/patient-empowerment-in-longterm-conditions-development-and-validation-of-a-new-measure(b85db41b-5898-4c51-a180-78439eb94ea7).html.

Full text
Abstract:
Background: Patient empowerment is viewed as a priority by policy makers, patients and practitioners worldwide. Although there are a number of measures available, none have been developed specifically for patients in the UK with long-term conditions. It is the aim of this study to report the development and preliminary validation of an empowerment instrument for patients with long-term conditions in primary care. Methods: The study involved three methods. Firstly, a systematic review was conducted to identify existing empowerment instruments, and to describe, compare and appraise their content and quality. The results supported the need for a new instrument. Item content of existing instruments helped support development of the new instrument. Secondly, empowerment was explored in patients with long-term conditions and primary care practitioners using qualitative methods, to explore its meaning and the factors that support or hinder empowerment. This led to the development of a conceptual model to support instrument development. Thirdly, a new instrument for measuring empowerment in patients with long-term conditions in primary care was developed. A cross-sectional survey of patients was conducted to collect preliminary data on acceptability, reliability and validity, using pre-specified hypotheses based on existing theoretical and empirical work. Results: Nine instruments meeting review inclusion criteria were identified. Only one instrument was developed to measure empowerment in long-term conditions in the context of primary care, and that was judged to be insufficient in terms of content and purpose. Five dimensions (‘identity’, ‘knowledge and understanding’, ‘personal control’, ‘personal decision-making’, and ‘enabling other patients’) of empowerment were identified through published literature and the qualitative work and incorporated into a preliminary version of the new instrument. A postal survey achieved 197 responses (response rate 33%).
Almost half of the sample reported circulatory, diabetic or musculoskeletal conditions. Exploratory factor analysis suggested a three-factor solution (‘identity’, ‘knowledge and understanding’ and ‘enabling’). Two dimensions of empowerment (‘identity’ and ‘enabling’) and total empowerment showed acceptable levels of internal consistency. The measure showed relationships with external measures (including quality of chronic illness care, self-efficacy and educational qualifications) that were generally supportive of its construct validity. Conclusion: Initial analyses suggest that the new measure meets basic psychometric criteria and has potential for the measurement of patient empowerment in long-term conditions in primary care. The scale may have a role in research on quality of care for long-term conditions, and could function as a patient-reported outcome measure. However, further validation is required before more extensive use of the measure.
APA, Harvard, Vancouver, ISO, and other styles
46

Blix, Ellen. "INNKOMST-CTG. En vurdering av testens prediktive verdier, reliabilitet og effekt : Betydning for jordmødre i deres daglige arbeid." Doctoral thesis, Nordic School of Public Health NHV, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:norden:org:diva-3393.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Dhima, Julien. "Evolution des méthodes de gestion des risques dans les banques sous la réglementation de Bale III : une étude sur les stress tests macro-prudentiels en Europe." Thesis, Paris 1, 2019. http://www.theses.fr/2019PA01E042/document.

Full text
Abstract:
Our thesis explains, by introducing some theoretical elements, the imperfections of the EBA/ECB macro-prudential stress tests, and proposes a new methodology for their application as well as two complementary specific stress tests. We show that macro-prudential stress tests may be irrelevant when the two basic assumptions of the Gordy-Vasicek core model used to assess banks' regulatory capital under the internal ratings-based (IRB) approach for credit risk (an asymptotically granular credit portfolio and the presence of a single source of systematic risk, namely the macroeconomic conjuncture) are not respected. Firstly, there exist concentrated portfolios for which macro-stress tests are not sufficient to measure potential losses, or are even ineffective when these portfolios involve non-cyclical counterparties. Secondly, systematic risk can come from several sources; the current one-factor model prevents a proper pass-through of the "macro" shocks. We propose a specific credit stress test which makes it possible to capture the specific credit risk of a concentrated portfolio, and a specific liquidity stress test which makes it possible to measure the impact of specific liquidity shocks on the bank's solvency. We also propose a multifactorial generalization of the IRB regulatory capital valuation function, which allows the macro-stress-test shocks to be applied to each sectorial portfolio, stressing in a clear, precise and transparent way the systematic risk factors impacting it. This methodology allows a proper pass-through of these shocks to the conditional probability of default of the counterparties of these portfolios and therefore a better evaluation of the bank's capital charge.
APA, Harvard, Vancouver, ISO, and other styles
48

Hsu, Hung-Sheng, and 許鴻生. "Systematic Approach to Penetration Testing Mobile Devices." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/37790568365594569191.

Full text
Abstract:
Master's thesis, National Chiao Tung University, Institute of Electrical and Control Engineering, academic year 99. With the advance of modern technology, mobile devices and smart phones are widely deployed and ubiquitous anywhere at any time. Recently, mobile devices have been attacked because of their desktop-like operating systems; hence mobile devices face threats similar to those of desktop computers. Researchers have studied several techniques to penetrate and evaluate the security of a mobile device, but neither a systematic procedure nor a full solution based on these techniques has been provided. Such penetration tests examine the robustness of a computing device by launching or simulating attacks from the position of a potential attacker. This kind of testing technology has been widely adopted to ensure the security of a computing device or a network. To systematically analyze the security of a computing device, NIST and OISSG proposed several testing methodologies, including GNST, ISSAF, etc. However, these methodologies are not designed for a mobile device. In this paper, we modify ISSAF to support penetration tests for a mobile device. Based on this ISSAF modification (MoSAF), we design a nine-step assessment flow to systematically penetration-test a mobile device. In addition to the steps of external penetration, we also design internal vulnerability checks to statically and dynamically analyze mobile applications and configurations used on the mobile device. Taking Android mobile devices as testing targets, we conduct a series of experiments to demonstrate how MoSAF and its assessment flow can be used to provide a systematic and complete penetration test for a mobile device. Moreover, we analyze the insufficiency of existing penetration testing techniques for mobile devices and show the improvements after applying MoSAF and its assessment flow.
APA, Harvard, Vancouver, ISO, and other styles
49

Abdul, Khalek Shadi. "Systematic testing using test summaries : effective and efficient testing of relational applications." Thesis, 2011. http://hdl.handle.net/2152/ETD-UT-2011-12-4574.

Full text
Abstract:
This dissertation presents a novel methodology based on test summaries, which characterize desired tests as constraints written in a mixed imperative and declarative notation, for automated systematic testing of relational applications, such as relational database engines. The methodology has at its basis two novel techniques for effective and efficient testing: (1) mixed-constraint solving, which provides systematic generation of inputs characterized by mixed-constraints using translations among different data domains; and (2) clustered test execution, which optimizes execution of test suites by leveraging similarities in execution traces of different tests using abstract-level undo operations, which allow common segments of partial traces to be executed only once and the execution results to be shared across those tests. A prototype embodiment of the methodology enables a novel approach for systematic testing of commonly used database engines, where test summaries describe (1) input SQL queries, (2) input database tables, and (3) expected output of query execution. An experimental evaluation using the prototype demonstrates its efficacy in systematic testing of relational applications, including Oracle 11g, and finding bugs in them.
APA, Harvard, Vancouver, ISO, and other styles
50

Hwan, Hwang Gwan, and 黃冠寰. "A systematic parallel testing method for concurrent program." Thesis, 1993. http://ndltd.ncl.edu.tw/handle/22315907608667548436.

Full text
Abstract:
Master's thesis, National Chiao Tung University, Institute of Computer Science and Information Engineering, academic year 81. The validation process of a concurrent program includes the testing and debugging phases. Testing a concurrent program is the process of executing the program and then verifying the results to detect whether the program contains bugs. We suggest a systematic parallel testing method which can automatically test a concurrent program in parallel without repeating the same test. The scheme can even exhaust all the possible tests of a concurrent program. Also, it does not use any static analysis technique but works dynamically at run time; thus, it can reduce the static analysis overhead. Furthermore, this scheme is suitable for multiprocessor computer systems and distributed systems. It can speed up testing in the following ways. First, because our method can naturally work in parallel, users can spend less time on testing. Second, the duplicated computation of different tests can be eliminated. We describe a prototype implementation of our scheme on a Sequent Symmetry S27, a shared-bus multiprocessor computer system. In addition, we also create a virtual parallel processing environment to simulate an example.
APA, Harvard, Vancouver, ISO, and other styles