
Dissertations / Theses on the topic 'Software failures'


Consult the top 50 dissertations / theses for your research on the topic 'Software failures.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Hou, Wei. "Integrated reliability and availability analysis of networks with software failures and hardware failures." [Tampa, Fla.] : University of South Florida, 2003. http://purl.fcla.edu/fcla/etd/SFE0000173.

2

Hou, Wei. "Integrated Reliability and Availability Analysis of Networks With Software Failures and Hardware Failures." Scholar Commons, 2003. https://scholarcommons.usf.edu/etd/1393.

Abstract:
This dissertation explores efficient algorithms and engineering methodologies for analyzing the overall reliability and availability of networks, integrating software failures and hardware failures. Node failures, link failures, and software failures are concurrently and dynamically considered in networks with complex topologies. The MORIN (MOdeling Reliability for Integrated Networks) method is proposed and discussed as an approach for analyzing the reliability of integrated networks. A Simplified Availability Modeling Tool (SAMOT) is developed and introduced to evaluate and analyze the availability of networks consisting of software and hardware component systems with architectural redundancy. In this dissertation, relevant research efforts in analyzing network reliability and availability are reviewed and discussed, experimental results of the proposed MORIN methodology and the SAMOT application are provided, and recommendations for future research in network reliability are summarized.
3

Georgiadis, Ioannis. "Self-organising distributed component software architectures." Thesis, Imperial College London, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.396255.

4

Clause, James Alexander. "Enabling and supporting the debugging of software failures." Diss., Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/39514.

Abstract:
This dissertation evaluates the following thesis statement: Program analysis techniques can enable and support the debugging of failures in widely-used applications by (1) capturing, replaying, and, as much as possible, anonymizing failing executions and (2) highlighting subsets of failure-inducing inputs that are likely to be helpful for debugging such failures. To investigate this thesis, I developed techniques for recording, minimizing, and replaying executions captured from users' machines, anonymizing execution recordings, and automatically identifying failure-relevant inputs. I then performed experiments to evaluate the techniques in realistic scenarios using real applications and real failures. The results of these experiments demonstrate that the techniques can reduce the cost and difficulty of debugging.
5

Savor, Tony. "Automatic detection of software failures with hierarchical supervisors." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/nq22233.pdf.

6

Taing, Nguonly, Thomas Springer, Nicolás Cardozo, and Alexander Schill. "A Rollback Mechanism to Recover from Software Failures in Role-based Adaptive Software Systems." ACM, 2017. https://tud.qucosa.de/id/qucosa%3A75214.

Abstract:
Context-dependent applications are relatively complex due to their multiple variations caused by context activation, especially in the presence of unanticipated adaptation. Testing these systems is challenging, as it is hard to reproduce the same execution environments; software failures caused by bugs are therefore no exception. This paper presents a rollback mechanism to recover from software failures as part of a role-based runtime with support for unanticipated adaptation. The mechanism performs checkpoints before each adaptation and employs specialized sensors to detect bugs resulting from recent configuration changes. When the runtime detects a bug, it assumes that the bug belongs to the latest configuration. The runtime rolls back to the most recent checkpoint to recover and subsequently notifies the developer to fix the bug and re-apply the adaptation through unanticipated adaptation. We prototype the concept as part of our role-based runtime engine LyRT and demonstrate the applicability of the rollback recovery mechanism for unanticipated adaptation in erroneous situations.
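The checkpoint-and-rollback cycle described in the abstract can be pictured with a minimal Python sketch. The class and method names below are hypothetical and do not reflect LyRT's actual API; the bug sensor is reduced to a boolean predicate.

    import copy

    class AdaptiveRuntime:
        # Minimal sketch of checkpoint-and-rollback around role adaptations.
        def __init__(self, state):
            self.state = state
            self.checkpoints = []

        def adapt(self, adaptation, sensor):
            # Checkpoint the current configuration before every adaptation.
            self.checkpoints.append(copy.deepcopy(self.state))
            adaptation(self.state)
            if sensor(self.state):
                # Attribute the bug to the latest configuration: roll back
                # to the checkpoint and ask the developer to fix and re-apply.
                self.state = self.checkpoints.pop()
                raise RuntimeError("adaptation rolled back")

    rt = AdaptiveRuntime({"roles": ["logger"]})
    rt.adapt(lambda s: s["roles"].append("cache"), sensor=lambda s: False)
    print(rt.state)  # {'roles': ['logger', 'cache']}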
7

Huang, Bing. "Study of the impact of hardware failures on software reliability." College Park, Md. : University of Maryland, 2006. http://hdl.handle.net/1903/3853.

Abstract:
Thesis (Ph. D.) -- University of Maryland, College Park, 2006.
Thesis research directed by: Mechanical Engineering. Title from t.p. of PDF. Includes bibliographical references. Published by UMI Dissertation Services, Ann Arbor, Mich. Also available in paper.
8

Zhang, Xiaoni. "An Analysis of the Effect of Environmental and Systems Complexity on Information Systems Failures." Thesis, University of North Texas, 2001. https://digital.library.unt.edu/ark:/67531/metadc2857/.

Abstract:
Companies have invested large amounts of money in information systems development. Unfortunately, not all information systems developments are successful. Software project failure is frequent and lamentable. Surveys and statistical analyses underscore the severity and scope of software project failure. Limited research relates software structure to information systems failures. Systematic study of failure provides insights into the causes of IS failure; more importantly, it contributes to better monitoring and control of projects and enhances the likelihood of the success of management information systems. The underlying theories and literature that contribute to the construction of the theoretical framework come from general systems theory, complexity theory, and failure studies. One hundred COBOL programs from a single company are used in the analysis. The program log clearly documents the date, time, and reasons for changes to the programs. In this study the relationships among the variables of business requirements change, software complexity, program size and the error rate in each phase of the software development life cycle are tested, and interpretations of the hypothesis tests are provided. The data show that analysis errors and design errors occur more often than programming errors. Measurement criteria need to be developed at each stage of the software development cycle, especially the early stages, so that the quality and reliability of software can be improved continuously. The findings from this study suggest that it is imperative to develop an adaptive system that can cope with changes to the business environment. Further, management needs to focus on processes that improve the quality of the system design stage.
9

Aggarwal, Sonia. "State Intervention in the Indian Software Industry." Scholarship @ Claremont, 2012. http://scholarship.claremont.edu/cmc_theses/438.

Abstract:
India's meteoric economic growth rate has been a subject of much discussion since the country began its economic liberalization in the early 1990s. The software segment, in particular, is growing at a rate of 48.5 percent. The conventional wisdom argues that market forces have driven the success of India's software industry and, more broadly, of its information technology sector. This thesis marshals evidence for the role of the state in interaction with the software sector. More specifically, by discussing India's broad-scale import substitution industrialization efforts from the 1950s to 1991 and its transition to a more open economic structure, as well as more industry-specific policies within a theoretical context, this work attempts to identify the key driving forces and impact of government policy. Most works that have attempted to assess such state efforts have done so in a casual fashion, without linking the actions to carefully specified rationales for state intervention. This thesis specifies four plausible rationales for government intervention: market failures; government goals in promoting a domestic industry for national security and the state's role in international negotiations that might affect specific sectors; intervention driven by rent-seeking behavior on the part of private-sector actors; and state intervention to address previous government policies in a particular market that may be seen as inadequate or failures. It then empirically assesses the support for each of these claims in light of the evolution of the Indian software industry since its inception. In so doing, this work allows one to gauge the significant contributions of the state within a clear context of possible state roles. It also helps in understanding the software industry's current challenges and the possible future role of the state in the industry.
10

Sycofyllos, Nikolaos. "An Empirical Exploration in the Study of Software-Related Fatal Failures." Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-31939.

Abstract:
This thesis investigates and explores the subject of software-related fatal failures. In our technology-oriented society, deadly disasters due to software failures are not as uncommon as we might think. In recent years a large number of software-related fatal failures have been documented, although there have not been, as far as we are aware, any research studies trying to put those failures in the context of wider evidence. That fact motivated us to answer two research questions: how many lives have been lost through failures of software, and what is the nature of the main cause of software-related fatal failures. The aim of this thesis is to explore these questions, provide some empirical answers, and contribute to the knowledge of these failures. Our goal is to provide an empirical and conceptual basis for investigating fatal software failures that attempts to place these failure examples in a wider record. A similar study was conducted by Donald MacKenzie in the area of computer-related failures, but it does not directly answer our questions of interest and is somewhat outdated. Computer scientist Peter Neumann has done a lot of research in computer safety and is also the author of a wide collection of computer failure cases named "Risks to the public in computer and related systems", also called the "RISKS" reports. Those reports were the main source of our investigation, and answers were given based on data collected from them. The methodology used in this research was an exploratory systematic review. Starting off by defining software-related fatal failures (SRFF) and the inclusion criteria for the cases to be investigated allowed us to avoid misinterpretations and collect the data in a better way. We searched through the "RISKS" reports and collected cases according to our criteria. The final collected data was reviewed and analyzed, and the results were illustrated and presented in terms of tables, plots, charts and descriptive statistics. We found that, in the "RISKS" reports, over 2600 people have lost their lives due to software-related failures, and that the majority of those failures were caused by problematic user-software interaction. While answering our research questions we observed, based on the information related to fatal software failures, that the topic of SRFF is poorly investigated. Our research provides a good basis for future investigation and aims to trigger further research in the subject of software-related fatal failures.
11

Bowring, James Frederick. "Modeling and Predicting Software Behaviors." Diss., Georgia Institute of Technology, 2006. http://hdl.handle.net/1853/19754.

Abstract:
Software systems will eventually contribute to their own maintenance using implementations of self-awareness. Understanding how to specify, model, and implement software with a sense of self is a daunting problem. This research draws inspiration from the automatic functioning of a gimbal---a self-righting mechanical device that supports an object and maintains the orientation of this object with respect to gravity independently of its immediate operating environment. A software gimbal exhibits a self-righting feature that provisions software with two auxiliary mechanisms: a historical mechanism and a reflective mechanism. The historical mechanism consists of behavior classifiers trained on statistical models of data that are collected from executions of the program that exhibit known behaviors of the program. The reflective mechanism uses the historical mechanism to assess an ongoing or selected execution. This dissertation presents techniques for the identification and modeling of program execution features as statistical models. It further demonstrates how statistical machine-learning techniques can be used to manipulate these models and to construct behavior classifiers that can automatically detect and label known program behaviors and detect new unknown behaviors. The thesis is that statistical summaries of data collected from a software program's executions can model and predict external behaviors of the program. This dissertation presents three control-flow features and one value-flow feature of program executions that can be modeled as stochastic processes exhibiting the Markov property. A technique for building automated behavior classifiers from these models is detailed. Empirical studies demonstrating the efficacy of this approach are presented. The use of these techniques in example software engineering applications in the categories of software testing and failure detection is described.
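As a rough illustration of the modeling idea, execution event sequences can be treated as first-order Markov chains, one per known behavior, and a new execution labeled by likelihood. This toy Python sketch substitutes a maximum-likelihood rule for the dissertation's statistical machine-learning classifiers; the sequences and states are invented.

    import numpy as np

    def transition_matrix(sequences, n_states):
        # Estimate a first-order Markov transition matrix from event
        # sequences, with Laplace smoothing for unseen transitions.
        counts = np.ones((n_states, n_states))
        for seq in sequences:
            for a, b in zip(seq, seq[1:]):
                counts[a, b] += 1
        return counts / counts.sum(axis=1, keepdims=True)

    def classify(seq, models):
        # Label an execution by the behavior model with highest log-likelihood.
        def loglik(P):
            return sum(np.log(P[a, b]) for a, b in zip(seq, seq[1:]))
        return max(models, key=lambda label: loglik(models[label]))

    models = {  # states stand in for control-flow events from monitored runs
        "pass": transition_matrix([[0, 1, 0, 1, 2]], n_states=3),
        "fail": transition_matrix([[0, 2, 2, 2, 1]], n_states=3),
    }
    print(classify([0, 2, 2, 1], models))  # -> "fail"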
12

Rayas, Giancarlo. "Determinism in power signatures of electronics for health monitoring." To access this resource online via ProQuest Dissertations and Theses @ UTEP, 2008. http://0-proquest.umi.com.lib.utep.edu/login?COPT=REJTPTU0YmImSU5UPTAmVkVSPTI=&clientId=2515.

13

Grottke, Michael [Verfasser]. "Modeling Software Failures during Systematic Testing : The Influence of Environmental Factors / Michael Grottke." Aachen : Shaker, 2003. http://d-nb.info/1170541119/34.

14

Bihina, Bella Madeleine. "A near-miss analysis model for improving the forensic investigation of software failures." Thesis, University of Pretoria, 2014. http://hdl.handle.net/2263/56106.

Abstract:
The increasing complexity of software applications can lead to operational failures that have disastrous consequences. In order to prevent the recurrence of such failures, a thorough post-mortem investigation is required to identify the root causes involved. This root-cause analysis must be based on reliable digital evidence to ensure its objectivity and accuracy. However, current approaches to software failure analysis do not promote the collection of digital evidence for causal analysis. This leaves the system vulnerable to the recurrence of a similar failure. A promising alternative is offered by the field of digital forensics. Digital forensics uses proven scientific methods and principles of law to determine the cause of an event based on forensically sound evidence. However, being a reactive process, digital forensics can only be applied after the occurrence of costly failures. This limits its effectiveness as volatile data that could serve as potential evidence may be destroyed or corrupted after a system crash. In order to address this limitation of digital forensics, it is suggested that the evidence collection be started at an earlier stage, before the software failure actually unfolds, so as to detect the high-risk conditions that can lead to a major failure. These forerunners to failures are known as near misses. By alerting system users of an upcoming failure, the detection of near misses provides an opportunity to collect at runtime failure-related data that is complete and relevant. The detection of near misses is usually performed through electronic near-miss management systems (NMS). An NMS that combines near-miss analysis and digital forensics can contribute significantly to the improvement of the accuracy of the failure analysis. However, such a system is not available yet and its design still presents several challenges due to the fact that neither digital forensics nor near-miss analysis is currently used to investigate software failures and their existing methodologies and processes are not directly applicable to failure analysis. This research therefore presents the architecture of an NMS specifically designed to address the above challenges in order to facilitate the accurate forensic investigation of software failures. The NMS focuses on the detection of near misses at runtime with a view to maximising the collection of appropriate digital evidence of the failure. The detection process is based on a mathematical model that was developed to formally define a near miss and calculate its risk level. A prototype of the NMS has been implemented and is discussed in the thesis.
Thesis (PhD)--University of Pretoria, 2014.
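A toy sketch of the near-miss idea follows: a monitor computes a risk level from runtime metrics and starts collecting volatile evidence once the risk crosses a limit, before a crash can destroy it. The metrics, weights and threshold are hypothetical; the thesis defines its own formal model of a near miss and its risk level.

    def near_miss_risk(metrics, thresholds, weights):
        # Toy risk level: weighted closeness of each runtime metric to
        # its failure threshold, capped at 1.0 per metric.
        return sum(w * min(metrics[k] / thresholds[k], 1.0)
                   for k, w in weights.items())

    def monitor(metrics, thresholds, weights, risk_limit=0.8):
        risk = near_miss_risk(metrics, thresholds, weights)
        if risk >= risk_limit:
            # Near miss detected: snapshot volatile state as evidence now.
            return {"evidence": dict(metrics), "risk": round(risk, 3)}
        return None

    print(monitor({"mem": 0.95, "err": 0.90}, {"mem": 1.0, "err": 1.0},
                  {"mem": 0.6, "err": 0.4}))  # risk 0.93 -> evidence snapshot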
15

Dickman, Peter William. "Distributed object management in a non-small graph of autonomous networks with few failures." Thesis, University of Cambridge, 1991. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.357778.

16

Pascoulis, Christacis. "Understanding the successes, failures and limitations of adopting SSM in a software development environment." Thesis, University of Hertfordshire, 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.409471.

17

Hamadi, Hussein. "Fault-tolerant control of a multirotor unmanned aerial vehicle under hardware and software failures." Thesis, Compiègne, 2020. http://www.theses.fr/2020COMP2555.

Abstract:
The aim of this work is to propose mechanisms for multirotor drones that allow, on the one hand, faults on the drone to be tolerated and, on the other hand, the effects of outdoor wind to be taken into account. The faults targeted include actuator faults and sensor faults, but also software faults in the data fusion algorithms. In our work, we have developed a robust controller and an external disturbance observer capable of cooperating with the control reconfiguration method, to simultaneously tolerate motor failures and external wind disturbances through active fault tolerance techniques. We have also proposed a new actuator fault tolerance technique for a coaxial octorotor drone. This technique is based on a robust control law with reconfigurable gains, self-tuning sliding mode control (STSMC), in which the control gains are readjusted according to the detected error in order to maintain the stability of the system. Indoor experiments were conducted to demonstrate and compare our solution with two other fault tolerance techniques. The efficiency and behavior of each method were studied after successive fault injections into the actuators, and the main advantages and disadvantages of each method are deduced by analyzing the results obtained. Additionally, we propose an approach for tolerating faults in the drone's sensors and data fusion software, based on sensor redundancy and the diversification of software components.
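A minimal sketch of the self-tuning sliding mode idea, assuming a scalar sliding variable and a constant wind-like disturbance; the gains, dead band and toy dynamics are invented for illustration and are not the thesis's actual controller.

    import numpy as np

    def stsmc_step(s, K, gamma=5.0, dead=0.05, dt=0.01):
        # The switching gain K grows while the sliding variable s sits
        # outside the dead band, i.e. while the detected error persists.
        if abs(s) > dead:
            K += gamma * abs(s) * dt
        u = -K * np.sign(s)  # switching control action
        return u, K

    # Toy closed loop with error dynamics s' = disturbance + u.
    s, K = 1.0, 0.0
    for _ in range(500):
        u, K = stsmc_step(s, K)
        s += (2.0 + u) * 0.01  # constant disturbance of 2.0
    print(round(s, 2), round(K, 1))  # s settles near the dead band as K self-tunes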
18

Walia, Gursimran Singh. "Using error modeling to improve and control software quality an empirical investigation /." Diss., Mississippi State : Mississippi State University, 2009. http://library.msstate.edu/etd/show.asp?etd=etd-04032009-070637.

19

Walia, Gursimran Singh. "Empirical Validation of Requirement Error Abstraction and Classification: A Multidisciplinary Approach." MSSTATE, 2006. http://sun.library.msstate.edu/ETD-db/theses/available/etd-05152006-151903/.

Abstract:
Software quality and reliability is a primary concern for successful development organizations. Over the years, researchers have focused on monitoring and controlling quality throughout the software process by helping developers to detect as many faults as possible using different fault-based techniques. This thesis analyzes the software quality problem from a different perspective by taking a step back from faults to abstract the fundamental causes of faults. The first step in this direction is developing a process for abstracting errors from faults throughout the software process. I have described the error abstraction process (EAP) and used it to develop an error taxonomy for the requirement stage. This thesis presents the results of a study that uses techniques based on the error abstraction process and investigates their application to requirement documents. The initial results show promise and provide some useful insights. These results are important for our further investigation.
20

Williamson, Christopher Loyal. "A formal application of safety and risk assessment in software systems." Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2004. http://library.nps.navy.mil/uhtbin/hyperion/04Sep%5FWilliamson%5FPhD.pdf.

21

Mayo, Quentin R. "Detection of Generalizable Clone Security Coding Bugs Using Graphs and Learning Algorithms." Thesis, University of North Texas, 2018. https://digital.library.unt.edu/ark:/67531/metadc1404548/.

Abstract:
This research methodology isolates coding properties and identifies the probability of security vulnerabilities using machine learning and historical data. Several approaches characterize the effectiveness of detecting security-related bugs that manifest as vulnerabilities, but none utilize vulnerability patch information. The main contribution of this research is a framework to analyze LLVM Intermediate Representation code and merge core source code representations using source code properties. This research is beneficial because it allows source programs to be transformed into a graphical form from which users can extract specific code properties related to vulnerable functions. The result is an improved approach to detect, identify, and track software system vulnerabilities based on a performance evaluation. The methodology uses historical function-level vulnerability information, unique feature extraction techniques, a novel code property graph, and learning algorithms to minimize the amount of end-user domain knowledge necessary to detect vulnerabilities in applications. The analysis shows approximately 99% precision and recall in detecting known vulnerabilities in the National Institute of Standards and Technology (NIST) Software Assurance Metrics and Tool Evaluation (SAMATE) project. Furthermore, 72 percent of the historical vulnerabilities in the OpenSSL testing environment were detected using a linear support vector classifier (SVC) model.
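A small sketch of the classification step, using scikit-learn's LinearSVC on synthetic data that stands in for the dissertation's graph-derived function features; the features and labels below are invented.

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.svm import LinearSVC

    # Hypothetical feature matrix: one row per function, columns are counts
    # of code-property-graph features (e.g. risky calls, pointer arithmetic).
    X = np.random.RandomState(0).poisson(2.0, size=(200, 16))
    y = (X[:, 0] + X[:, 3] > 6).astype(int)  # stand-in "vulnerable" label

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    clf = LinearSVC(dual=False).fit(X_tr, y_tr)
    print("held-out accuracy:", clf.score(X_te, y_te))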
22

Baekken, Jon Swane. "A fault model for pointcuts and advice in AspectJ programs." Online access for everyone, 2006. http://www.dissertations.wsu.edu/Thesis/summer2006/J%5FBaekken%5F073106.pdf.

23

Gerardi, Marcelin, and Miki Namsrai. "A software system for variables comparison of a paper machine for improved performance." Thesis, Högskolan Dalarna, Energiteknik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:du-28781.

Abstract:
Today paper is found everywhere, and production factories must continually increase productivity to stay competitive. Stora Enso Kvarnsveden has one of the biggest magazine paper machines in the world, which produces around 1900 meters of paper per minute. The production process is highly automatized, which reduces the number of operators that work on the machine. Still, process variations can cause breaks in the paper web and lead to loss of income, energy and paper production. They may also have a direct impact on paper quality. This report focuses on the following question: how can the paper machine production process be kept under controlled conditions? To make a data analysis fully relevant, we need to use the most important variables of the machine. By analyzing these data, some unexpected behavior and variation of process values can be pointed out. The analyzing tool needs to be fast and portable, and therefore a software system has been developed. By comparing process data with reference data this software can perform a powerful analysis. The created software is intended to be used either by operators or engineers. The most important results are collected in a text file, to which the comparison function writes its results in CSV format. Furthermore, an auto-update function allows users to run it automatically. Graphical presentations support the interpretation of the results.
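A minimal sketch of the comparison step described above, assuming process values are checked against reference values and the results written in CSV format; the variable tags and tolerance are hypothetical.

    import csv

    def compare_to_reference(process, reference, tolerance=0.05):
        # Flag process variables whose relative deviation from the
        # reference value exceeds the tolerance.
        rows = []
        for tag, ref in reference.items():
            value = process[tag]
            deviation = abs(value - ref) / abs(ref)
            rows.append((tag, value, ref, round(deviation, 4),
                         deviation > tolerance))
        return rows

    with open("comparison.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["tag", "value", "reference", "deviation", "flagged"])
        writer.writerows(compare_to_reference({"speed": 1810.0, "steam": 4.7},
                                              {"speed": 1900.0, "steam": 4.5}))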
24

Burke, Patrick William. "A New Look at Retargetable Compilers." Thesis, University of North Texas, 2014. https://digital.library.unt.edu/ark:/67531/metadc699988/.

Abstract:
Consumers demand new and innovative personal computing devices every two years, when their cellular phone service contracts are renewed. Yet a two-year development cycle for the concurrent development of both hardware and software is nearly impossible. As more components and features are added to the devices, maintaining this two-year cycle with current tools will become commensurately harder. This dissertation delves into the feasibility of simplifying the development of such systems by employing heterogeneous systems on a chip in conjunction with a retargetable compiler such as the hybrid computer retargetable compiler (Hy-C). An example of a simple architecture description of sufficient detail for use with a retargetable compiler like Hy-C is provided. As a software engineer with 30 years of experience, I have witnessed numerous system failures. A plethora of software development paradigms and tools have been employed to prevent software errors, but none have been completely successful. Much of the discussion centers on software development in the military contracting market, as that is my background. The dissertation reviews those tools, as well as some existing retargetable compilers, in an attempt to determine how those errors occurred and how a system like Hy-C could assist in reducing future software errors. In the end, a simple retargetable solution like Hy-C is shown to be powerful enough to provide a very capable product in a fast-growing market.
25

Zumalde, Alex Ander Javarotti. "Avaliação comparativa entre técnicas de programação defensiva aplicadas a um sistema crítico simulado." Universidade de São Paulo, 2011. http://www.teses.usp.br/teses/disponiveis/3/3141/tde-05082011-142444/.

Abstract:
The introduction of software into safety-critical systems raises safety issues that for a long time fell predominantly on the development of the hardware composing such systems. Currently, standards related to software safety qualitatively assess the impact of its use on systems susceptible to failures of a random nature. The research developed here seeks, in addition to other previous investigations, to quantitatively evaluate several defensive programming techniques in terms of the safety they provide to error-tolerant safety-critical systems. As a key objective, we sought to evaluate the behavior acquired by an error-tolerant system when subjected to a software fault injection process. Error tolerance in the critical application under study is achieved through defensive programming techniques applied to the original software. Several defensive programming techniques, and several combinations of them, were applied, making it possible to quantitatively assess and identify possible patterns in the safety levels acquired in each case.
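As a toy illustration of evaluating a defensive programming technique by software fault injection, the sketch below flips random bits in a computed value and counts how often a plausibility check catches the corruption. The injector and the check are invented for illustration and are not the study's actual techniques.

    import random
    import struct

    def inject_fault(value, p=0.01):
        # Software-implemented fault injection: with probability p, flip
        # one mantissa bit of the float's binary representation.
        if random.random() < p:
            bits = struct.unpack("<Q", struct.pack("<d", value))[0]
            bits ^= 1 << random.randrange(52)
            value = struct.unpack("<d", struct.pack("<Q", bits))[0]
        return value

    def defensive_sqrt(x):
        # Defensive programming: plausibility-check the (possibly
        # corrupted) result before letting it propagate.
        y = inject_fault(x ** 0.5)
        if abs(y * y - x) > 1e-6 * max(1.0, x):
            raise ValueError("fault detected, result rejected")
        return y

    random.seed(1)
    detected = 0
    for _ in range(10000):
        try:
            defensive_sqrt(2.0)
        except ValueError:
            detected += 1
    # Roughly p * trials, minus low-order bit flips masked by the tolerance.
    print("faults detected:", detected)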
26

Kruger, Wandi. "Addressing application software package project failure : bridging the information technology gap by aligning business processes and package functionality." Thesis, Stellenbosch : Stellenbosch University, 2011. http://hdl.handle.net/10019.1/17868.

Abstract:
Thesis (MComm)--Stellenbosch University, 2011.
An application software package implementation is a complex endeavour, and as such it requires the proper understanding, evaluation and redefinition of the current business processes to ensure that the project delivers on the objectives set at its start. Numerous factors may contribute to the unsuccessful implementation of application software package projects. However, the most significant contributor to the failure of an application software package project lies in the misalignment of the organisation's business processes with the functionality of the application software package. Misalignment is attributed to a gap between the business processes of an organisation and the functionality the application software package offers to translate those business processes into digital form when the package is implemented and configured. This gap is commonly referred to as the information technology (IT) gap. The purpose of this assignment is to examine and discuss to what degree a supporting framework such as the PRojects IN Controlled Environments (PRINCE2) methodology assists in aligning the organisation's business processes with the functionality of the end product, since so many projects still fail even though such a framework is available to assist organisations with the implementation of application software packages. This assignment defines and discusses the IT gap, and identifies shortcomings and weaknesses in the PRINCE2 methodology that may contribute to misalignment between the business processes of the organisation and the functionality of the application software package. Shortcomings and weaknesses in the PRINCE2 methodology were identified by:
• preparing a matrix table summarising the reasons for application software package failures, based on a literature study;
• mapping the reasons from the literature study to those listed as reasons for project failure by the Office of Government Commerce (the publishers of the PRINCE2 methodology);
• mapping all the above reasons to the PRINCE2 methodology to determine whether they are adequately addressed in it.
The assignment concludes by proposing recommendations for aligning the business processes with the functionality of the application software package (addressing the IT gap), as well as recommendations for addressing the weaknesses identified in the PRINCE2 methodology. By adopting these recommendations in conjunction with the PRINCE2 methodology, proper alignment between business processes and the functionality of the application software package may be achieved, resulting in more successful application software package implementations.
27

Rossetto, Anubis Graciela de Moraes. "Impact FD : an unreliable failure detector based on process relevance and confidence in the system." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2016. http://hdl.handle.net/10183/150037.

Abstract:
Traditional unreliable failure detectors are per-process oracles that provide a list of processes suspected of having failed. This work proposes a new and flexible unreliable failure detector (FD), denoted the Impact FD, that outputs a trust level value which is the degree of confidence in the system. By expressing the relevance of each process by an impact factor value, as well as a margin of acceptable failures of the system, the Impact FD enables the user to tune the failure detection configuration in accordance with the requirements of the application: in some scenarios, the failure of low-impact or redundant processes does not jeopardize the confidence in the system, while the crash of a high-impact process may seriously affect it. Either a softer or a stricter monitoring strategy can be adopted. In particular, we define some flexibility properties that characterize the capacity of the Impact FD to tolerate a certain margin of failures or false suspicions, i.e., its capacity to provide different sets of responses that lead the system to trusted states. The Impact FD is suitable for systems that present node redundancy, heterogeneity of nodes, or a clustering feature, and that allow a margin of failures which does not degrade the confidence in the system. We also show that some classes of the Impact FD are equivalent to Ω and Σ, which are fundamental FDs for circumventing the impossibility of solving the consensus problem in asynchronous message-passing systems in the presence of failures. Additionally, based on different synchrony assumptions and message-pattern or timer-based approaches, we present three algorithms which implement the Impact FD. Performance evaluation results using real PlanetLab traces confirm the degree of flexible applicability of our failure detector and show that, due to the accepted margin of failures, false responses or suspicions may be tolerated when compared to traditional unreliable failure detectors.
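A minimal sketch of the trust level idea, assuming it is the summed impact factor of the currently trusted (unsuspected) processes, compared against a threshold expressing the acceptable margin of failures; the process names, impact factors and threshold are hypothetical, and the thesis's formal definitions are richer.

    def trust_level(impact, trusted):
        # Trust level: summed impact factor of the processes not suspected.
        return sum(impact[p] for p in trusted)

    def confidence_ok(impact, trusted, threshold):
        # The system state is trusted while the trust level stays at or
        # above the threshold (failures within the acceptable margin).
        return trust_level(impact, trusted) >= threshold

    impact = {"p1": 3, "p2": 3, "p3": 1, "p4": 1}
    print(confidence_ok(impact, {"p1", "p2", "p3"}, threshold=6))  # True
    print(confidence_ok(impact, {"p3", "p4"}, threshold=6))        # False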
28

Satin, Ricardo Francisco de Pierre. "Um estudo exploratório sobre o uso de diferentes algoritmos de classificação, de seleção de métricas, e de agrupamento na construção de modelos de predição cruzada de defeitos entre projetos." Universidade Tecnológica Federal do Paraná, 2015. http://repositorio.utfpr.edu.br/jspui/handle/1/2552.

Abstract:
Predicting defects in software projects is a complex task, especially for projects in the early stages of development, which often provide little data for building prediction models. Cross-project defect prediction is indicated in such situations because it allows data from similar projects to be reused. This work proposes an exploratory study on the use of different classification, feature selection, and clustering algorithms in building cross-project defect prediction models. The models were built using a performance measure, obtained by applying classification algorithms, as a way to find and group similar projects. To this end, the combined application of 8 classification algorithms, 6 feature selection approaches, and one clustering algorithm was studied on a data set of 1283 projects, resulting in the construction of 61584 different prediction models. The performance of the classification and feature selection algorithms was evaluated through different statistical tests, which showed that Naive Bayes was the best-performing classifier compared with the other 7 algorithms, and that the best-performing pair of feature selection algorithms was the CFS attribute evaluator with the Genetic Search method, compared with the other 6 pairs. Considering the clustering algorithm, the present proposal seems promising, since the results show evidence that predictions using clustering were better than predictions made without any similarity clustering, besides showing a decrease in training and testing cost during the prediction process.
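A small sketch of the cross-project setup, assuming a Naive Bayes classifier trained on a cluster of similar projects and applied to a target project without labeled history of its own; the data are synthetic stand-ins for real software metrics.

    import numpy as np
    from sklearn.naive_bayes import GaussianNB

    rng = np.random.RandomState(0)

    def make_project(n, shift):
        X = rng.normal(shift, 1.0, size=(n, 5))     # 5 code metrics per module
        y = (X.sum(axis=1) > 5 * shift).astype(int)  # stand-in defect label
        return X, y

    Xa, ya = make_project(150, 1.0)   # two "similar" source projects
    Xb, yb = make_project(150, 1.1)
    Xt, yt = make_project(80, 1.05)   # target project, treated as unlabeled

    # Cross-project prediction: train on the cluster of similar projects,
    # then predict defects in the target project.
    clf = GaussianNB().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))
    print("target accuracy:", clf.score(Xt, yt))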
29

Meros, Jader Elias. "Priorização de testes de sistema automatizados por meio de grafos de chamadas." Universidade Tecnológica Federal do Paraná, 2016. http://repositorio.utfpr.edu.br/jspui/handle/1/1849.

Abstract:
With the ever-growing need to speed up the delivery of new developments to the customer and to reduce application development time, test case prioritization enables faults in an application to be detected more quickly by ordering the test cases to be executed, which in turn allows the correction of these faults to start as soon as possible. However, when the test cases to be prioritized are automated system tests, traditional criteria used in the literature, such as code coverage or system models, become uninteresting, since this type of test case, being a black-box test, ignores how the application was coded or modeled. Considering the hypothesis that larger automated test cases exercise more parts of the application and that similar test cases may be testing the same area of the application, it seems valid to believe that executing system test cases with the most complex tests first can achieve better results than an unordered execution of the test cases. In this scenario, this work studies the use of the test cases' own call graphs as the prioritization criterion, giving higher priority to the execution of test cases with the largest number of nodes in their graphs. Through two case studies, the approach proposed in this work was shown to improve the fault detection rate compared to an unordered execution of the test cases. Furthermore, the proposed approach achieved results similar to traditional prioritization approaches that use application code coverage.
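A minimal sketch of the prioritization rule, approximating each test's call-graph size by counting distinct call targets in its source; a real implementation would build proper call graphs from the automated system tests, and the test bodies here are invented.

    import ast

    def call_graph_size(source):
        # Crude stand-in for call-graph size: number of distinct call
        # targets appearing in the test's source code.
        tree = ast.parse(source)
        calls = {ast.dump(node.func) for node in ast.walk(tree)
                 if isinstance(node, ast.Call)}
        return len(calls)

    tests = {
        "test_login": "def test_login():\n open_page()\n fill_form()\n submit()\n assert_ok()",
        "test_ping": "def test_ping():\n assert_ok()",
    }

    # Execute the structurally larger (presumably more complex) tests first.
    order = sorted(tests, key=lambda t: call_graph_size(tests[t]), reverse=True)
    print(order)  # ['test_login', 'test_ping']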
30

Whittington, William Grant. "Cooperative control of systems with variable network topologies." Diss., Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/49107.

Abstract:
Automation has become increasingly prevalent in all parts of society. Activities that are too difficult or too dangerous for a human can be done by machines, which do not share those downsides; in addition, tasks can be scheduled more precisely and accurately. Increases in autonomy have enabled a new level of tasks which are completed by teams of automated agents rather than a single one, called cooperative control. This has many benefits, but comes at the cost of increased complexity and coordination. The main thrust of research in this field is problem-based, treating communication issues as a secondary feature. There is a gap concerning problems in which changes occur as rapidly as communication, and the issues that arise as a result; this is the main motivation. This research presents an approach to cooperative control in highly variable systems and tackles some of the issues present in such a system. One of the most important issues is the communication network itself, which is used as an indicator of how healthy the system is and how well it may react to future changes. Using the network as an input to control therefore allows the system to navigate between conservative and aggressive techniques to improve performance while still maintaining robustness. Results are based on a test bed designed to simulate a wide variety of problem types based on network type, number of actors, frequency of changes, impact of changes and method of change. The developed control method is compared to a baseline case ignoring cooperation as well as an idealized case assuming perfect system knowledge. The baseline sacrifices coordination to achieve a high level of robustness at reduced performance, while the idealized case represents the best possible performance. The control techniques developed give performance at least as good as the baseline case, if not better, for all simulations.
31

Bolchoz, John Manning. "The identification of software failure regions." Thesis, Monterey, California: Naval Postgraduate School, 1990. http://hdl.handle.net/10945/27720.

Abstract:
Approved for public release; distribution is unlimited.
In these days of spiralling software costs and the proliferation of computers, software testing during development is now recognized as a critical aspect of the software engineering process, an aspect that must be improved in terms of cost and timeliness. This thesis describes one method that may guide software testing by analyzing the regions of input associated with each fault as it is detected. These software failure regions are defined and a method of failure region analysis is described in detail. The thesis describes how this analysis may be used to detect non-obviously redundant test cases. A preliminary examination of the manual analysis method is performed with a set of programs from a prior reliability experiment. Based on faults discovered during the previous experiment, this thesis defines the reachability conditions, the error generation conditions, and the conditions in which an error is not masked by later processing. The manual analysis of failure regions can be a difficult process, with difficulty dependent on program size, program complexity, and the size of the input data space. Program constructs and events that simplify the analysis process are also described. The thesis explains variable communication and the effects of vertical and horizontal contamination. The thesis also describes the indirect benefits of performing failure region analysis. Finally, there are several open questions raised by this research, and these questions are presented as ideas for future research.
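A toy illustration of the concept: a seeded fault induces a region of the input space, and any additional test case falling inside an already-identified region is redundant with respect to revealing that fault. The faulty function below is invented.

    def faulty_abs(x):
        # Seeded fault: inputs in [-3, 0) are not negated.
        return x if x >= -3 else -x  # correct code: x if x >= 0 else -x

    def failure_region(inputs):
        # The failure region: inputs for which the fault is reached, an
        # error is generated, and the error is not masked in the output.
        return [x for x in inputs if faulty_abs(x) != abs(x)]

    print(failure_region(range(-10, 11)))  # [-3, -2, -1]: further test
    # cases inside this region cannot reveal anything new about this fault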
32

Mutha, Chetan V. "Software Fault Propagation And Failure Analysis For UML Based Software Design." The Ohio State University, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=osu1404305866.

33

Wei, Yuan. "A study of software input failure propagation mechanisms." College Park, Md. : University of Maryland, 2006. http://hdl.handle.net/1903/4250.

Abstract:
Thesis (Ph. D.) -- University of Maryland, College Park, 2006.
Thesis research directed by: Reliability Engineering. Title from t.p. of PDF. Includes bibliographical references. Published by UMI Dissertation Services, Ann Arbor, Mich. Also available in paper.
34

Gardiner, J. "Delayed failure of software components using stochastic testing." Thesis, Cranfield University, 2012. http://dspace.lib.cranfield.ac.uk/handle/1826/7301.

Abstract:
The present research investigates the delayed failure of software components and addresses the problem that the conventional approach to software testing is unlikely to reveal this type of failure. Delayed failure is defined as a failure that occurs some time after the condition that causes the failure, and is a consequence of long-latency error propagation. This research seeks to close a perceived gap between academic research into software testing and industrial software testing practice by showing that stochastic testing can reveal delayed failure, and supporting this conclusion by a model of error propagation and failure that has been validated by experiment. The focus of the present research is on software components described by a request-response model. Within this conceptual framework, a Markov chain model of error propagation and failure is used to derive the expected delayed failure behaviour of software components. Results from an experimental study of delayed failure of DBMS software components MySQL and Oracle XE using stochastic testing with random generation of SQL are consistent with expected behaviour based on the Markov chain model. Metrics for failure delay and reliability are shown to depend on the characteristics of the chosen experimental profile. SQL mutation is used to generate negative as well as positive test profiles. There appear to be few systematic studies of delayed failure in the software engineering literature, and no studies of stochastic testing related to delayed failure of software components, or specifically to delayed failure of DBMS. Stochastic testing is shown to be an effective technique for revealing delayed failure of software components, as well as a suitable technique for reliability and robustness testing of software components. These results provide a deeper insight into the testing technique and should lead to further research. Stochastic testing could provide a dependability benchmark for component-based software engineering.
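The request-response view of delayed failure can be sketched as a small Markov chain in which an error first becomes latent and only later propagates to a visible failure. The states and transition probabilities below are invented for illustration, not fitted to MySQL or Oracle XE.

    import random

    # States: OK, LATENT (error stored but not yet visible), FAILED.
    P = {
        "OK":     {"OK": 0.98, "LATENT": 0.02, "FAILED": 0.00},
        "LATENT": {"OK": 0.00, "LATENT": 0.95, "FAILED": 0.05},
    }

    def failure_delay(rng):
        # Count requests from the one that planted the latent error until
        # the failure becomes visible: the delay of the failure.
        state, t, planted = "OK", 0, None
        while state != "FAILED":
            if state == "LATENT" and planted is None:
                planted = t
            r, acc = rng.random(), 0.0
            for nxt, p in P[state].items():
                acc += p
                if r < acc:
                    state = nxt
                    break
            t += 1
        return t - planted

    rng = random.Random(42)
    delays = [failure_delay(rng) for _ in range(2000)]
    print("mean failure delay:", sum(delays) / len(delays))  # about 1/0.05 = 20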
APA, Harvard, Vancouver, ISO, and other styles
35

Hu, Stanley 1978. "Fast failure detection in distributed software radio applications." Thesis, Massachusetts Institute of Technology, 2001. http://hdl.handle.net/1721.1/86710.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Salako, Kizito Oluwaseun. "Extension to models of coincident failure in multiversion software." Thesis, City University London, 2012. http://openaccess.city.ac.uk/1302/.

Full text
Abstract:
Fault-tolerant architectures for software-based systems have been used in various practical applications, including flight control systems for commercial airliners (e.g. AIRBUS A340, A310) as part of an aircraft's so-called fly-by-wire flight control system [1], the control systems for autonomous spacecraft (e.g. the Cassini-Huygens Saturn orbiter and probe) [2], rail interlocking systems [3] and nuclear reactor safety systems [4, 5]. The use of diverse, independently developed, functionally equivalent software modules in a fault-tolerant configuration has been advocated as a means of achieving highly reliable systems from relatively less reliable system components [6, 7, 8, 9]. In this regard it had been postulated that [6] "The independence of programming efforts will greatly reduce the probability of identical software faults occurring in two or more versions of the program." Experimental evaluation demonstrated that, despite the independent creation of such versions, positive failure correlation between the versions can be expected in practice [10, 11]. The conceptual models of Eckhardt et al [12] and Littlewood et al [13], referred to as the EL model and LM model respectively, were instrumental in pointing out sources of uncertainty that determine both the size and sign of such failure correlation. In particular, there are two important sources of uncertainty. The process of developing software: given sufficiently complex system requirements, the particular software version that will be produced from such a process is not known with certainty; consequently, complete knowledge of what the failure behaviour of the software will be is also unknown. The occurrence of demands during system operation: during system operation it may not be certain which demand a system will receive next from the environment. To explain failure correlation between multiple software versions the EL model introduced the notion of difficulty: that is, given a demand that could occur during system operation, there is a chance that a given software development team will develop a software component that fails when handling that demand as part of the system. A demand with an associated high probability of the developed software failing to handle it correctly is considered a "difficult" demand for a development team; a low probability of failure would suggest an "easy" demand. In the EL model different development teams, even when isolated from each other, are identical in how likely they are to make mistakes while developing their respective software versions. Consequently, despite the teams possibly creating software versions that fail on different demands, in developing their respective versions the teams find the same demands easy and the same demands difficult. The implication of this is that the versions developed by the teams do not fail independently; if one observes the failure of one team's version, this could indicate that the version failed on a difficult demand, thus increasing one's expectation that the second team's version will also fail on that demand. Succinctly put, due to correlated "difficulties" between the teams across the demands, "independently developed software cannot be expected to fail independently". The LM model takes this idea a step further by illustrating, under rather general practical conditions, that negative failure correlation is also possible, because the teams may be sufficiently diverse in which demands they find "difficult". This in turn implies better reliability than would be expected under naive assumptions of failure independence between software modules built by the respective teams. Although these models provide such insight they also pose questions yet to be answered.
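The central identities of the two models can be stated compactly; the notation below (difficulty function θ, demand distribution Q) is the standard one from the literature rather than quoted from the thesis.

```latex
% EL model: two versions drawn independently from the SAME process;
% theta(x) = probability that a randomly developed version fails on demand x.
% LM model: versions drawn from two DIFFERENT processes A and B.
\begin{align*}
\text{EL (same process):}\quad
P(\text{both fail}) &= \mathbb{E}_{Q}\big[\theta(X)^{2}\big]
  = \big(\mathbb{E}_{Q}[\theta(X)]\big)^{2} + \mathrm{Var}_{Q}\big(\theta(X)\big)
  \;\ge\; \big(\mathbb{E}_{Q}[\theta(X)]\big)^{2},\\[4pt]
\text{LM (two processes):}\quad
P(\text{both fail}) &= \mathbb{E}_{Q}\big[\theta_{A}(X)\,\theta_{B}(X)\big]
  = \mathbb{E}_{Q}[\theta_{A}]\,\mathbb{E}_{Q}[\theta_{B}]
    + \mathrm{Cov}_{Q}\big(\theta_{A}(X),\theta_{B}(X)\big).
\end{align*}
```

The variance term is why "independent development" still yields positive failure correlation in the EL model, while the LM covariance can be negative when the two development processes find different demands difficult.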
APA, Harvard, Vancouver, ISO, and other styles
37

Shu, Gang. "Statistical Estimation of Software Reliability and Failure-causing Effect." Case Western Reserve University School of Graduate Studies / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=case1405509796.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Špinar, Marek. "Ověření provozní výkonnosti a optimalizace FVE." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2016. http://www.nusl.cz/ntk/nusl-241950.

Full text
Abstract:
The Master's thesis deals with the operational performance of two photovoltaic power plants. It covers the history of photovoltaics, the photovoltaic effect, the materials used, and the production technology of the most widely used material in the PV industry, silicon. The basic components and parameters of a photovoltaic power plant are described. The thesis also addresses how initial and periodic inspections can be performed in accordance with the relevant directives, and presents methods for diagnosing potential failures, measurement techniques, and an examination of the monitoring system. The practical part focuses on measuring and comparing the operational performance of FVE Kurdějov and FVE Šakvice II, calculated from exported data for the years 2014 and 2015. The thesis also includes measurements of each string connected to the inverters installed at the plant, identifying strings with decreased operational performance; based on these results, recommendations for optimization and performance improvement were formulated. The final part presents software for simulating a photovoltaic power plant, which calculates the potential energy that could be produced in a day from the available data export, with the calculation defined by assigned parameters.
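The abstract does not spell out how operational performance was computed; a common choice for such comparisons is the IEC 61724 performance ratio, sketched below with hypothetical daily values (the function name and numbers are illustrative, not from the thesis).

```python
# Sketch: daily performance ratio PR = (E_AC / P_STC) / (H_POA / G_STC),
# the IEC 61724 definition -- an assumption, not necessarily the thesis's metric.

G_STC = 1000.0  # W/m^2, irradiance at standard test conditions

def performance_ratio(e_ac_kwh: float, p_stc_kwp: float, h_poa_kwh_m2: float) -> float:
    """PR = final yield / reference yield for one day."""
    final_yield = e_ac_kwh / p_stc_kwp                  # kWh produced per kWp installed
    reference_yield = h_poa_kwh_m2 / (G_STC / 1000.0)   # equivalent full-sun hours
    return final_yield / reference_yield

# Hypothetical day: 520 kWh exported by a 120 kWp plant under 5.1 kWh/m^2 irradiation.
print(round(performance_ratio(520.0, 120.0, 5.1), 3))   # ~0.85
```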
APA, Harvard, Vancouver, ISO, and other styles
39

Hashmi, Mazhar Tajammal. ""High I.T. Failure Rate : A Management Prospect"." Thesis, Blekinge Tekniska Högskola, Sektionen för management, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-5255.

Full text
Abstract:
The software industry is growing day by day, and software is becoming more complex and diverse, with increasing cost and rate of failure. This growth in the size and complexity of software projects has a negative impact on software productivity, schedules, and effort. Organizations demand high-quality products to increase their productivity and profits, yet commonly face serious problems even after spending large sums of money. This is an alarming situation, and the parties concerned should take effective steps to resolve the problem of software project failure. This study revolves around the core issue of finding the root causes of software project failure with respect to organizational factors, investigated through a literature review and a questionnaire survey. One or several factors may be responsible for software project failure; important causes identified from the literature and the empirical study are presented in chapter two, validated through the questionnaire survey in chapter four, and followed by a comprehensive analysis of the data gathered from respondents and a detailed discussion of the survey results. A project is considered a failure when it does not show the anticipated results, which happens when the team cannot fulfil the project's requirements, for example by overrunning time, overrunning resources, or failing to conform to the initial requirements specification.
The study also touches on Information Technology with respect to management's role in software project development, discussing how management can contribute to defining, measuring, controlling, and implementing software projects. IT plays a vital role in today's organizations competing at the global level, but its effective use is only possible if IT is linked to organizational goals. Business and IT managers need to learn how to measure, manage, and justify technology as a business matter; the ideal organization is one that values collaboration, openness, and communication. Software failure is the biggest challenge faced by IT as well as business people, and is therefore an important issue for software development firms as well as buyer and user firms; there is a strong need to find the root causes of software project failure and mitigate them. The insight gained through this research forms the basis for the proposed solution to the software failure problem, presented briefly in chapter six (Conclusion and Recommendations). The contribution of the research is twofold: first, it will be helpful for software development professionals and companies; second, it will be helpful for decision makers and user organizations, especially when they are buying or implementing a software project to enhance their productivity.
APA, Harvard, Vancouver, ISO, and other styles
40

Hilaris, Alexander E. "An empirical approach to logical clustering of software failure regions." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 1994. http://handle.dtic.mil/100.2/ADA279863.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Camara, Louis Richard. "Statistical modeling and assessment of software reliability." [Tampa, Fla] : University of South Florida, 2006. http://purl.fcla.edu/usf/dc/et/SFE0001699.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Mutha, Chetan V. "Software fault failure and error analysis at the early design phase with UML." The Ohio State University, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=osu1296597871.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Rößler, Sebastian Jeremias [Verfasser], and Andreas [Akademischer Betreuer] Zeller. "From software failure to explanation / Sebastian Jeremias Rößler. Betreuer: Andreas Zeller." Saarbrücken : Saarländische Universitäts- und Landesbibliothek, 2013. http://d-nb.info/1053634641/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Ginn, Lelon Levoy. "An empirical approach to analysis of similarities between software failure regions." Thesis, Monterey, California. Naval Postgraduate School, 1991. http://hdl.handle.net/10945/28159.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Saxton, Dominic Martinelli. "Relationship Between Software Development Team Structure, Ambiguity, Volatility, and Project Failure." ScholarWorks, 2018. https://scholarworks.waldenu.edu/dissertations/6277.

Full text
Abstract:
Complex environments like the United States Air Force's advanced weapon systems are highly reliant on externally developed software, which is often delivered late, over budget, and with fewer benefits than expected. Grounded in Galbraith's organizational information processing theory, the purpose of this correlational study was to examine the relationship between software development team structure, ambiguity, volatility and software project failure. Participants included 23 members of the Armed Forces Communications and Electronics Association in the southeastern United States who completed 4 project management surveys. Results of multiple regression analysis indicated the model as a whole was able to predict software project failure, F(3, 19) = 10.838, p < .001, R² = 0.631. Software development team structure was the only statistically significant predictor, t = 2.762, p = .012. Implications for positive social change include the potential for software development company owners and military leaders to understand the factors that influence software project success and to develop strategies to enhance software development team structure.
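For readers who want to reproduce this kind of analysis, a minimal sketch with statsmodels follows; the file and column names are hypothetical stand-ins for the study's survey scales, not its actual data.

```python
# Sketch of the reported analysis type: OLS regression of project failure on
# team structure, ambiguity, and volatility. File and column names are hypothetical.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("survey_responses.csv")  # hypothetical file of survey scores
X = sm.add_constant(df[["team_structure", "ambiguity", "volatility"]])
model = sm.OLS(df["project_failure"], X).fit()

print(model.summary())  # reports the F-statistic, p-values, and R-squared
```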
APA, Harvard, Vancouver, ISO, and other styles
46

Scott, Hanna. "Towards a Framework for Fault and Failure Prediction and Estimation." Licentiate thesis, Karlskrona : Department of Systems and Software Engineering, School of Engineering, Blekinge Institute of Technology, 2008. http://www.bth.se/fou/Forskinfo.nsf/allfirst2/46bd1c549ac32f74c12574c100299f82?OpenDocument.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Noor, Tanzeem Bin. "A Similarity-based Test Case Quality Metric using Historical Failure Data." IEEE, 2015. http://hdl.handle.net/1993/31045.

Full text
Abstract:
A test case is a set of input data and expected output, designed to verify whether the system under test satisfies all requirements and works correctly. An effective test case reveals a fault when the actual output differs from the expected output (i.e., the test case fails). The effectiveness of test cases is estimated using quality metrics, such as code coverage, size, and historical fault detection. Prior studies have shown that previously failing test cases are highly likely to fail again in the next releases; therefore, they are ranked higher. However, in practice, a failing test case may not be exactly the same as a previously failed test case, but quite similar. In this thesis, I have defined a metric that estimates test case quality using its similarity to previously failing test cases. Moreover, I have evaluated the effectiveness of the proposed test quality metric through a detailed empirical study.
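The abstract does not name the similarity measure, so the sketch below assumes a simple Jaccard similarity over tokenized test inputs: a test case's quality score is its maximum similarity to any historically failing test case. The thesis's actual measure may differ.

```python
# Sketch of a similarity-based test quality score (assumed Jaccard similarity
# over whitespace-separated input tokens; illustrative only).

def jaccard(a: set[str], b: set[str]) -> float:
    return len(a & b) / len(a | b) if (a | b) else 0.0

def quality_score(test_input: str, failing_history: list[str]) -> float:
    """Max similarity of this test to any previously failing test case."""
    tokens = set(test_input.split())
    return max((jaccard(tokens, set(h.split())) for h in failing_history), default=0.0)

history = ["login user=admin pw=''", "login user=guest pw=null"]
print(quality_score("login user=admin pw=null", history))  # 0.5, a relatively high score
```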
February 2016
APA, Harvard, Vancouver, ISO, and other styles
48

Andrews, Michael McMillan. "Knowledge-based debugging : matching program behaviour against known causes of failure." Thesis, University of Kent, 2003. https://kar.kent.ac.uk/14017/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Dodd, Sean. "The role and value of ethical frameworks in software development." Thesis, Brunel University, 2003. http://bura.brunel.ac.uk/handle/2438/5305.

Full text
Abstract:
Software development is notorious for failure, typically defined as over budget, late delivery and/or poor quality of new information systems (IS) on project completion. The consequences of such failure can be enormous, particularly financially. As such, there is consensus among practitioners and academics alike that this level of failure is unacceptable. Yet with a variety of accepted development methods and tools available for use by software developers and project managers, there is still no significant reduction in the size or frequency of failure reported. In an attempt to understand the conflicts which arise in the development environment in which developers and project managers must operate, the research area is the role and value of ethics in the development of managed software projects. A definition of ethics in this context was provided by the IEEE/ACM Code of Ethics. Research was additionally conducted to understand how other professions and business areas define and enforce ethics in their respective working environments: (UK) Law, Finance, Retail, and law practice in the European Union. Interpretive research was then conducted to enable software development practices to be understood from the view of developers and project managers in industry. Unethical practices were then identified in a large IT company based in west London via a single, six-month in-depth case study, with the data collected analysed via a series of repertory grids. Analysis and triangulation of the data collected via interviews, document analysis and observations led to an improved understanding of the causes of the unethical practices found. Conclusions and recommendations are then provided relating to implications for (a) the company participating in the research, (b) the application of the IEEE/ACM Code in industry, and (c) theory for ethicists.
APA, Harvard, Vancouver, ISO, and other styles
50

Persinger, Arnold Ralph. "A prototype industrial maintenance software system to apply a proactive approach to equipment failure." [Denver, Colo.] : Regis University, 2005. http://165.236.235.140/lib/APersinger2005.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles