
Journal articles on the topic 'Software failures'

Listed below are the top 50 journal articles for research on the topic 'Software failures', with abstracts included where available in the metadata.

1. Yakovyna, V. S. "Software failures prediction using RBF neural network." Odes’kyi Politechnichnyi Universytet. Pratsi, no. 2 (June 15, 2015): 111–18. http://dx.doi.org/10.15276/opu.2.46.2015.20.

2. Munson, John C. "Software faults, software failures and software reliability modeling." Information and Software Technology 38, no. 11 (November 1996): 687–99. http://dx.doi.org/10.1016/0950-5849(96)01117-2.

3. Peaker, B. "Review: Software Development Failures." Computer Bulletin 46, no. 4 (July 1, 2004): 30–31. http://dx.doi.org/10.1093/combul/46.4.30-c.

4. Neil, Martin. "Are software failures deterministic?" Information and Software Technology 39, no. 3 (March 1997): 217–19. http://dx.doi.org/10.1016/s0950-5849(96)01146-9.

5. Hatton, L. "Software failures: follies and fallacies." IEE Review 43, no. 2 (March 1, 1997): 49–52. http://dx.doi.org/10.1049/ir:19970201.

6. Simpson, Roy L. "Learning from Software Development Failures." Nursing Management (Springhouse) 23, no. 9 (September 1992): 30–32. http://dx.doi.org/10.1097/00006247-199209000-00017.

7. Perrow, Charles. "Software Failures, Security, and Cyberattacks." TATuP - Zeitschrift für Technikfolgenabschätzung in Theorie und Praxis 20, no. 3 (November 1, 2011): 41–46. http://dx.doi.org/10.14512/tatup.20.3.41.
8. Zhu, Mengmeng, and Hoang Pham. "A Novel System Reliability Modeling of Hardware, Software, and Interactions of Hardware and Software." Mathematics 7, no. 11 (November 4, 2019): 1049. http://dx.doi.org/10.3390/math7111049.

Abstract: In the past few decades, a great number of hardware and software reliability models have been proposed to address hardware failures in hardware subsystems and software failures in software subsystems, respectively. The interactions between hardware and software subsystems are often neglected to simplify reliability modeling, and hence most existing reliability models assume that the hardware and software subsystems are independent of each other. However, this may not be true in reality. In this study, system failures are classified into three categories: hardware failures, software failures, and hardware-software interaction failures. The main contribution of the research is the further classification of hardware-software interaction failures into two groups: software-induced hardware failures and hardware-induced software failures. A Markov-based unified system reliability model incorporating all three categories of system failures is developed, providing a novel and practical perspective for defining system failures and further improving reliability prediction accuracy. A numerical example compares system reliability estimates from models with and without hardware-software interactions, and further examples illustrate the impact of changes in the transition parameters on the predicted system reliability.
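Zhu and Pham's three-way failure classification lends itself to a small state-based illustration. The sketch below is not the paper's model (which also covers the induced-failure subcategories and repair); it is a minimal continuous-time Markov chain with one operational state and three absorbing failure states, and all rates are hypothetical:

```python
import numpy as np

# States: 0 = operational, 1 = hardware failure, 2 = software failure,
# 3 = hardware-software interaction failure (all failure states absorbing).
# The rates are hypothetical, chosen only to illustrate the computation.
lam_hw, lam_sw, lam_int = 1e-4, 5e-4, 1e-5   # failures per hour

Q = np.array([
    [-(lam_hw + lam_sw + lam_int), lam_hw, lam_sw, lam_int],
    [0.0, 0.0, 0.0, 0.0],
    [0.0, 0.0, 0.0, 0.0],
    [0.0, 0.0, 0.0, 0.0],
])

def transition_matrix(Q, t, terms=200):
    """P(t) = exp(Q t), computed by uniformization (a numerically stable series)."""
    q = max(-Q.diagonal())            # uniformization rate
    M = np.eye(len(Q)) + Q / q        # discrete-time stochastic matrix
    P = np.zeros_like(M)
    term = np.exp(-q * t) * np.eye(len(Q))
    for k in range(terms):
        P += term
        term = term @ M * (q * t / (k + 1))
    return P

def reliability(t):
    """Probability that the system is still operational at time t."""
    return transition_matrix(Q, t)[0, 0]

print(reliability(1000.0))  # survival probability after 1000 hours
```

With absorbing failure states this reduces to exp(-(λ_hw + λ_sw + λ_int)·t); the same uniformization routine also handles the repairable, non-absorbing chains a full model would use.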
9. Li, Yifan, Hong-Zhong Huang, and Tingyu Zhang. "Reliability Analysis of C4ISR Systems Based on Goal-Oriented Methodology." Applied Sciences 11, no. 14 (July 8, 2021): 6335. http://dx.doi.org/10.3390/app11146335.

Abstract: Command and control (C4ISR) systems are typical integrated systems comprising both software and hardware. Their failures result from complicated common-cause failures and common (or shared) signals, which render classical reliability analysis methods inapplicable. To this end, this paper applies the Goal-Oriented (GO) methodology to a detailed reliability analysis of a C4ISR system. The reliability and failure probability of the C4ISR system are obtained from the constructed GO model, and at the component level the reliability of individual units is computed. An importance analysis, based on the qualitative analysis capability of the GO model, identifies critical hardware failures, such as communication module and motherboard module failures, as well as critical software failures, such as network module application software and decompression module software failures. The method contributes to the reliability analysis of integrated hardware-software systems in general.
10. Schneidewind, Norman. "Applying Neural Networks to Software Reliability Assessment." International Journal of Reliability, Quality and Safety Engineering 17, no. 04 (August 2010): 313–29. http://dx.doi.org/10.1142/s0218539310003834.

Abstract: We adapt concepts from the field of neural networks to assess the reliability of software, employing cumulative failures, reliability, remaining failures, and time-to-failure metrics. In addition, the risk of not achieving reliability, remaining-failure, and time-to-failure goals is assessed. The purpose of the assessment is to compare a criterion derived from a neural network model for estimating the parameters of software reliability metrics with the method of maximum likelihood estimation. To our surprise, the neural network method proved superior for all the reliability metrics assessed, by virtue of yielding lower prediction error and risk. We also found that considerable adaptation of the neural network model was necessary for it to be meaningful for our application: only inputs, functions, neurons, weights, activation units, and outputs were required to characterize it.
11. Kim, Youn Su, Kwang Yoon Song, Hoang Pham, and In Hong Chang. "A Software Reliability Model with Dependent Failure and Optimal Release Time." Symmetry 14, no. 2 (February 8, 2022): 343. http://dx.doi.org/10.3390/sym14020343.

Abstract: In the past, because computer programs were restricted to simple functions, dependence on software was limited, and failures caused relatively small losses. With the development of the software market, however, dependence on software has increased considerably, and software failures can cause significant social and economic losses. Earlier software reliability studies assumed that software failures occur independently; as software systems become extremely large and complex, their failures are increasingly interdependent. In this study, therefore, a software reliability model is developed under the assumption that software failures occur in a dependent manner. We derive the model through the number of software failures and a fault detection rate that assumes point symmetry. The proposed model shows good performance compared with 21 previously developed software reliability models on three datasets under 11 criteria. In addition, a cost model based on the developed reliability model is presented to find the optimal release time. To determine this release time, each of the four parameters of the reliability model was varied by 10%; comparing the resulting changes in the cost model and the optimal release time shows that parameter b has the greatest influence.
12. Poston, R. M., and M. W. Bruen. "Counting Down to Zero Software Failures." IEEE Software 4, no. 5 (September 1987): 54–61. http://dx.doi.org/10.1109/ms.1987.231774.

13. Savor, T., and R. E. Seviora. "Toward automatic detection of software failures." Computer 31, no. 8 (1998): 68–74. http://dx.doi.org/10.1109/2.707619.

14. Xie, Min. "A shock model for software failures." Microelectronics Reliability 27, no. 4 (January 1987): 717–24. http://dx.doi.org/10.1016/0026-2714(87)90018-7.
15. Cui, Yunhe, Qing Qian, Guowei Shen, Chun Guo, and Saifei Li. "REVERT: A Network Failure Recovery Method for Data Center Networks." Electronics 9, no. 8 (July 23, 2020): 1187. http://dx.doi.org/10.3390/electronics9081187.

Abstract: As a repository of computing, storage, network, and other facilities, the Software Defined Data Center (SDDC) provides computing and storage resources for users, and it is important that it provide continuous service. To achieve high reliability in Software Defined Data Center Networks (SDDCNs), a network failure recovery method named REVERT is proposed. REVERT classifies network failures in SDDCNs into three types: switch failures, failures of links among switches, and failures of links between switches and servers. Notably, in addition to recovering from switch failures and inter-switch link failures, REVERT can also recover from failures of links between switches and servers. To achieve this, REVERT comprises a failure preprocessing method for classifying network failures, a data structure for storing and finding the affected flows, a server cluster agent for communicating with the server clustering algorithm, and a routing path calculation method. REVERT has been implemented and evaluated on the RYU controller and Mininet using three routing algorithms. Compared with the link usage before recovery, when there are more than 200 flows in the network the mean link usage increases only slightly, by about 1.83 percent. More importantly, the evaluation results demonstrate that REVERT successfully recovers not only from switch failures and intra-topology link failures, but also from failures of links between servers and edge switches.
16. Li, Chunxiu, Xin Li, Ke Li, Jiafu Huang, Zhansheng Feng, Shanzhi Chen, Hong Zhang, and Yulong Shi. "Relationship-Oriented Software Defined AS-Level Fast Rerouting for Multiple Link Failures." Mathematical Problems in Engineering 2015 (2015): 1–15. http://dx.doi.org/10.1155/2015/838340.

Abstract: Large-scale deployments of mission-critical services have led to stringent demands on Internet routing, but frequently occurring network failures can dramatically degrade network performance, and the Border Gateway Protocol (BGP) cannot react quickly enough to recover from them. Although extensive research has addressed the problem, multiple-failure scenarios have never been properly handled, owing to the limits of a distributed control plane. In this paper, we propose a local fast-reroute approach to recover effectively from multiple link failures within one administrative domain. The principle of Software Defined Networking (SDN) is used to achieve software-defined AS-level fast rerouting. Taking AS relationships into account, efficient algorithms are proposed to automatically and dynamically find protection paths for multiple link failures; OpenFlow forwarding rules are then installed on routers to provide data-forwarding continuity. Our approach ensures applicability to ASes with flexibility and adaptability to multiple link failures, improving network performance. Experimental results show that our proposal provides effective failure recovery without introducing significant control overhead to the network.
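At its core, any such reroute scheme must find a protection path that avoids the failed links. A toy sketch of that step (plain breadth-first search on a hypothetical AS-level graph, ignoring the AS-relationship constraints the paper's algorithms enforce):

```python
from collections import deque

def protection_path(adj, src, dst, failed_links):
    """Shortest-hop path from src to dst that avoids every failed link.
    adj: {node: set of neighbors}; failed_links: set of frozenset({u, v})."""
    parent, queue = {src: None}, deque([src])
    while queue:
        u = queue.popleft()
        if u == dst:                       # reconstruct the path backwards
            path = []
            while u is not None:
                path.append(u)
                u = parent[u]
            return path[::-1]
        for v in adj[u]:
            if v not in parent and frozenset((u, v)) not in failed_links:
                parent[v] = u
                queue.append(v)
    return None  # dst unreachable once the failed links are removed

# Hypothetical AS-level topology:
adj = {
    "AS1": {"AS2", "AS3"},
    "AS2": {"AS1", "AS4"},
    "AS3": {"AS1", "AS4"},
    "AS4": {"AS2", "AS3"},
}
# Two simultaneous link failures:
failed = {frozenset(("AS1", "AS2")), frozenset(("AS2", "AS4"))}
print(protection_path(adj, "AS1", "AS4", failed))  # ['AS1', 'AS3', 'AS4']
```

A controller would translate the returned path into OpenFlow forwarding rules; the paper's contribution lies in making this search respect AS business relationships, which the sketch omits.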
17. Johnson, Chris. "Forensic software engineering: are software failures symptomatic of systemic problems?" Safety Science 40, no. 9 (December 2002): 835–47. http://dx.doi.org/10.1016/s0925-7535(01)00086-8.
18. Damodaran, D., B. Ravikumar, and Velimuthu Ramachandran. "Bayesian Software Reliability Model Combining Two Priors and Predicting Total Number of Failures and Failure Time." International Journal of Reliability, Quality and Safety Engineering 21, no. 06 (December 2014): 1450031. http://dx.doi.org/10.1142/s0218539314500314.

Abstract: Reliability statistics is divided into two mutually exclusive camps, Bayesian and classical. The classical statistician believes that all distribution parameters are fixed values, whereas Bayesians treat parameters as random variables with distributions of their own. The Bayesian approach has been applied to software failure data, and several Bayesian software reliability models have been formulated over the last three decades. A Bayesian approach to software reliability measurement was taken by Littlewood and Verrall [A Bayesian reliability growth model for computer software, Appl. Stat. 22 (1973) 332–346], who modeled the hazard rate as a random variable. In this paper, a new Bayesian software reliability model is proposed that combines two prior distributions for predicting the total number of failures and the next failure time of the software. The popular and realistic Jelinski and Moranda (J&M) model is taken as the base, with a Bayesian approach applied to it. It is assumed that one parameter of the J&M model, the number of faults N, follows a uniform prior distribution, and that the failure rate parameter Φi follows a gamma prior distribution; the joint prior p(N, Φi) is obtained by combining these two priors. In this Bayesian model, the times between failures follow exponential distributions whose failure rate parameters are stochastically decreasing over successive failure intervals, reflecting the tester's intention to improve software quality through the correction of each failure. The predictive distribution is arrived at by combining the exponential times between failures (TBFs) and the joint prior p(N, Φi), and maximum likelihood estimation (MLE) is adopted for parameter estimation. The proposed model has been applied to two sets of actual software failure data, and the predicted failure times are closer to the actual failure times than those of the Littlewood–Verrall (LV) model; the sum of squared errors (SSE) criterion is used to compare actual and predicted times between failures for both models.
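The J&M base model is simple enough to fit directly: the i-th inter-failure time is exponential with rate Φ(N − i + 1). A sketch of the classical (non-Bayesian) maximum-likelihood fit by grid search over N, run on synthetic data — not the paper's Bayesian estimator:

```python
import math
import random

def jm_mle(times, n_max=200):
    """Grid-search maximum-likelihood fit of the Jelinski-Moranda model.
    times: observed inter-failure times t_1..t_n.
    Given N, the MLE of phi is n / sum_i (N - i + 1) * t_i."""
    n = len(times)
    best_n, best_phi, best_ll = None, None, -math.inf
    for N in range(n, n_max + 1):            # at least n faults must exist
        # enumerate gives i = 0..n-1, so (N - i) equals N - (i+1) + 1
        weight = sum((N - i) * t for i, t in enumerate(times))
        phi = n / weight
        ll = sum(math.log(phi * (N - i)) - phi * (N - i) * t
                 for i, t in enumerate(times))
        if ll > best_ll:
            best_n, best_phi, best_ll = N, phi, ll
    return best_n, best_phi

# Synthetic data from a known J&M process (N = 30 faults, phi = 0.02):
random.seed(42)
times = [random.expovariate(0.02 * (30 - i)) for i in range(20)]
N_hat, phi_hat = jm_mle(times)
print(N_hat, phi_hat)
```

The J&M MLE of N is known to be unstable on short failure histories, which is part of the motivation for placing priors on N and Φ as the paper does.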
19. Knafl, G. J., J. A. Morgan, R. L. Follenweider, and R. M. Karcich. "Software Failure Data Analysis Using the Least Squares Approach and the Time per Failure Concept." International Journal of Reliability, Quality and Safety Engineering 02, no. 02 (June 1995): 161–76. http://dx.doi.org/10.1142/s0218539395000137.

Abstract: We adapt data-analytic techniques to the software reliability setting. We develop an evaluation procedure based on scatterplots of transformed data, cross-validation using the predicted residual sum of squares (PRESS) criterion, residual plots, and normal plots, and use it to analyze a software failure data set collected at Storage Technology Corporation. We identify a new model which, for this data set, outperforms several established software reliability models, including the delayed S-shaped, exponential, inverse linear, logarithmic, power, and log power models. The failure intensity, and hence the reliability, of this model at any point in time is a function of the time per failure, that is, cumulative time divided by cumulative failures, a quantity that agrees with the mean time between failures at time points where failures occur.
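The time-per-failure quantity on which the model is built is simply cumulative time over cumulative failures; a minimal illustration with hypothetical failure times:

```python
def time_per_failure(failure_times):
    """Time per failure at each observed failure: cumulative elapsed time
    divided by the cumulative number of failures. At points where a failure
    occurs this equals the mean time between failures."""
    return [t / k for k, t in enumerate(failure_times, start=1)]

# Hypothetical cumulative failure times (hours):
times = [10.0, 25.0, 45.0, 70.0, 100.0]
print(time_per_failure(times))  # [10.0, 12.5, 15.0, 17.5, 20.0]
```

An increasing time per failure signals reliability growth; the paper's model makes the failure intensity a function of this ratio.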
20. Molnár, Vince, and István Majzik. "Model Checking-based Software-FMEA: Assessment of Fault Tolerance and Error Detection Mechanisms." Periodica Polytechnica Electrical Engineering and Computer Science 61, no. 2 (April 24, 2017): 132. http://dx.doi.org/10.3311/ppee.9755.

Abstract: Failure Mode and Effects Analysis (FMEA) is a systematic technique to explore the possible failure modes of individual components or subsystems and determine their potential effects at the system level. Applications of FMEA are common in case of hardware and communication failures, but analyzing software failures (SW-FMEA) poses a number of challenges. Failures may originate in permanent software faults, commonly called bugs, and their effects can be very subtle and hard to predict, due to the complex nature of programs. Therefore, a behavior-based automatic method to analyze the potential effects of different types of bugs is desirable. Such a method could be used to automatically build an FMEA report about the fault effects, or to evaluate different failure mitigation and detection techniques. This paper follows the latter direction, using a model checking-based automated SW-FMEA approach to evaluate error detection and fault tolerance mechanisms, demonstrated on a case study inspired by safety-critical embedded operating systems.
21. Bastani, Farokh B., Ing-Ray Chen, and Ta-Wei Tsao. "A Software Reliability Model for Artificial Intelligence Programs." International Journal of Software Engineering and Knowledge Engineering 03, no. 01 (March 1993): 99–114. http://dx.doi.org/10.1142/s0218194093000057.

Abstract: In this paper we develop a software reliability model for Artificial Intelligence (AI) programs. We show that conventional software reliability models must be modified to incorporate certain special characteristics of AI programs, such as (1) failures due to intrinsic faults, e.g., limitations due to heuristics and other basic AI techniques, (2) fuzzy correctness criterion, i.e., difficulty in accurately classifying the output of some AI programs as correct or incorrect, (3) planning-time versus execution-time tradeoffs, and (4) reliability growth due to an evolving knowledge base. We illustrate the approach by modifying the Musa-Okumoto software reliability growth model to incorporate failures due to intrinsic faults and to accept fuzzy failure data. The utility of the model is exemplified with a robot path-planning problem.
22. Dalcher, Darren, and Colin Tully. "Learning from Failures." Software Process: Improvement and Practice 7, no. 2 (June 2002): 71–89. http://dx.doi.org/10.1002/spip.156.
23. Song, Chang, and Pham. "NHPP Software Reliability Model with Inflection Factor of the Fault Detection Rate Considering the Uncertainty of Software Operating Environments and Predictive Analysis." Symmetry 11, no. 4 (April 10, 2019): 521. http://dx.doi.org/10.3390/sym11040521.

Abstract: Non-homogeneous Poisson process (NHPP) software reliability models play a crucial role in analyzing computer systems. Software is used in various environments: it is developed and tested in a controlled environment, while real-world operating environments may differ, so the uncertainty of the operating environment must be considered. Moreover, predicting software failures is an important subject of study, not only for software developers but also for companies and research institutes. A software reliability model can measure and predict the number of software failures, software failure intervals, software reliability, and failure rates. In this paper, we propose a new model with an inflection factor in the fault detection rate function that considers the uncertainty of operating environments, and we analyze how the predictions of the proposed model differ from those of other models. We compare the proposed model with several existing NHPP software reliability models on real software failure datasets using ten criteria. The results show that the proposed model has significantly better goodness-of-fit and predictability than the other models.
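For orientation, the sketch below uses the classic Goel-Okumoto NHPP mean value function (not the proposed inflection-factor model) to show how such models yield expected failure counts and conditional reliability; the parameters are hypothetical:

```python
import math

def mean_failures(t, a, b):
    """Goel-Okumoto NHPP mean value function m(t) = a(1 - e^{-bt}):
    expected cumulative number of failures observed by time t."""
    return a * (1.0 - math.exp(-b * t))

def conditional_reliability(x, s, a, b):
    """Probability of no failure in (s, s+x], given testing up to time s:
    R(x|s) = exp(-(m(s+x) - m(s)))."""
    return math.exp(-(mean_failures(s + x, a, b) - mean_failures(s, a, b)))

# Hypothetical parameters: a = 120 expected total faults, b = 0.05 per hour.
a, b = 120.0, 0.05
print(mean_failures(40.0, a, b))               # expected failures by t = 40
print(conditional_reliability(5.0, 40.0, a, b))
```

Models like the one proposed in the paper replace the constant fault detection rate b with an inflection-shaped function and add a random operating-environment factor; the reliability computation keeps the same structure.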
24. Rind, Touqeer Ali, Ashfaque Ahmed Jhatial, Abdul Razzaque Sandhu, Imtiaz Ali Bhatti, and Sajeel Ahmed. "Fatigue and Rutting Analysis of Asphaltic Pavement Using “KENLAYER” Software." Journal of Applied Engineering Sciences 9, no. 2 (December 1, 2019): 177–82. http://dx.doi.org/10.2478/jaes-2019-0024.

Abstract: Rutting and fatigue are the main premature failures among pavement distresses, as they have a wide effect on pavement performance. Sudden variation in heavy axle loads, improper mix design, and the traditional design methodologies used in the pavement design industry are the major factors behind these failures, and for proper performance and good serviceability these premature distresses must be resisted. There is thus a need for a mechanistic design methodology such as the KENPAVE software, so that traditional design errors can be overcome. KENLAYER, a component of KENPAVE, is used to accurately calculate the stresses and strains in asphaltic pavement that determine the allowable repetitions for rutting and fatigue failure using the Asphalt Institute design models. Resistance to rutting failure is checked by calculating the vertical compressive stress at the top of the soil subgrade layer, while resistance to fatigue failure is checked by calculating the horizontal tensile strain at the bottom of the asphaltic layer. The objective of this study is therefore to analyze a flexible pavement with respect to rutting and fatigue distress using KENLAYER, taking the NHA (N-55) road section in Sehwan, Pakistan as the reference pavement. The pavement was analyzed by altering the thicknesses of the bituminous courses by ±25 percent, yielding a total of 20 cross-sections analyzed in KENLAYER for rutting and fatigue.
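The allowable-repetition checks described above are power-law transfer functions of the computed pavement responses. A hedged sketch using the Asphalt Institute coefficients as they are commonly quoted (the coefficients and units are assumptions to verify against the design manual before any real use; the strain inputs below are hypothetical, with the asphalt modulus in psi):

```python
def fatigue_repetitions(eps_t, E_psi):
    """Allowable load repetitions to fatigue cracking, Asphalt Institute form:
    N_f = 0.0796 * eps_t^-3.291 * E^-0.854
    eps_t: horizontal tensile strain at the bottom of the asphalt layer;
    E_psi: asphalt modulus in psi. Coefficients as commonly quoted."""
    return 0.0796 * eps_t ** -3.291 * E_psi ** -0.854

def rutting_repetitions(eps_c):
    """Allowable repetitions to rutting: N_d = 1.365e-9 * eps_c^-4.477,
    eps_c: vertical compressive strain on top of the subgrade."""
    return 1.365e-9 * eps_c ** -4.477

# Hypothetical responses from a layered-elastic run (e.g. KENLAYER output):
N_f = fatigue_repetitions(2.0e-4, 4.0e5)
N_d = rutting_repetitions(3.5e-4)
print(f"fatigue life: {N_f:.3e} repetitions, rutting life: {N_d:.3e}")
```

The governing distress is whichever criterion yields the smaller allowable repetitions; the study repeats this comparison for each of its 20 trial cross-sections.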
25. Li, Dawei, Jie Wu, Dajin Wang, and Jiayin Wang. "Software-Defined Networking Switches for Fast Single-Link Failure Recovery." Journal of Interconnection Networks 18, no. 04 (December 2018): 1850014. http://dx.doi.org/10.1142/s0219265918500147.

Abstract: In this paper, we consider IP fast recovery from single-link failures in a given network topology. The basic idea is to replace some existing routers with a designated switch. When a link fails, the affected router will send all the affected traffic to the designated switch (through pre-configured IP tunnels), which will deliver the affected traffic to its destination without using the failed link. The goal of the approach is to achieve faster failure recovery than traditional routing protocols that employ reactive computing upon link failures. Software-Defined Networking (SDN) switches can serve as the designated switches because they can flexibly redirect affected traffic to other routes, instead of only to the shortest paths in the network. However, SDN switches are very expensive. Our objective is to minimize the number of SDN switches needed and to guarantee that the network can still recover from any single-link failure. For networks with uniform link costs, we show that using normal non-SDN switches with IP tunneling capability as designated switches can guarantee recovery from any single-link failure. For networks with general link costs, we find that not all single-link failures can be recovered by using non-SDN switches as designated switches; by using SDN switches only when necessary, we can reduce the total number of SDN switches needed compared to an existing work. We conduct extensive simulations to verify our proposed approaches.
26. Fakhrolmobasheri, Sharifeh, Ehsan Ataie, and Ali Movaghar. "Modeling and Evaluation of Power-Aware Software Rejuvenation in Cloud Systems." Algorithms 11, no. 10 (October 18, 2018): 160. http://dx.doi.org/10.3390/a11100160.

Abstract: Long and continuous running of software can cause software aging-induced errors and failures. Cloud data centers suffer from these kinds of failures when Virtual Machine Monitors (VMMs), which control the execution of Virtual Machines (VMs), age. Software rejuvenation is a proactive fault management technique that can prevent the occurrence of future failures by terminating VMMs, cleaning up their internal states, and restarting them. However, the appropriate time and type of VMM rejuvenation can affect performance, availability, and power consumption of a system. In this paper, an analytical model is proposed based on Stochastic Activity Networks for performance evaluation of Infrastructure-as-a-Service cloud systems. Using the proposed model, a two-threshold power-aware software rejuvenation scheme is presented. Many details of real cloud systems, such as VM multiplexing, migration of VMs between VMMs, VM heterogeneity, failure of VMMs, failure of VM migration, and different probabilities for arrival of different VM request types are investigated using the proposed model. The performance of the proposed rejuvenation scheme is compared with two baselines based on diverse performance, availability, and power consumption measures defined on the system.
27. Yi, Qiuping, Zijiang Yang, Jian Liu, Chen Zhao, and Chao Wang. "Explaining Software Failures by Cascade Fault Localization." ACM Transactions on Design Automation of Electronic Systems 20, no. 3 (June 24, 2015): 1–28. http://dx.doi.org/10.1145/2738038.

28. Grottke, Michael, Dong Seong Kim, Rajesh Mansharamani, Manoj Nambiar, Roberto Natella, and Kishor S. Trivedi. "Recovery From Software Failures Caused by Mandelbugs." IEEE Transactions on Reliability 65, no. 1 (March 2016): 70–87. http://dx.doi.org/10.1109/tr.2015.2452933.

29. Knight, J. C., and N. G. Leveson. "Correlated Failures in Multi-Version Software." IFAC Proceedings Volumes 18, no. 12 (October 1985): 159–65. http://dx.doi.org/10.1016/s1474-6670(17)60100-9.

30. Fenton, Norman E., and Martin Neil. "Software metrics: successes, failures and new directions." Journal of Systems and Software 47, no. 2-3 (July 1999): 149–57. http://dx.doi.org/10.1016/s0164-1212(99)00035-7.

31. Pfening, András, Sachin Garg, Antonio Puliafito, Miklós Telek, and Kishor S. Trivedi. "Optimal software rejuvenation for tolerating soft failures." Performance Evaluation 27-28 (October 1996): 491–506. http://dx.doi.org/10.1016/s0166-5316(96)90042-5.

32. Pfening, A. "Optimal software rejuvenation for tolerating soft failures." Performance Evaluation 27-28, no. 1 (October 1996): 491–506. http://dx.doi.org/10.1016/0166-5316(96)00038-7.
33. Anderson, Paul. "Software Failures Reduction with Static Analysis Tools." ATZelektronik worldwide 10, no. 3 (May 30, 2015): 4–9. http://dx.doi.org/10.1007/s38314-015-0523-z.

34. Salem, Ahmed M., Kamel Rekab, and James A. Whittaker. "Prediction of software failures through logistic regression." Information and Software Technology 46, no. 12 (September 2004): 781–89. http://dx.doi.org/10.1016/j.infsof.2003.10.008.
35. Chhillar, Dheeraj, and Kalpana Sharma. "Proposed T-Model to cover 4S quality metrics based on empirical study of root cause of software failures." International Journal of Electrical and Computer Engineering (IJECE) 9, no. 2 (April 1, 2019): 1122. http://dx.doi.org/10.11591/ijece.v9i2.pp1122-1130.

Abstract: There are various root causes of software failures. A few years ago, software failed mainly because of functionality-related bugs, arising from requirement misunderstanding, coding issues, and a lack of functional testing. Much work has been done on this, and software engineering has matured over time, so software now rarely fails because of functionality-related bugs. To understand the most recent failures, we examined recent software development methodologies and technologies; this paper discusses the background of those technologies and the progression of testing over time. A survey of more than 50 senior IT professionals was conducted to understand the root causes of their software project failures, and further research identified the most recent and most severe software failures. Our study reveals that the main cause of software failures today is a lack of testing of non-functional parameters, chiefly security and performance. This has become more challenging with developments in fields such as the Internet of Things (IoT), Cloud of Things (CoT), artificial intelligence, machine learning, and robotics, and with the pervasive use of mobile technology by the masses. Finally, we propose a software development model, called the T-model, to ensure that both the breadth and the depth of software are considered during design and testing.
36. Shailly, Ms. "A critical review based on Fault Tolerance in Software Defined Networks." Turkish Journal of Computer and Mathematics Education (TURCOMAT) 12, no. 2 (April 11, 2021): 456–61. http://dx.doi.org/10.17762/turcomat.v12i2.849.

Abstract: Software-Defined Networking (SDN) is an emerging architecture that decouples the control plane from the data plane for dynamic management of the network. SDN is being deployed in production networks, which ultimately leads to the need for secure and fault-tolerant SDN. In this investigation, we discuss and label the kinds of failures that occur in SDN and critically survey recently proposed mechanisms for handling them. We first discuss, with the help of tabular data, mechanisms for handling data plane failures, and then cover mechanisms for handling switch flow-table misconfiguration and control plane failures. We also summarize open issues with both the data plane and control plane mechanisms discussed. We conclude that more efficient and secure fault-handling mechanisms need to be built for SDN networks.
37

BERZTISS, ALFS T. "SAFETY-CRITICAL SOFTWARE: A RESEARCH AGENDA." International Journal of Software Engineering and Knowledge Engineering 04, no. 02 (June 1994): 165–81. http://dx.doi.org/10.1142/s021819409400009x.

Full text
Abstract:
A system is safety-critical if failure of the system would result in loss of human life, personal injury, or significant material loss. Software that supports or supplants human control of safety-critical systems has to be highly reliable, but much research remains to be done before all the reliability-related issues of safety-critical software are fully understood. We discuss a research agenda under the following headings: completeness of requirements, readability of specifications, validation tools, validation of responsiveness, verification of implementations, software robustness, common-cause failures, reuse of reliable software, the software process, and ethical issues.
APA, Harvard, Vancouver, ISO, and other styles
38

SAWADA, KIYOSHI, and HIROAKI SANDOH. "A SUMMARY OF SOFTWARE RELIABILITY DEMONSTRATION TESTING MODELS." International Journal of Reliability, Quality and Safety Engineering 06, no. 01 (March 1999): 65–80. http://dx.doi.org/10.1142/s0218539399000085.

Full text
Abstract:
This paper summarizes models for software reliability demonstration testing (SRDT). The models are briefly classified into three types: (1) continuous models, (2) discrete models and (3) models considering damage sizes of software failures. Under the continuous models, the software product of interest is tested for time t and is accepted if the number of software failures in the test does not exceed a prespecified integer s. The values of the design variables t and s are determined based on (i) the concept of a statistical test (a statistical model) and (ii) the Kullback–Leibler information (a K–L model). The K–L model has fewer parameters to be prespecified than the statistical model. Under the discrete models for SRDT, the software of interest is tested with n input data sets and is accepted if the number of input data sets causing software failures in the test does not exceed a prespecified integer c. A statistical model as well as a K–L model is described for the discrete models. Neither the continuous nor the discrete models above take the damage size of software failures into consideration. The third type comprises continuous and discrete models that consider the cumulative damage size caused by software failures as well as the number of software failures in the test.
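The continuous accept/reject rule described in this abstract can be sketched in a few lines. This is an illustrative reconstruction, not the paper's procedure: the function name `srdt_decision` and the example failure times are hypothetical, and in the models above t and s would be derived from the statistical or Kullback–Leibler criterion rather than chosen by hand.

```python
def srdt_decision(failure_times, t, s):
    """Accept the software if at most s failures occur within test time t."""
    failures_in_test = sum(1 for ft in failure_times if ft <= t)
    return "accept" if failures_in_test <= s else "reject"

# Three failures within the 100-hour test exceed s = 2, so the product is rejected.
print(srdt_decision([12.5, 40.0, 95.0], t=100.0, s=2))
```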
APA, Harvard, Vancouver, ISO, and other styles
39

Wee, Nam-Sook. "Optimal Maintenance Schedules of Computer Software." Probability in the Engineering and Informational Sciences 4, no. 2 (April 1990): 243–55. http://dx.doi.org/10.1017/s026996480000156x.

Full text
Abstract:
We present a decision procedure to determine the optimal maintenance intervals of computer software throughout its operational phase. Our model accounts for the average cost per maintenance activity and the damage cost per failure, with future costs discounted. Our decision policy is optimal in the sense that it minimizes the expected total cost. Our model assumes that the total number of errors in the software has a Poisson distribution with known mean λ and that each error causes failures independently of other errors at a known constant failure rate. We study the structure of the optimal policy in terms of λ and present efficient numerical algorithms to compute the optimal maintenance time intervals, the optimal total number of maintenances, and the minimal total expected cost throughout the maintenance phase.
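Under the failure model assumed above (a Poisson number of errors with mean λ, each causing failures independently at a constant rate), the expected number of errors surfaced by time t takes the familiar exponential form λ(1 − e^(−φt)). The sketch below illustrates that assumption only; the parameter values are hypothetical, and the paper's actual algorithms optimize maintenance schedules on top of such a model.

```python
import math

def expected_errors_detected(lam, phi, t):
    """Mean number of errors surfaced by time t: lam latent errors on average,
    each failing independently at constant rate phi."""
    return lam * (1.0 - math.exp(-phi * t))

# With 100 latent errors on average and a per-error failure rate of 0.05/hour,
# roughly 70 errors are expected to surface within the first 24 hours.
print(round(expected_errors_detected(lam=100, phi=0.05, t=24), 2))
```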
APA, Harvard, Vancouver, ISO, and other styles
40

Tokuno, Koichi, and Shigeru Yamada. "Markovian Availability Measurement and Assessment for Hardware-Software System." International Journal of Reliability, Quality and Safety Engineering 04, no. 03 (September 1997): 257–68. http://dx.doi.org/10.1142/s0218539397000187.

Full text
Abstract:
It is important to take the trade-off between hardware and software systems into account when total computer-system reliability/performance is evaluated and assessed. We develop an availability model for a hardware-software system. The system treated here consists of one hardware subsystem and one software subsystem, and it is assumed that the system is down and restored whenever a hardware or a software failure occurs. In particular, for the software subsystem, it is supposed that (i) the restoration actions are not always performed perfectly, (ii) the restoration times for later software failures become longer and (iii) reliability growth occurs with each perfect restoration action. The hardware and software failure-occurrence phenomena are described by constant and geometrically decreasing hazard rates, respectively. The time-dependent behavior of the system, which alternates between the operational state, in which the system operates without failures, and the restoration state, in which the system is inoperable and being restored, is described by a Markov process. Useful expressions for several quantitative measures of system performance are derived from this model. Finally, numerical examples are presented to illustrate system availability measurement and assessment.
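The alternating operation/restoration behavior described above can be illustrated, in a much-reduced form, by a two-state Markov model per subsystem with steady-state availability μ/(λ+μ). The rates below are hypothetical, and this sketch deliberately omits the paper's imperfect restoration and geometrically decreasing software hazard rates.

```python
def steady_state_availability(failure_rate, restoration_rate):
    """Long-run fraction of time a two-state (up/down) Markov subsystem is up."""
    return restoration_rate / (failure_rate + restoration_rate)

hw = steady_state_availability(failure_rate=0.001, restoration_rate=0.1)
sw = steady_state_availability(failure_rate=0.005, restoration_rate=0.2)

# The system is operational only when both subsystems are up.
system_availability = hw * sw
print(round(system_availability, 4))
```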
APA, Harvard, Vancouver, ISO, and other styles
41

Medapati, Jagadeesh, Anand Chandulal Jasti, and T. V. Rajinikanth. "A robust software reliability growth model for accurate detection of software failures." International Journal of Software Engineering, Technology and Applications 1, no. 1 (2022): 1. http://dx.doi.org/10.1504/ijseta.2022.10044820.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Sawada, Kiyoshi, and Hiroaki Sandoh. "Software reliability demonstration testing with consideration of damage size of software failures." Electronics and Communications in Japan (Part III: Fundamental Electronic Science) 82, no. 5 (May 1999): 10–21. http://dx.doi.org/10.1002/(sici)1520-6440(199905)82:5<10::aid-ecjc2>3.0.co;2-m.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

WOHLIN, CLAES, ANDERS WESSLÉN, and PER RUNESON. "SOFTWARE RELIABILITY ESTIMATIONS THROUGH USAGE ANALYSIS OF SPECIFICATIONS AND DESIGNS." International Journal of Reliability, Quality and Safety Engineering 03, no. 02 (June 1996): 101–17. http://dx.doi.org/10.1142/s0218539396000089.

Full text
Abstract:
This paper proposes a method for estimating software reliability before the implementation phase. The method assumes that a formal specification technique is used and that it is possible to develop a tool performing dynamic analysis, i.e., locating semantic faults in the design. The analysis is performed both by applying a usage profile as input and by doing a full analysis, i.e., locating all faults that the tool can find. The tool must provide failure data in terms of time since the last failure was detected. The mapping of the dynamic failures to the failures encountered during statistical usage testing and operation is discussed. The method can be applied either to the software specification or, as a step in the development process, to the software design. The proposed method allows for software reliability estimations that can be used both as a quality indicator and for planning and controlling resources, development times, etc. at an early stage in the development of software systems.
APA, Harvard, Vancouver, ISO, and other styles
44

Verma, Vibha, Sameer Anand, and Anu Gupta Aggarwal. "Software warranty cost optimization under imperfect debugging." International Journal of Quality & Reliability Management 37, no. 9/10 (October 31, 2019): 1233–57. http://dx.doi.org/10.1108/ijqrm-03-2019-0088.

Full text
Abstract:
Purpose The purpose of this paper is to identify and quantify the key components of the overall cost of software development when warranty coverage is given by a developer. The authors also study the impact of imperfect debugging on the optimal release time, warranty policy and development cost, which signifies that it is important for developers to control the parameters that cause a sharp increase in cost. Design/methodology/approach An optimization problem is formulated to minimize software development cost by considering an imperfect fault removal process, fault generation at a constant rate and an environmental factor to differentiate the operational phase from the testing phase. Another optimization problem under perfect debugging conditions, i.e. without error generation, is constructed for comparison. These optimization models are solved in MATLAB, and their solutions provide insight into the degree of impact of imperfect debugging on the optimal policies with respect to software release time and warranty time. Findings A real-life fault data set of a radar system is used to study the impact of various cost factors via sensitivity analysis on the release and warranty policy. If firms provide a warranty for a longer period of time, they may have to bear losses due to increased debugging cost as more failures occur during the warrantied time; but if the warranty is not provided for a sufficient time, it may not act as a sufficient hedge against field failures. Originality/value Every firm is fighting to remain competitive and expand its market share by offering the latest technology-based products, using innovative marketing strategies. A warranty is one such strategic tool to promote the product among the masses and develop a sense of quality in the user’s mind. In this paper, the failures encountered during development and after software release are considered to model the failure process.
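The release-time trade-off this abstract describes (testing longer costs more up front but leaves fewer faults to fail under warranty) can be sketched with a deliberately simple cost model. Everything here is assumed for illustration: the exponential mean-value function, the cost constants and the warranty window w are hypothetical, and the paper's actual model additionally handles imperfect debugging and error generation.

```python
import math

def total_cost(T, a=100, b=0.05, c_test=20.0, c_fix_test=1.0,
               c_fix_field=50.0, w=500.0):
    """Testing cost up to release time T plus warranty-period failure cost."""
    m = lambda t: a * (1.0 - math.exp(-b * t))   # expected faults found by time t
    testing = c_test * T + c_fix_test * m(T)     # time-proportional + per-fault cost
    field = c_fix_field * (m(T + w) - m(T))      # faults surfacing under warranty
    return testing + field

# Crude grid search for the cost-minimizing release time.
best_T = min(range(1, 400), key=total_cost)
print(best_T, round(total_cost(best_T), 1))
```

With these made-up constants, releasing too early is dominated by expensive field fixes and releasing too late by testing cost, so an interior optimum exists.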
APA, Harvard, Vancouver, ISO, and other styles
45

WATKINS, A., E. M. HUFNAGEL, D. BERNDT, and L. JOHNSON. "USING GENETIC ALGORITHMS AND DECISION TREE INDUCTION TO CLASSIFY SOFTWARE FAILURES." International Journal of Software Engineering and Knowledge Engineering 16, no. 02 (April 2006): 269–91. http://dx.doi.org/10.1142/s021819400600277x.

Full text
Abstract:
This paper describes two laboratory experiments designed to evaluate a failure-pursuit strategy for system-level testing. In the first experiment, two GAs are used to automatically generate test suites that are rich in failure-causing test cases. Their performance is compared to random generation. The resulting test suites are then used to train a series of decision trees, producing rules for classifying other test cases. Finally, the performance of the classification rules is evaluated empirically. The results indicate that the combination of GA-based test case generation and decision tree induction can produce rules with high predictive accuracy that can assist human testers in diagnosing the cause of system failures.
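The decision-tree-induction stage described above can be reduced, for illustration, to inducing a depth-1 tree (a stump) over labeled test cases. The data and the failure rule below are synthetic, and in the paper the failure-rich training suites come from the genetic algorithms rather than being written by hand.

```python
def induce_stump(X, y):
    """Pick the (feature, threshold) split 'x[f] > thr => failure' that
    misclassifies the fewest training test cases."""
    best = None  # (errors, feature, threshold)
    for f in range(len(X[0])):
        for thr in sorted({row[f] for row in X}):
            errors = sum((row[f] > thr) != bool(label) for row, label in zip(X, y))
            if best is None or errors < best[0]:
                best = (errors, f, thr)
    return best

# Synthetic suite: label 1 marks a failure-causing test case, and failures
# occur exactly when the first input parameter exceeds 5.
X = [[1, 3], [2, 7], [6, 1], [8, 4], [3, 9], [9, 2], [7, 8], [4, 5]]
y = [0, 0, 1, 1, 0, 1, 1, 0]
errors, feature, threshold = induce_stump(X, y)
print(f"rule: feature {feature} > {threshold} -> failure ({errors} training errors)")
```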
APA, Harvard, Vancouver, ISO, and other styles
46

HECHT, MYRON, HERBERT HECHT, and XUEGAO AN. "USE OF COMBINED SYSTEM DEPENDABILITY AND SOFTWARE RELIABILITY GROWTH MODELS." International Journal of Reliability, Quality and Safety Engineering 09, no. 04 (December 2002): 289–303. http://dx.doi.org/10.1142/s0218539302000846.

Full text
Abstract:
This paper describes how MEADEP, a system-level dependability prediction tool, and CASRE, a software reliability growth prediction tool, can be used together to predict system reliability (probability of failure in a given time interval), availability (proportion of time service is available), and performability (reward-weighted availability). The system includes COTS hardware, COTS software, radar, and communication gateways. The performability metric also accounts for capacity changes as processors in a cluster fail and recover. The Littlewood-Verrall and Geometric models are used to predict reliability growth from software test data; this prediction is integrated into a system-level Markov model that incorporates hardware failures and recoveries, redundancy, coverage failures, and capacity. The results of the combined model can be used to predict the contribution of additional testing to availability and a variety of other figures of merit that support management decisions.
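The performability (reward-weighted availability) metric mentioned above can be illustrated with a toy three-state cluster. The state probabilities and capacity rewards below are hypothetical placeholders for the steady-state solution of the paper's Markov model.

```python
# Steady-state probabilities and capacity rewards for a two-processor cluster.
state_probs   = {"both_up": 0.95, "one_up": 0.04, "down": 0.01}
state_rewards = {"both_up": 1.0,  "one_up": 0.5,  "down": 0.0}

# Performability = expected reward = sum over states of probability * capacity.
performability = sum(state_probs[s] * state_rewards[s] for s in state_probs)
print(round(performability, 2))
```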
APA, Harvard, Vancouver, ISO, and other styles
47

Satya Prasad, R., Bandla Srinivasa Rao, and R. R. L Kantham. "Assessing Software Reliability using Inter Failures Time Data." International Journal of Computer Applications 18, no. 7 (March 31, 2011): 1–3. http://dx.doi.org/10.5120/2300-2639.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Lee, Da Hye, In Hong Chang, and Hoang Pham. "Software Reliability Model with Dependent Failures and SPRT." Mathematics 8, no. 8 (August 14, 2020): 1366. http://dx.doi.org/10.3390/math8081366.

Full text
Abstract:
Software reliability and quality are crucial in several fields. Related studies have focused on software reliability growth models (SRGMs). Herein, we propose a new SRGM that assumes interdependent software failures. We conduct experiments on real-world datasets to compare the goodness-of-fit of the proposed model with the results of previous nonhomogeneous Poisson process SRGMs using several evaluation criteria. In addition, we determine software reliability using Wald’s sequential probability ratio test (SPRT), which is more efficient than the classical hypothesis test (the latter requires substantially more data and time because the test is performed only after data collection is completed). The experimental results demonstrate the superiority of the proposed model and the effectiveness of the SPRT.
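Wald's SPRT, as used in this study, can be sketched as a sequential decision over test-run outcomes. The per-run failure probabilities p0/p1 and error rates alpha/beta below are illustrative, and the paper applies the test within an SRGM with dependent failures rather than to independent Bernoulli runs as here.

```python
import math

def sprt(outcomes, p0=0.01, p1=0.05, alpha=0.05, beta=0.05):
    """Return 'accept H0' (reliable), 'accept H1' (unreliable) or 'continue'."""
    upper = math.log((1 - beta) / alpha)   # crossing up favors H1
    lower = math.log(beta / (1 - alpha))   # crossing down favors H0
    llr = 0.0
    for failed in outcomes:                # each outcome: True if the run failed
        llr += math.log(p1 / p0) if failed else math.log((1 - p1) / (1 - p0))
        if llr >= upper:
            return "accept H1"
        if llr <= lower:
            return "accept H0"
    return "continue"

# A long run of successes drives the log-likelihood ratio below the lower
# boundary, so the software is accepted as reliable without a fixed sample size.
print(sprt([False] * 80))
```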
APA, Harvard, Vancouver, ISO, and other styles
49

Emam, Khaled El, and A. Günes Koru. "A Replicated Survey of IT Software Project Failures." IEEE Software 25, no. 5 (September 2008): 84–90. http://dx.doi.org/10.1109/ms.2008.107.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

DALCHER, DARREN. "Beyond Normal Failures: Dynamic Management of Software Projects." Technology Analysis & Strategic Management 15, no. 4 (December 2003): 421–39. http://dx.doi.org/10.1080/095373203000136024.

Full text
APA, Harvard, Vancouver, ISO, and other styles