Academic literature on the topic 'Failed software'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Failed software.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Failed software"

1

Massey, David. "Liability for Failed Software." Journal of Information Systems Management 5, no. 4 (1988): 47–53. http://dx.doi.org/10.1080/07399018808962940.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Rost, J. "Political Reasons for Failed Software Projects." IEEE Software 21, no. 6 (2004): 104–3. http://dx.doi.org/10.1109/ms.2004.48.

3

Herz, Jc. "Crumbling bridges: The failed economics of software maintenance." Cyber Security: A Peer-Reviewed Journal 8, no. 2 (2025): 150. http://dx.doi.org/10.69554/slrh2550.

Abstract:
This paper defines a microeconomic framework for understanding systemic failure in cyber security as market failure. In a marketplace with limited supply-chain transparency on software quality in general and software maintenance in particular, rational actors — both software vendors and software buyers — will maximise economic returns by minimising software maintenance and security. As technical debt accrues, so do vulnerability and operational risk, as systems become more difficult to update. In this regard, the depreciation of resilience in software infrastructure resembles the breakdown of chronically undermaintained physical infrastructure, but with the added element of adversarial profit. These problems cannot be solved at the computer-science level that created them. They can only be solved as a business problem, as transparency requirements (e.g. software bills of materials, SBOMs) and automation slash the cost of diligence, enabling preferential selection of higher-quality software and continuous enforcement of terms and conditions for active maintenance.
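As an illustration of the diligence the author argues SBOMs make cheap, a minimal sketch (with an invented, CycloneDX-like structure and a stand-in maintenance feed, not any real tool's API) could flag components whose pinned versions lag their latest maintained releases:

```python
def stale_components(sbom, latest_release):
    """Illustrative sketch only: flag components in a simplified,
    CycloneDX-like SBOM whose pinned version lags the latest maintained
    release. `latest_release` stands in for a real maintenance or
    vulnerability feed; unknown components are not flagged."""
    return [c["name"] for c in sbom["components"]
            if latest_release.get(c["name"]) not in (None, c["version"])]


sbom = {"components": [{"name": "libfoo", "version": "1.2"},
                       {"name": "libbar", "version": "3.0"}]}
feed = {"libfoo": "1.3", "libbar": "3.0"}
print(stale_components(sbom, feed))  # only the lagging component is flagged
```

In the paper's terms, automating exactly this kind of comparison is what turns software maintenance from an opaque cost into a selectable, enforceable quality signal.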
4

Wang, Cong, and Rui Li Jiao. "Quick Task Queue Management Software Design." Applied Mechanics and Materials 519-520 (February 2014): 359–62. http://dx.doi.org/10.4028/www.scientific.net/amm.519-520.359.

Abstract:
To accelerate the transfer of large numbers of documents in a file transmission system, and to achieve fast recovery and restart after an accident, this paper presents a fast task-queue management method. By building the file-transfer task queue quickly and using a fault-recovery module, the software not only effectively recovers files whose transfer failed unexpectedly but also saves and restores the transfer state rapidly. The software has three functions: real-time file-status monitoring, breakpoint reading for recovery, and system logging. Test results show that the software improves the efficiency of task management and builds task lists quickly; it also restores a failed task queue to its breakpoints and continues the task when a fault occurs. The software runs stably, and its design approach has instructive value for engineering applications.
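The recovery mechanism the abstract describes (a task queue that persists progress so failed transfers resume from a recorded breakpoint) might be sketched as follows; this is a hypothetical miniature with invented names, not the paper's software:

```python
import json
from pathlib import Path


class ResumableTaskQueue:
    """Illustrative sketch: a transfer queue that persists per-task
    progress, so after a crash tasks resume from their recorded
    breakpoint instead of restarting from zero."""

    def __init__(self, state_path):
        self.state_path = Path(state_path)
        # task name -> {"size": total bytes, "done": bytes transferred}
        self.tasks = (json.loads(self.state_path.read_text())
                      if self.state_path.exists() else {})

    def add(self, name, size):
        # Re-adding a known task keeps its recorded breakpoint.
        self.tasks.setdefault(name, {"size": size, "done": 0})
        self._save()

    def transfer_chunk(self, name, chunk=4):
        """Advance one task; progress is persisted after every chunk, so
        a crash between calls loses at most the chunk in flight."""
        t = self.tasks[name]
        t["done"] = min(t["size"], t["done"] + chunk)
        self._save()
        return t["done"] == t["size"]   # True when the file is complete

    def pending(self):
        return [n for n, t in self.tasks.items() if t["done"] < t["size"]]

    def _save(self):
        self.state_path.write_text(json.dumps(self.tasks))
```

Reconstructing a queue from the same state file models the "fast recovery and restart" step: the new instance sees the old breakpoints and only the unfinished tasks remain pending.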
5

Ireland, Andrew, Gudmund Grov, Maria Teresa Llano, and Michael Butler. "Reasoned modelling critics: Turning failed proofs into modelling guidance." Science of Computer Programming 78, no. 3 (2013): 293–309. http://dx.doi.org/10.1016/j.scico.2011.03.006.

6

Frenette, Jean. "Why Enron's Auditors Failed." EDPACS 30, no. 1 (2002): 1–5. http://dx.doi.org/10.1201/1079/43284.30.1.20020701/37162.1.

7

Croucher, Ken. "Software - the Cinderella of I.T." ITNOW 31, no. 6 (1989): 19–21. https://doi.org/10.1093/combul/31.6.19.

Abstract:
Considerations of ‘quality’ in relation to software have become very prevalent, almost fashionable, in the last few months. The reasons for this are not hard to find. All large-scale computer users are realising the extent to which not only their success, but their very survival, depends on their systems working correctly. A number of highly publicized disasters and failed projects, in which software failure is the cause or may be implicated, have made the public aware of the issue. Product liability legislation will bring more pressure on software developers, who may be accountable for their products in previously unforeseen ways.
8

Moe, Nils Brede, Darja Šmite, Geir Kjetil Hanssen, and Hamish Barney. "From offshore outsourcing to insourcing and partnerships: four failed outsourcing attempts." Empirical Software Engineering 19, no. 5 (2013): 1225–58. http://dx.doi.org/10.1007/s10664-013-9272-x.

9

Turnbull, Andrew Stuart. "The Constraints of Software." Emerging Library & Information Perspectives 2, no. 1 (2019): 6–29. http://dx.doi.org/10.5206/elip.v2i1.5928.

Abstract:
Computer software media has long had intrinsic similarities to books...so why may one be borrowed in a library and not the other? The answer lies in the context and history of how computer media came to be. In this essay I explore the early history of software distribution, where many different proposals fought to succeed. I provide an overview of the software industry’s early embrace of copy-protected floppy disks as a distribution medium, and how they harmed the notion of software as a borrowable medium. Lastly, I cover how CD-ROM materials were treated as books by publishers and libraries, yet failed to realize this premise with long-term success. I argue that a combination of industry actions and technological constraints over four decades caused computer software to fail to succeed as a tangible medium that can be borrowed like a book, lent, or resold at will.
10

Chahal, Kuljit Kaur, and Munish Saini. "Open Source Software Evolution." International Journal of Open Source Software and Processes 7, no. 1 (2016): 28–48. http://dx.doi.org/10.4018/ijossp.2016010102.

Abstract:
This paper presents the results of a systematic literature review conducted to understand the Open Source Software (OSS) development process on the basis of evidence found in the empirical research studies. The study targets the OSS project evolution research papers to understand the methods and techniques employed for analysing the OSS evolution process. Our results suggest that there is lack of a uniform approach to analyse and interpret the results. The use of prediction techniques that just extrapolate the historic trends into the future should be a conscious task as it is observed that there are no long-term correlations in data of such systems. OSS evolution as a research area is still in nascent stage. Even after a number of empirical studies, the field has failed to establish a theory. There is need to formalize the field as a systematic and formal approach can produce better software.
More sources

Dissertations / Theses on the topic "Failed software"

1

Rahanu, Harjinder. "Development of a case-based reasoner as a tool to facilitate understanding of the ethical and professional issues invoked by failed information systems projects." Thesis, University of Wolverhampton, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.287959.

2

FERREIRA, FISCHER JONATAS. "AN EFFECTIVE ANALYSIS OF EXECUTABLE ASSERTIVES AS INDICATORS OF SOFTWARE FAILS." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2015. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=25792@1.

Abstract:
Absolute reliability of software is considered unattainable: even when it is built following strict quality rules, software is not free of failures during its lifetime. Software's reliability level is related, among other factors, to the number of remaining defects that will be exercised during its use. With fewer remaining defects, software is expected to fail less often, although many of these defects will never be exercised during its useful life. Moreover, developers increasingly use libraries and remote services of dubious quality. In an attempt to enable software to detect errors at runtime, the hypothesis is that Lightweight Formal Methods, through the systematic use of executable assertions, can be effective and economically viable for ensuring software reliability both at test time and at run time. The main objective of this research is to evaluate the effectiveness of executable assertions for the prevention and observation of run-time failures. Effectiveness was evaluated quantitatively by means of experiments: implementations of data structures were instrumented with executable assertions and subjected to mutation-based tests. The results showed that all non-equivalent mutants were detected by the assertions, although several of them were not detected by tests using non-instrumented versions of the programs. An estimate of the computational cost of using executable assertions is also presented. Based on the infrastructure created for the experiments, an instrumentation policy is proposed in which executable assertions remain active both during testing and during production use.
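The approach the abstract describes (a data structure instrumented with executable assertions that stay active at test time and run time, evaluated against mutants) can be illustrated with a toy sketch; the class and method names here are invented for illustration, not taken from the dissertation:

```python
import bisect


class SortedBag:
    """Illustrative sketch: a container instrumented with an executable
    assertion that re-checks its ordering invariant after every mutating
    operation, kept active both under test and in production use."""

    def __init__(self):
        self._items = []

    def _assert_invariant(self):
        # Executable assertion: the representation must stay sorted.
        assert all(a <= b for a, b in zip(self._items, self._items[1:])), \
            "invariant violated: items out of order"

    def insert(self, x):
        bisect.insort(self._items, x)   # correct implementation
        self._assert_invariant()

    def mutant_insert(self, x):
        self._items.append(x)           # simulated mutant: skips ordering
        self._assert_invariant()        # ...and is caught immediately

    def min(self):
        self._assert_invariant()
        return self._items[0]
```

A conventional test might still pass by luck after `mutant_insert` (the bag can happen to stay sorted), but the assertion flags the mutant at the defective operation itself, which mirrors the dissertation's finding that assertions detected mutants the tests missed.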
3

Brasileiro, Francisco Vilar. "Constructing fail-controlled nodes for distributed systems : a software approach." Thesis, University of Newcastle Upon Tyne, 1995. http://hdl.handle.net/10443/1971.

Abstract:
Designing and implementing distributed systems which continue to provide specified services in the presence of processing-site and communication failures is a difficult task. To facilitate their development, distributed systems have been built assuming that their underlying hardware components are fail-controlled, i.e. present a well-defined failure mode. However, if conventional hardware cannot provide the assumed failure mode, there is a need to build processing sites or nodes, and communication infrastructure, that present the assumed fail-controlled behaviour. Coupling a number of redundant processors within a replicated node is a well-known way of constructing fail-controlled nodes. Computation is replicated and executed simultaneously at each processor, and by applying suitable validation techniques to the outputs generated by processors (e.g. majority voting, comparison), outputs from faulty processors can be prevented from appearing at the application level. One way of constructing replicated nodes is to introduce hardwired mechanisms coupling replicated processors with specialised validation hardware circuits: processors are tightly synchronised at the clock-cycle level and have their outputs validated by reliable validation hardware. Another approach is to use software mechanisms to perform synchronisation of processors and validation of the outputs. The main advantage of hardware-based nodes is the minimal performance overhead incurred. However, the introduction of special circuits may increase the complexity of the design tremendously, and every new microprocessor architecture requires considerable redesign overhead. Software-based nodes do not present these problems; on the other hand, they introduce much larger performance overheads. In this thesis we investigate alternative ways of constructing efficient fail-controlled, software-based replicated nodes. In particular, we present much more efficient order protocols, which are necessary for the implementation of these nodes. Our protocols, unlike others published to date, do not require processors' physical clocks to be explicitly synchronised. The main contribution of this thesis is the precise definition of the semantics of a software-based fail-silent node, along with its efficient design, implementation and performance evaluation.
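The voting step described above (a replicated node whose outputs are validated by majority vote before reaching the application) can be sketched in a few lines. This is an illustrative Python toy under assumed names, not the thesis's actual protocol, and it deliberately ignores the hard part the thesis addresses (ordering and synchronisation of the replicas):

```python
from collections import Counter


def replicated_output(replicas, args):
    """Illustrative sketch of software-implemented output validation:
    every replica computes the result independently, and a majority vote
    suppresses the output of a faulty replica. With no majority, the node
    stays silent (returns None) rather than emit a wrong value -- the
    fail-silent behaviour."""
    outputs = [r(*args) for r in replicas]
    value, count = Counter(outputs).most_common(1)[0]
    return value if count > len(replicas) // 2 else None
```

With three replicas, one faulty processor is outvoted; with two disagreeing faults there is no majority and the node emits nothing, which is exactly the fail-controlled behaviour the application layer is allowed to assume.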
4

Petitjean, Sylvain. "Contributions au calcul géométrique effectif avec des objets courbes de faible degré." Habilitation à diriger des recherches, Institut National Polytechnique de Lorraine - INPL, 2007. http://tel.archives-ouvertes.fr/tel-00187348.

Abstract:
The physical world we live in is essentially geometric. Geometric computing is a central building block of many fields, such as computer-aided design, computer graphics, robotics, computer vision, and many others. For more than three decades, computational geometry has been the discipline dedicated to establishing solid foundations for the study of the geometric algorithms underlying these applications. Historically and traditionally, it has focused on the processing of linear objects. Many applications, however, require manipulating general objects such as complex curves and surfaces. Extending the repertoire of computational geometry to curved objects raises many difficulties: overhauling fundamental data structures and algorithms; the massive intrusion of algebraic questions; an explosion in the number of degenerate cases...

This habilitation thesis contributes to establishing effective geometric computing for curved objects of low degree. It collects my main contributions on the subject in recent years, notably: an exact, optimal, and efficient algorithm for computing a parameterization of the intersection of two quadrics with integer coefficients; the characterization of the relative positions of two projective conics using low-degree geometric predicates, uncovered through the theory of algebraic invariants; the characterization of the degeneracies of the problem of real common tangents to four spheres; and the convexity of the cone of directions of lines piercing three disjoint balls, with the important consequences of this result in geometric transversal theory. The manuscript concludes with a range of research directions continuing and extending the results obtained to date.
5

Nguena, Timo Omer Landry. "Synthèse pour une Logique Temps-Réel Faible." Phd thesis, Université Sciences et Technologies - Bordeaux I, 2009. http://tel.archives-ouvertes.fr/tel-00440829.

Abstract:
In this thesis, we are interested in the specification and synthesis of controllers for real-time systems. The models for these systems are event-recording automata. We assume that controllers observe every event occurring in the system and can forbid only controllable events; not all events are necessarily controllable. A first study concerns Event-recording Logic (ERL). We propose new algorithms for its model-checking and satisfiability problems. These algorithms highlight the similarities between those decision problems and the analogous decision problems studied for the µ-calculus, and they also correct algorithms found in the literature. The similarities identified allow us to prove the equivalence between ERL formulas and ERL formulas in disjunctive normal form. Since ERL is not expressive enough to describe certain system properties, in particular properties of controllers, we introduce a new logic, WTmu, a weak real-time extension of the µ-calculus. We propose algorithms for verifying systems whose properties are written in WTmu. We identify a fragment of WTmu called WTmu for control (C-WTmu), and propose an algorithm that checks whether a C-WTmu formula has a model, without needing to know the resources (clocks and the maximal constant compared with clocks) of the models. Using C-WTmu as a system specification language, we propose decision algorithms for centralized control and centralized Δ-control; these algorithms can also construct controller models.
6

Haghighitalab, Delaram. "Récepteur radio-logicielle hautement numérisé." Electronic Thesis or Diss., Paris 6, 2015. http://www.theses.fr/2015PA066443.

Abstract:
Nowadays there is an increase in the number of standards being integrated in mobile devices. The main issues are battery life and the size of the device. The idea of a Software Defined Radio is to push the digitization process as close as possible to the antenna. Having most of the circuit in the digital domain allows it to be reconfigurable, thus requiring less area and power consumption. In this thesis, we present the first implementation of a complete SDR receiver based on RF bandpass Sigma-Delta, including a Variable-Gain LNA (VGLNA), an RF subsampled Sigma-Delta ADC, an RF digital down-conversion mixer and a polyphase multi-stage multi-rate decimation filter. The VGLNA enlarges the dynamic range of the multi-standard receiver to achieve the requirements of the three targeted wireless standards. A mixed architecture, using both Source-Coupled Logic (SCL) and CMOS circuits, is also proposed to optimize the power consumption of the RF digital circuits. Moreover, we propose a multi-stage comb filter architecture with polyphase decomposition to reduce the power consumption. The receiver is measured for three different standards in the 2.4 GHz ISM band. Measurement results show that the receiver achieves 79 dB, 73 dB and 63 dB of dynamic range for the Bluetooth, ZigBee and WiFi standards respectively. The complete receiver, implemented in a 130 nm CMOS process, has a 300 MHz tunable central frequency and consumes 63 mW under a 1.2 V supply. Compared to other SDR receivers, the proposed circuit consumes 30% less power, the dynamic range is 21 dB higher, IIP3 is 6 dB higher and the overall Figure of Merit is 24 dB higher.
7

Dagfalk, Johanna, and Ellen Kyhle. "Listening in on Productivity : Applying the Four Key Metrics to measure productivity in a software development company." Thesis, Uppsala universitet, Avdelningen för datalogi, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-440147.

Abstract:
Software development is an area in which companies not only need to keep up with the latest technology, but they additionally need to continuously increase their productivity to stay competitive in the industry. One company currently facing these challenges is Storytel - one of the strongest players on the Swedish audiobook market - with about a fourth of all employees involved with software development, and a rapidly growing workforce. With the purpose of understanding how the Storytel Tech Department is performing, this thesis maps Storytel’s productivity defined through the Four Key Metrics - Deployment Frequency, Delivery Lead Time, Mean Time To Restore and Change Fail Rate. A classification is made into which performance category (Low, Medium, High, Elite) the Storytel Tech Department belongs to through a deep-dive into the raw system data existing at Storytel, mainly focusing on the case management system Jira. A survey of the Tech Department was conducted, to give insights into the connection between human and technical factors influencing productivity (categorized into Culture, Environment, and Process) and estimated productivity. Along with these data collections, interviews with Storytel employees were performed to gather further knowledge about the Tech Department, and to understand potential bottlenecks and obstacles. All Four Key Metrics could be determined based on raw system data, except the metric Mean Time To Restore which was complemented by survey estimates. The generalized findings of the Four Key Metrics conclude that Storytel can be minimally classified as a ‘medium’ performer. The factors, validated through factor analysis, found to have an impact on the Four Key Metrics were Generative Culture, Efficiency (Automation and Shared Responsibility) and Number of Projects. Lastly, the major bottlenecks found were related to Architecture, Automation, Time Fragmentation and Communication. 
The thesis contributes interesting findings from an expanding, middle-sized, healthy company in the audiobook streaming industry, but the results can be beneficial for other software development companies to learn from as well. Performing a similar study with a greater sample size, and additionally enabling comparisons between teams, is suggested for future research.
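Two of the Four Key Metrics the thesis computes (Deployment Frequency and Change Fail Rate) reduce to simple arithmetic over deployment records. A hedged sketch follows, using an invented record schema rather than Storytel's actual Jira data:

```python
from datetime import date


def key_metrics(deployments):
    """Illustrative sketch: derive two of the Four Key Metrics from a
    list of (deploy_date, caused_failure) records -- a hypothetical
    schema invented for this example."""
    if not deployments:
        return {"deploys_per_day": 0.0, "change_fail_rate": 0.0}
    dates = [d for d, _ in deployments]
    days = (max(dates) - min(dates)).days + 1     # observation window
    failures = sum(1 for _, failed in deployments if failed)
    return {
        "deploys_per_day": len(deployments) / days,
        "change_fail_rate": failures / len(deployments),
    }
```

The other two metrics (Delivery Lead Time, Mean Time To Restore) need commit and incident timestamps, which is why the thesis had to complement raw system data with survey estimates for MTTR.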
8

Haghighitalab, Delaram. "Récepteur radio-logicielle hautement numérisé." Thesis, Paris 6, 2015. http://www.theses.fr/2015PA066443.

Abstract:
Nowadays there is an increase in the number of standards being integrated in mobile devices. The main issues are battery life and the size of the device. The idea of a Software Defined Radio is to push the digitization process as close as possible to the antenna. Having most of the circuit in the digital domain allows it to be reconfigurable, thus requiring less area and power consumption. In this thesis, we present the first implementation of a complete SDR receiver based on RF bandpass Sigma-Delta, including a Variable-Gain LNA (VGLNA), an RF subsampled Sigma-Delta ADC, an RF digital down-conversion mixer and a polyphase multi-stage multi-rate decimation filter. The VGLNA enlarges the dynamic range of the multi-standard receiver to achieve the requirements of the three targeted wireless standards. A mixed architecture, using both Source-Coupled Logic (SCL) and CMOS circuits, is also proposed to optimize the power consumption of the RF digital circuits. Moreover, we propose a multi-stage comb filter architecture with polyphase decomposition to reduce the power consumption. The receiver is measured for three different standards in the 2.4 GHz ISM band. Measurement results show that the receiver achieves 79 dB, 73 dB and 63 dB of dynamic range for the Bluetooth, ZigBee and WiFi standards respectively. The complete receiver, implemented in a 130 nm CMOS process, has a 300 MHz tunable central frequency and consumes 63 mW under a 1.2 V supply. Compared to other SDR receivers, the proposed circuit consumes 30% less power, the dynamic range is 21 dB higher, IIP3 is 6 dB higher and the overall Figure of Merit is 24 dB higher.
9

Cachera, David. "Validation formelle des langages à parallélisme de données." Phd thesis, École normale supérieure de Lyon - ENS Lyon, 1998. http://tel.archives-ouvertes.fr/tel-00425390.

Abstract:
Massively parallel computing has developed strongly over the last two decades. Efforts in this area were initially directed at machines rather than at defining languages suited to massive parallelism. Subsequently, two main programming models emerged: control parallelism and data parallelism. The former has been very successful; in this model, however, massively parallel applications prove difficult to design and unreliable, given the large number of processes involved. Data parallelism, by contrast, appears today to be a good compromise between users' needs and the constraints imposed by parallel architectures. In this thesis, we studied the formal validation of data-parallel languages. The idea is to exploit the relative simplicity of this programming model to develop methods similar to those already proven for classical scalar languages. The first part of the work concerns a simple, imperative data-parallel language. We showed that a complete proof system, inspired by Hoare logic, can be defined for this language. The theoretical study also allowed us to define a practical methodology of proof by annotations, similar to that used for scalar languages. We then turned to the recurrence-equation language Alpha. This language needed a richer formal validation framework than the existing transformation system, which only allowed proofs by equivalence. We defined an execution model via an operational semantics, together with a proof methodology that uses invariants, refined from a translation of the program into a logical language until the desired property is obtained.
10

Harani, Yasmina. "Une approche multi-modèles pour la capitalisation des connaissances dans le domaine de la conception." Grenoble INPG, 1997. http://www.theses.fr/1997INPG0178.

Abstract:
This thesis proposes a design-support tool whose main objective is the capitalization of the knowledge involved in designing a product, with a view to reuse. We focused mainly, but not exclusively, on cases of routine design and redesign, so the design processes are considered structured. Our approach applies to the design of products of different kinds (a mechanism, an electric motor, a software system, etc.). We set up a data structure organized into a product model and a design-process model. The product model handles the modelling of the parameters and descriptions of the product to be designed and structures its characteristics into viewpoints (to capitalize the product specification), while the design-process model handles the modelling of the approach, i.e. the successive design steps, using workflow languages (to capitalize the designers' know-how). We built these two models from a set of basic concepts independent of any application domain, making the models generic, by defining three modelling levels: the meta level, the specification level, and the realization level. To illustrate the approach, a software prototype addressing the design activity in electrical engineering, and in particular the design of an asynchronous electric motor, was developed. The prototype runs on PC, implemented in the object-oriented programming language Smalltalk, using its VisualWorks 2.5 environment for the user interface.
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Failed software"

1

American Bar Association, Section of Litigation, ed. Litigating failed software actions. American Bar Association, Section of Litigation, 1993.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Glass, Robert L. Computing calamities: Lessons learned from products, projects, and companies that failed. Prentice Hall, 1999.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

California. Bureau of State Audits. Enterprise licensing agreement: The State failed to exercise due diligence when contracting with Oracle, potentially costing taxpayers millions of dollars. Bureau of State Audits, 2002.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Glass, Robert L. Computing Calamities: Lessons Learned From Products, Projects, and Companies that Failed. Prentice Hall, 1998.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Shah, Chirag D., and Maunak V. Rana. Advances in Dorsal Column Stimulation. Oxford University Press, 2018. http://dx.doi.org/10.1093/med/9780190626761.003.0017.

Full text
Abstract:
Spinal cord stimulation (SCS) has been a long-established therapy for various pain conditions, including low back pain, failed back surgery syndrome, complex regional pain syndrome, and other neuropathic and nociceptive pain states. Since the first report of SCS in 1967 by Shealy, advances have occurred in the technology used to achieve clinical analgesia. Developments in both the hardware and software involved have led to significant improvements in functional specificity, as seen in dorsal root ganglion stimulation, along with increasing breadth and depth of the field of neuromodulation. The patient experience during implantation of the systems, as well as post-procedurally, has been enhanced with improvements in programming. These technological improvements have been validated in quality evidence-based medicine: what was a static area is now a dynamic field, with neuromodulation poised to allow physicians and patients more viable options for better pain control for chronic painful conditions.
APA, Harvard, Vancouver, ISO, and other styles
7

Halpern, Joseph Y. Actual Causality. The MIT Press, 2017. http://dx.doi.org/10.7551/mitpress/9780262035026.001.0001.

Full text
Abstract:
Causality plays a central role in the way people structure the world; we constantly seek causal explanations for our observations. But what does it even mean that an event C “actually caused” event E? The problem of defining actual causation goes beyond mere philosophical speculation. For example, in many legal arguments, it is precisely what needs to be established in order to determine responsibility. The philosophy literature has been struggling with the problem of defining causality since Hume. In this book, Joseph Halpern explores actual causality, and such related notions as degree of responsibility, degree of blame, and causal explanation. The goal is to arrive at a definition of causality that matches our natural language usage and is helpful, for example, to a jury deciding a legal case, a programmer looking for the line of code that caused some software to fail, or an economist trying to determine whether austerity caused a subsequent depression. Halpern applies and expands an approach to causality that he and Judea Pearl developed, based on structural equations. He carefully formulates a definition of causality and, building on this, defines degree of responsibility, degree of blame, and causal explanation. He concludes by discussing how these ideas can be applied to such practical problems as accountability and program verification.
APA, Harvard, Vancouver, ISO, and other styles
8

Dunbar-Hester, Christina. Hacking Diversity. Princeton University Press, 2019. http://dx.doi.org/10.23943/princeton/9780691192888.001.0001.

Full text
Abstract:
Hacking, as a mode of technical and cultural production, is commonly celebrated for its extraordinary freedoms of creation and circulation. Yet surprisingly few women participate in it: rates of involvement by technologically skilled women are drastically lower in hacking communities than in industry and academia. This book investigates the activists engaged in free and open-source software to understand why, despite their efforts, they fail to achieve the diversity that their ideals support. The book shows that within this well-meaning volunteer world, beyond the sway of human resource departments and equal opportunity legislation, members of underrepresented groups face unique challenges. The book explores who participates in voluntaristic technology cultures, to what ends, and with what consequences. Digging deep into the fundamental assumptions underpinning STEM-oriented societies, the book demonstrates that while the preferred solutions of tech enthusiasts—their “hacks” of projects and cultures—can ameliorate some of the “bugs” within their own communities, these methods come up short for issues of unequal social and economic power. Distributing “diversity” in technical production is not equal to generating justice. The book reframes questions of diversity advocacy to consider what interventions might appropriately broaden inclusion and participation in the hacking world and beyond.
APA, Harvard, Vancouver, ISO, and other styles
9

Bonabeau, Eric, Marco Dorigo, and Guy Theraulaz. Swarm Intelligence. Oxford University Press, 1999. http://dx.doi.org/10.1093/oso/9780195131581.001.0001.

Full text
Abstract:
Social insects--ants, bees, termites, and wasps--can be viewed as powerful problem-solving systems with sophisticated collective intelligence. Composed of simple interacting agents, this intelligence lies in the networks of interactions among individuals and between individuals and the environment. A fascinating subject, social insects are also a powerful metaphor for artificial intelligence, and the problems they solve--finding food, dividing labor among nestmates, building nests, responding to external challenges--have important counterparts in engineering and computer science. This book provides a detailed look at models of social insect behavior and how to apply these models in the design of complex systems. The book shows how these models replace an emphasis on control, preprogramming, and centralization with designs featuring autonomy, emergence, and distributed functioning. These designs are proving immensely flexible and robust, able to adapt quickly to changing environments and to continue functioning even when individual elements fail. In particular, these designs are an exciting approach to the tremendous growth of complexity in software and information. Swarm Intelligence draws on up-to-date research from biology, neuroscience, artificial intelligence, robotics, operations research, and computer graphics, and each chapter is organized around a particular biological example, which is then used to develop an algorithm, a multiagent system, or a group of robots. The book will be an invaluable resource for a broad range of disciplines.
APA, Harvard, Vancouver, ISO, and other styles
10

Sutherland, Kathryn. Why Modern Manuscripts Matter. Oxford University Press, 2022. http://dx.doi.org/10.1093/oso/9780192856517.001.0001.

Full text
Abstract:
This is a study of the politics, commerce, and aesthetics of heritage culture in the shape of authors’ manuscripts. Draft manuscripts survive in quantity from the eighteenth century when, with the rise of print, readers learnt to value ‘the hand’ as index of individuality and the blotted page, criss-crossed by deletion and revision, as sign of genius. Since then, collectors have fought over manuscripts, libraries have curated them, the rich have stashed them away in investment portfolios, students have squeezed meaning from them, and we have all stared at them behind exhibition glass. Why do we trade, conserve, and covet manuscripts? Most, after all, are just the stuff left over after the novel or volume of poetry goes into print. This study explores manuscript’s expressive agency and its capacity to provoke passion—a capacity ever more to the fore in the twenty-first century when books are assembled via word-processing software and authors no longer leave in quantity paper trails behind them. It considers manuscripts as residues of meaning that print fails to capture, as fragment art, property, waste paper, and artefacts whose aesthetic dimension becomes apparent with time. It asks what it might mean to re-read print in the shadow of manuscript. Studies of Samuel Johnson, James Boswell, Walter Scott, Frances Burney, Jane Austen, writers from the first great period of manuscript survival, are interspersed with discussions of the Cairo genizah, Katie Paterson’s ‘Future Library’ project, Andy Warhol’s and Muriel Spark’s self-archiving, Cornelia Parker’s reclamation art, and more.
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Failed software"

1

Lauesen, Søren. "Why the Electronic Land Registry Failed." In Requirements Engineering: Foundation for Software Quality. Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-28714-5_1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Rajpal, Mark. "Lessons Learned from a Failed Attempt at Distributed Agile." In Agile Processes, in Software Engineering, and Extreme Programming. Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-33515-5_21.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Pieber, Bernhard, Kerstin Ohler, and Matthias Ehegötz. "University of Vienna’s U:SPACE Turning Around a Failed Large Project by Becoming Agile." In Agile Processes, in Software Engineering, and Extreme Programming. Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-33515-5_19.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Ádám, Zsófia, Dirk Beyer, Po-Chun Chien, Nian-Ze Lee, and Nils Sirrenberg. "Btor2-Cert: A Certifying Hardware-Verification Framework Using Software Analyzers." In Tools and Algorithms for the Construction and Analysis of Systems. Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-57256-2_7.

Full text
Abstract:
Formal verification is essential but challenging: even the best verifiers may produce wrong verification verdicts. Certifying verifiers enhance the confidence in verification results by generating a witness for other tools to validate the verdict independently. Recently, translating the hardware-modeling language Btor2 to software, such as the programming language C or LLVM intermediate representation, has been actively studied and facilitated verifying hardware designs by software analyzers. However, it remained unknown whether witnesses produced by software verifiers contain helpful information about the original circuits and how such information can aid hardware analysis. We propose a certifying and validating framework Btor2-Cert to verify safety properties of Btor2 circuits, combining Btor2-to-C translation, software verifiers, and a new witness validator Btor2-Val, to answer the above open questions. Btor2-Cert translates a software violation witness to a Btor2 violation witness; as the Btor2 language lacks a format for correctness witnesses, we encode invariants in software correctness witnesses as Btor2 circuits. The validator Btor2-Val checks violation witnesses by circuit simulation and correctness witnesses by validation via verification. In our evaluation, Btor2-Cert successfully utilized software witnesses to improve quality assurance of hardware. By invoking the software verifier Cbmc on translated programs, it uniquely solved, with confirmed witnesses, 8% of the unsafe tasks for which the hardware verifier ABC failed to detect bugs.
APA, Harvard, Vancouver, ISO, and other styles
5

Wang, Liang, and Jianxin Zhao. "Testing Framework." In Architecture of Advanced Numerical Analysis Systems. Apress, 2022. http://dx.doi.org/10.1007/978-1-4842-8853-5_11.

Full text
Abstract:
Every proper piece of software requires testing, and Owl is no exception. All too often, we have found that testing can help us discover potential errors we failed to notice during development. In this chapter, we briefly introduce the philosophy of testing in Owl, the tool we use for unit testing, and examples that demonstrate how to write unit tests. Issues such as using functors in tests, along with other things to note when writing test code for Owl, are also discussed in this chapter.
APA, Harvard, Vancouver, ISO, and other styles
6

Carroll, Noel, Finn Olav Bjørnson, Torgeir Dingsøyr, Knut-Helge Rolland, and Kieran Conboy. "Operationalizing Agile Methods: Examining Coherence in Large-Scale Agile Transformations." In Agile Processes in Software Engineering and Extreme Programming – Workshops. Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-58858-8_8.

Full text
Abstract:
Following the highly pervasive and effective use of agile methods for software development, attention has now turned to the much more difficult challenge of applying these methods in large-scale, organization-wide development. However, identifying to what extent certain factors influence success and failure of sustaining large-scale agile transformations remains unclear, and there is a lack of theoretical frameworks to guide such investigations. By adopting Normalization Process Theory and specifically ‘coherence’, we compare two large-scale agile transformation case studies and the different perspectives individuals and teams had when faced with the problem of operationalizing the agile method as part of their large-scale agile transformation. The key contributions of this work are: (i) this is a first attempt to present the results of a comparison between successful and failed large-scale agile transformations; and (ii) we describe the challenges in understanding the rationale, differences, value, and roles associated with the methods to support the large-scale agile transformation. We also present future research for practitioners and academics on large-scale agile transformation.
APA, Harvard, Vancouver, ISO, and other styles
7

Rapp, Christian, Till Heilmann, and Otto Kruse. "Beyond MS Word: Alternatives and Developments." In Digital Writing Technologies in Higher Education. Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-36033-6_3.

Full text
Abstract:
Microsoft Word, the word processing software developed by Microsoft in 1983, established itself as the market leader in the 1990s and 2000s and remained the gold standard for many years. Despite its obvious benefits, it always faced criticism from various quarters. We address the persistent criticism that MS Word is overloaded with features and distracts from writing rather than facilitating it. Alternatives, mainly distraction-free editors and text editors for use with a markup language, are briefly reviewed and compared to MS Word. A serious challenger emerged in 2006 with Google Docs, cloud-based writing software that has moved text production into the platform era, enabling files to be shared and creating collaborative writing spaces. Even though Google Docs failed to break the dominance of MS Word, it became the trend-setter in online writing. Microsoft and Apple soon followed by designing complex web environments for institutions and companies rather than individual writers. We give an overview of technologies that have evolved to challenge the supremacy of MS Word or compete for market share. By this, we hope to provide clues as to the future development of word processing.
APA, Harvard, Vancouver, ISO, and other styles
8

Schrijvers, Erik, Corien Prins, and Reijer Passchier. "Conclusions and Recommendations." In Research for Policy. Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-77838-5_5.

Full text
Abstract:
On 24 June 2019, an hour-long outage hit the Dutch emergency number 112 and 0900–8844, the national police telephone line. It was also impossible to contact hospitals, municipalities, and companies for some time. The primary system of KPN – the telecom provider – was out of action while three back-up systems failed. The incident, which according to KPN was probably due to a software error, once again revealed the vulnerability of facilities in the physical world to digital failures. It also underlined the report’s central message: the need to be better prepared for incidents involving a digital dimension. These incidents are all the more critical when they are not limited to the digital domain, but have potentially disruptive consequences in the physical world and for confidence in the core institutions of society.
APA, Harvard, Vancouver, ISO, and other styles
9

Metta, Ravindra, Raveendra Kumar Medicherla, and Hrishikesh Karmarkar. "VeriFuzz: Good Seeds for Fuzzing (Competition Contribution)." In Fundamental Approaches to Software Engineering. Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-99429-7_20.

Full text
Abstract:
We present VeriFuzz 1.2 with two new enhancements: (1) unroll the given program to a short depth and use BMC to produce incomplete test inputs, which are extended into complete inputs, and (2) if BMC fails for this short unrolling, automatically identify the reason and rerun BMC with a corresponding remedial strategy.
APA, Harvard, Vancouver, ISO, and other styles
10

Pérez Pupo, Iliana, Pedro Y. Piñero Pérez, Roberto García Vacacela, Rafael Bello, and Luis Alvarado Acuña. "Discovering Fails in Software Projects Planning Based on Linguistic Summaries." In Rough Sets. Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-52705-1_27.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Failed software"

1

Haynes, Robert, Ghanashyam Joshi, and Natasha Bradley. "Constant Stress Amplitude Fatigue Crack Growth in Notch Pre-cracked Aluminum 7075-T6 Rivet Hole." In Vertical Flight Society 74th Annual Forum & Technology Display. The Vertical Flight Society, 2018. http://dx.doi.org/10.4050/f-0074-2018-12879.

Full text
Abstract:
Constant stress amplitude fatigue tests were conducted on notch pre-cracked Aluminum 7075-T6 rivet-hole dog-bone coupons. Monitoring of visible surface crack length, by special surface engraving using digital microscope images and by ultrasonic sensor signals, was carried out to yield fatigue crack length measurements in relation to the number of fatigue cycles applied. The experimental results provided fatigue crack length validation for the ultrasonic sensor measurements. Fractographic examination of failed fatigue surfaces yielded further confirmation of notch pre-crack length and crack growth marker bands. These experimental inputs were used in the NASGRO and AFGROW software applications for fatigue crack growth simulation. A comparative analysis of simulated and experimental fatigue crack growth results is presented in this paper. Opportunities for further work are discussed.
APA, Harvard, Vancouver, ISO, and other styles
2

Benson, Furdon E. (Ham). "Local Stresses in Fiberglass Pipe at Supports." In CORROSION 1999. NACE International, 1999. https://doi.org/10.5006/c1999-99398.

Full text
Abstract:
The installation of non-metallic piping systems to handle process fluids, utilities, and waste streams is an everyday occurrence in the chemical processing industry. Many of these installations replace existing metallic piping systems that have failed due to internal or external corrosion. The design of these non-metallic systems generally parallels the process used in metallic pipe design but with an increased factor of ignorance (safety factor). And, unfortunately, most of the design criteria specified for such systems are based on a "metal mind set", especially on retrofit or replacement type piping systems. There are various software packages that address stress analysis of piping systems based on physical properties and geometry input by the user. These packages do not calculate the local stresses at supports other than the general effects of longitudinal beam bending. This paper will review "effective length" design methods for the determination of local pipe stresses caused by externally applied support loads. Results will be compared with those predicted by one of the available finite element software packages.
APA, Harvard, Vancouver, ISO, and other styles
3

Roarty, D. H., W. T. Bogard, W. M. Cox, D. C. A. Moore, and G. P. Quirk. "Corrosion Surveillance Applications for Nuclear Power Plant Systems." In CORROSION 1993. NACE International, 1993. https://doi.org/10.5006/c1993-93192.

Full text
Abstract:
Conventional corrosion monitoring instrumentation does not meet the performance criteria demanded in modern nuclear power systems. In consequence, it has failed to find widespread use in process control instrumentation. Two recent technological advances are combining to improve corrosion control and plant service life. The first was the development of on-line plant diagnostics and monitoring software which reappraised control information to provide operators and maintenance engineers with actionable plant condition information. This development is enhanced by the incorporation of electrochemical corrosion data obtained by the use of modern corrosion instrumentation. The corrosion instrumentation offers improved capabilities, in terms of better sensitivity, capability to detect and characterise localized corrosion, and ability to function in very low conductivity solutions (e.g. feedwater and condensates). The philosophy and approach employed in recent developments are presented. A summary of the operational, safety and cost benefits to be gained is also given. It is considered that the approach could mark the way for plant and materials performance improvements for the next 20 years and beyond.
APA, Harvard, Vancouver, ISO, and other styles
4

Papavinasam, Sankara, and Alex Doiron. "A 5-M Approach to Control External Pipeline Corrosion." In CORROSION 2010. NACE International, 2010. https://doi.org/10.5006/c2010-10062.

Full text
Abstract:
This paper presents a 5-M approach to the control of external pipeline corrosion. This approach includes: Mitigation, Modeling, Monitoring, Maintenance, and Management.
Mitigation: The pipeline coating is the first line of defence against external pipeline corrosion. If it fails, the cathodic protection (CP) system acts as a back-up, protecting those areas where the coating has failed. The type of coating on the pipe has an effect on the formation of the environment that causes corrosion and stress-corrosion cracking (SCC). Based on the coating used, on more than 175 standards, and on the results obtained in those standard tests, the corrosion rate of a pipeline protected by the coating is projected.
Model: Based on field operating conditions, the corrosion rate is adapted. Most of the data required in this process are the data required in the pre-assessment step of the NACE External Corrosion Direct Assessment (ECDA) and NACE Stress-Corrosion Cracking Direct Assessment (SCCDA) standards.
Monitoring: Using the above-ground survey results, the corrosion rate is validated. Most of the data required in this process are the data required in the indirect assessment of ECDA and SCCDA, or data required in the Canadian Energy Pipeline Association (CEPA) SCC recommended practice. This process in addition integrates the inline inspection (ILI) data, if available. Based on the below-ground measurements, the corrosion rate is further verified.
Maintenance: Proper maintenance of the pipeline prolongs its life expectancy. From the corrosion rate, the remaining life of the pipeline is calculated as described in the post-assessment process of ECDA and SCCDA.
Management: Freeware software to use this approach is available to integrate the processes as well as to manage the external corrosion of pipelines.
APA, Harvard, Vancouver, ISO, and other styles
5

John, Gareth, David Buxton, and Ian Cotton. "Polarisation Modelling as Part of Galvanic Anode Upgrade of a Hybrid Offshore Cathodic Protection System." In CORROSION 2007. NACE International, 2007. https://doi.org/10.5006/c2007-07081.

Full text
Abstract:
This paper describes the development of a computer model to evaluate the performance of replacement galvanic anodes for a 25-year-old offshore North Sea platform. The platform had a hybrid cathodic protection system comprising impressed current anodes mounted around the structure as well as aluminum-zinc-indium galvanic anodes. Over a period of time, a number of the impressed current anodes had failed, which, together with natural consumption of the galvanic anodes, had started to result in under-polarization for sections of the structure. Consequently, a cathodic protection upgrade was started, incorporating retrofitting of new galvanic anodes to the structure. In order to determine the optimum location for the new anodes, a computer model was developed utilizing an iterative approach with an industry standard earthing (grounding) software model and application of polarization data. The model was first set up with the as-found condition and calibrated against subsea survey data. Once calibrated, the impact of the proposed galvanic anode locations was assessed as well as various other “what-if” scenarios, including failure of remaining impressed current anodes. The paper highlights problems encountered with the modeling exercise, in particular due to lack of reliable data in some areas, and how this was overcome.
APA, Harvard, Vancouver, ISO, and other styles
6

Bachwani, Rekha, Olivier Crameri, Ricardo Bianchini, Dejan Kostic, and Willy Zwaenepoel. "Sahara: Guiding the debugging of failed software upgrades." In 2011 IEEE 27th International Conference on Software Maintenance (ICSM). IEEE, 2011. http://dx.doi.org/10.1109/icsm.2011.6080793.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Hsueh, Chien-Hsin, Yung-Pin Cheng, and Wei-Cheng Pan. "Intrusive Test Automation with Failed Test Case Clustering." In 2011 18th Asia Pacific Software Engineering Conference. IEEE, 2011. http://dx.doi.org/10.1109/apsec.2011.31.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Shu, Ting, Lei Wang, and Jinsong Xia. "Fault Localization Using a Failed Execution Slice." In 2017 International Conference on Software Analysis, Testing and Evolution (SATE). IEEE, 2017. http://dx.doi.org/10.1109/sate.2017.13.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Huang, Li, Bertrand Meyer, and Manuel Oriol. "Improving Counterexample Quality from Failed Program Verification." In 2022 IEEE International Symposium on Software Reliability Engineering Workshops (ISSREW). IEEE, 2022. http://dx.doi.org/10.1109/issrew55968.2022.00078.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Zhang, Sai, Cheng Zhang, and Michael D. Ernst. "Automated documentation inference to explain failed tests." In 2011 26th IEEE/ACM International Conference on Automated Software Engineering (ASE). IEEE, 2011. http://dx.doi.org/10.1109/ase.2011.6100145.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Failed software"

1

Kemper, Bart. Developing the Role of the System Software Integrator to Mitigate Digital Infrastructure Vulnerabilities. SAE International, 2023. http://dx.doi.org/10.4271/epr2023028.

Full text
Abstract:
Traditional physical infrastructure increasingly relies upon software. Yet 75% of software projects fail, exceeding budget by 46% and schedule by 82%. While other systems generally have a “responsible-in-charge” (RIC) professional, the implementation of a similar system of accountability in software is not settled. This is a major concern, as the consequences of software failure can be a matter of life or death. Further, there has been a 742% average annual increase in software supply chain attacks on increasingly used open-source software over the past three years, which can cost up to millions of dollars per incident. Developing the Role of the System Software Integrator to Mitigate Digital Infrastructure Vulnerabilities discusses the verification, validation, and uncertainty quantification needed to vet systems before implementation and the continued maintenance measures required over the lifespan of software-integrated assets. It also proposes a certified System Software Integrator role that would be responsible for public safety in traditional infrastructure.
APA, Harvard, Vancouver, ISO, and other styles
2

Leis, Brian. L51794A Failure Criterion for Residual Strength of Corrosion Defects in Moderate to High Toughness Pipe. Pipeline Research Council International, Inc. (PRCI), 2000. http://dx.doi.org/10.55274/r0011253.

Full text
Abstract:
This project extends the investigation of the remaining strength of blunt and sharp flaws in pipe to develop a new, simple equation, known as PCORRC, for predicting the remaining strength of corrosion defects in moderate- to high-toughness steels that fail by the mechanism of plastic collapse. This report summarizes the development of this criterion, which began with the enhancement of a special-purpose, analytical, finite-element-based software model (PCORR) for analyzing complex loadings on corrosion and other blunt defects. The analytical tool was then used to compare the influence of different variables on the behavior of blunt corrosion defects and to develop an equation to reliably and conservatively predict failure of corrosion defects in moderate- to high-toughness steels. The PCORR software and the PCORRC equation have been compared against the experimental database and have been shown to reduce excess conservatism in predicting failure of actual corrosion defects that were likely to have been controlled by the plastic collapse mechanism. Because of the general nature and theoretical foundation of these developments, both the software tool and the equation can be extended in future work to develop similar criteria for combinations of defects and loadings not addressed by this version of the PCORRC equation such as interaction of separated adjacent defects and axial loads on defects.
APA, Harvard, Vancouver, ISO, and other styles
3

Leis, B. N., and N. D. Ghadiali. L51720 Pipe Axial Flaw Failure Criteria - PAFFC Version 1.0 Users Manual and Software. Pipeline Research Council International, Inc. (PRCI), 1994. http://dx.doi.org/10.55274/r0011357.

Full text
Abstract:
In the early 1970s, the Pipeline Research Council International, Inc. (PRCI) developed a failure criterion for pipes that had a predominately empirical basis. This criterion was based on flaw sizes that existed prior to pressurization and did not address possible growth due to the pressure in service, in a hydrostatic test, or during the hold time at pressure in a hydrotest. So long as that criterion was used within the scope of the underlying database and empirical calibration, the results of its predictions were reasonably accurate. However, with the advent of newer steels and the related increased toughness that supported significant stable flaw growth, it became evident that this criterion should be updated. This updating led to the PRCI ductile flaw growth model (DFGM) that specifically accounted for the stable growth observed at flaws controlled by the steel's toughness and a limit-states analysis that addressed plastic collapse at the flaw. This capability provided an accurate basis to assess flaw criticality in pipelines and also the means to develop hydrotest plans on a pipeline-specific basis. Unfortunately, this enhanced capability came at the expense of increased complexity that made this new capability difficult to use on a day-to-day basis. To counter this complexity, this capability has been recast in the form of a PC computer program. Benefit: This topical report contains the computer program and technical manual for a failure criterion that will predict the behavior of an axially oriented, partially through-the-wall flaw in a pipeline. The model has been given the acronym PAFFC, which stands for Pipe Axial Flaw Failure Criteria. PAFFC is an extension of a previously developed ductile flaw growth model, L51543, and can account for both a flaw's time-dependent growth under pressure as well as its unstable growth leading to failure.
As part of the output, the user is presented with a graphical depiction of the flaw sizes, in terms of combinations of flaw length and depth, that will fail (or survive) a given operating or test pressure. As compared to existing criteria, this model provides a more accurate prediction of flaw behavior for a broad range of pipeline conditions.
APA, Harvard, Vancouver, ISO, and other styles