
Dissertations / Theses on the topic 'Estimation software'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Estimation software.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Jhunjhunwala, Manish. "Software tool for reliability estimation." Morgantown, W. Va. : [West Virginia University Libraries], 2001. http://etd.wvu.edu/templates/showETD.cfm?recnum=1801.

Full text
Abstract:
Thesis (M.S.)--West Virginia University, 2001.
Title from document title page. Document formatted into pages; contains x, 125 p. : ill. (some col.). Includes abstract. Includes bibliographical references (p. 72-74).
APA, Harvard, Vancouver, ISO, and other styles
2

Park, In Kyoung, Dan C. Boger, and Michael G. Sovereign. "Software cost estimation through Bayesian inference of software size." Thesis, Monterey, California. Naval Postgraduate School, 1985. http://hdl.handle.net/10945/21547.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Wang, Zhihua 1970. "Value estimation for software development processes." Thesis, McGill University, 2004. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=81576.

Full text
Abstract:
The management of software development processes is a continual challenge facing software development organizations. Previous studies used "flexible models" and empirical methods to optimize software development processes. In this thesis, the expected payoff is used to quantitatively evaluate processes. Payoff can be defined as the value of a team member's action, and the expected payoff combines the value of the payoff of a team member's action and the probability of taking that action. The mathematical models of a waterfall process and two flexible processes are evaluated in terms of total maximum expected payoff. The results show which process is more valuable under which conditions. An overview of this work and its results will be presented in this seminar.
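The expected-payoff idea in this abstract reduces to a probability-weighted sum of action values. The sketch below shows how two processes could be compared by total expected payoff; the action probabilities and payoff values are purely illustrative and are not taken from the thesis.

```python
# Illustrative sketch: compare two development processes by total expected
# payoff, where each available action has a payoff (value) and a probability
# of being taken. All numbers are invented for demonstration only.

def total_expected_payoff(actions):
    """Sum of probability-weighted payoffs over a team member's actions."""
    return sum(prob * value for prob, value in actions)

# (probability, payoff) pairs for the actions available in each process
waterfall_actions = [(0.7, 100.0), (0.2, 40.0), (0.1, -20.0)]
flexible_actions = [(0.5, 120.0), (0.3, 60.0), (0.2, -10.0)]

waterfall = total_expected_payoff(waterfall_actions)
flexible = total_expected_payoff(flexible_actions)

print(f"waterfall expected payoff: {waterfall:.1f}")
print(f"flexible expected payoff:  {flexible:.1f}")
print("more valuable:", "flexible" if flexible > waterfall else "waterfall")
```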
APA, Harvard, Vancouver, ISO, and other styles
4

Strike, Kevin D. "Software cost estimation with incomplete data." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/mq64461.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Fernández-Ramil, Juan Carlos. "Continual resource estimation for evolving software." Thesis, Imperial College London, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.404810.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Hamdan, Khaled. "An investigation into software estimation methods." Thesis, University of Sunderland, 2009. http://sure.sunderland.ac.uk/3557/.

Full text
Abstract:
There are currently no fully validated estimation approaches that can accurately predict the effort needed for developing a software system (Kitchenham et al., 1995). Information gathered at the early stages of system development is not enough to provide precise effort estimates, even though similar software systems may have been developed in the past. Where similar systems have been developed, there are often inherent differences in the features of these systems and in the development process used. These differences are often sufficient to significantly reduce estimation accuracy. Historically, cost estimation focuses on project effort and duration. There are many estimation techniques, but none is consistently ‘best’ (Shepperd, 2003). Software project management has become a crucial field of research due to the increasing role of software in today’s world. Improving the functions of project management is a main concern in software development organisations. The purpose of this thesis is to develop a new model which incorporates cultural and leadership factors in the cost estimation model, and is based on Case-Based Reasoning. The thesis defines a new knowledge representation “ontology” to provide a common understanding of project parameters. The associated system uses a statistically simulated bootstrap method, which helps in tuning the analogy approach before application to real projects. This research also introduces a new application of Profile Theory, which takes a formal approach to the measurement of leadership capabilities. A pilot study was performed in order to understand the approaches used for cost estimation in the Gulf region. Based on this initial study, a questionnaire was further refined and tested. Consequently, further surveys were conducted in the United Arab Emirates. It was noticed that most of the software development projects failed in terms of cost estimation. This was due to the lack of a precise software estimation model. These studies also highlighted the importance of leadership and culture in software cost estimation. Effort was estimated using regression and analogy. The bootstrap method was used to refine the estimate of effort based on analogy, with correction for bias. Due to the very different nature of the core and support systems, a separate model was developed for each of them. As a result of the study, a new model for identifying and analysing these factors was developed. The model was then evaluated, and conclusions were drawn. These show the importance of the model and of the factors of organisational culture and leadership in software project development and in cost estimation. Potential areas for future research were identified.
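The combination of analogy-based estimation with a bootstrap bias correction mentioned above can be illustrated with a minimal sketch. The project data, the single nearest-neighbour analogy rule, and the bias-correction step below are assumptions for demonstration only, not the thesis's actual model.

```python
# Minimal sketch: a nearest-neighbour analogy effort estimate with a bootstrap
# bias correction. Historical projects and features are illustrative.
import random

history = [  # (size_kloc, team_size, actual_effort_person_months)
    (10, 3, 24), (12, 4, 30), (20, 5, 55), (8, 2, 18),
    (15, 4, 40), (25, 6, 70), (18, 5, 50), (9, 3, 22),
]

def analogy_estimate(projects, target):
    """Effort of the most similar project (Euclidean distance on features)."""
    def dist(p):
        return ((p[0] - target[0]) ** 2 + (p[1] - target[1]) ** 2) ** 0.5
    return min(projects, key=dist)[2]

def bootstrap_corrected(projects, target, reps=2000, seed=1):
    """Bias-corrected estimate: original - (mean bootstrap estimate - original)."""
    rng = random.Random(seed)
    original = analogy_estimate(projects, target)
    boot = [analogy_estimate([rng.choice(projects) for _ in projects], target)
            for _ in range(reps)]
    bias = sum(boot) / reps - original
    return original - bias

new_project = (14, 4)
print("raw analogy estimate:", analogy_estimate(history, new_project))
print("bootstrap bias-corrected:", round(bootstrap_corrected(history, new_project), 1))
```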
APA, Harvard, Vancouver, ISO, and other styles
7

Shepherd, Kristen Piggott. "A Comparison of Coalescent Estimation Software." Diss., Brigham Young University, 2002. http://contentdm.lib.byu.edu/ETD/image/etd145.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Oyamada, Marcio Seiji. "Software performance estimation in MPSoC design." Biblioteca Digital de Teses e Dissertações da UFRGS, 2007. http://hdl.handle.net/10183/12674.

Full text
Abstract:
Nowadays, embedded system complexity requires new design methodologies. System-level methodologies are proposed to cope with this complexity, starting the design above the register-transfer level. Performance estimation tools are an important piece of system-level design methodologies, since they are used to aid design space exploration at an early design stage. The goal of this thesis is to define an integrated methodology for software performance estimation. Currently, embedded software usage is increasing, with multiprocessor systems-on-chip becoming a common solution to cope with flexibility, performance, and power requirements. The development of accurate software performance estimators is not trivial, due to the increased complexity of embedded processors. To drive processor selection at the specification level, a novel analytic software performance estimator based on neural networks is proposed. The neural network enables a fast estimation at an early design stage. To target software performance analysis at the bus-functional level, where the mapping of the hardware and software components is already established, we use a global simulation model supporting performance profiling. The proposed software performance estimation methodology is linked to a hardware and software interface refinement environment named ROSES. The proposed methodology is evaluated through a case study of a multiprocessor MPEG4 encoder.
APA, Harvard, Vancouver, ISO, and other styles
9

Henry, Troy Steven. "Architecture-Centric Project Estimation." Thesis, Virginia Tech, 2007. http://hdl.handle.net/10919/32756.

Full text
Abstract:
In recent years studies have been conducted which suggest that taking an architecture-first approach to managing large software projects can reduce a significant amount of the uncertainty present in project estimates. As the project progresses, more concrete information is known about the planned system and less risk is present. However, the rate at which risk is alleviated varies across the life-cycle. Research suggests that there exists a significant drop-off in risk when the architecture is developed. Software risk assessment techniques have been developed which attempt to quantify the amount of risk that varying uncertainties convey to a software project. These techniques can be applied to architecture-specific issues to show that in many cases, conducting an architecture-centric approach to development will remove more risk than the cost of developing the architecture. By committing to developing the architecture prior to the formal estimation process, specific risks can be more tightly bounded, or even removed from the project. The premise presented here is that through the process of architecture-centric management, it is possible to remove substantial risk from the project. This decrease in risk exceeds that at other phases of the life-cycle, especially in comparison to the effort involved. Notably, at the architecture stage, a sufficient amount of knowledge is gained by which effort estimations may be tightly bounded, yet the project is early enough in the life-cycle for proper planning and scheduling. Thus, risk is mitigated through the increase in knowledge and the ability to maintain options at an early point. Further, architecture development and evaluation has been shown to incorporate quality factors normally insufficiently considered in the system design. The approach taken here is to consider specific knowledge gained through the architecting process and how this is reflected in parametric effort estimation models. This added knowledge is directly reflected in risk reduction. Drawing on the experience of architecture researchers as well as project managers employing this approach, this thesis considers what benefits to the software development process are gained by taking this approach. Noting a strong reluctance of owners to incorporate solid software engineering practices, the thesis concludes with an outline for an experiment intended to show that the reduction in risk at the architecture stage exceeds the cost of that development.
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
10

Leinonen, J. (Juho). "Evaluating software development effort estimation process in agile software development context." Master's thesis, University of Oulu, 2016. http://urn.fi/URN:NBN:fi:oulu-201605221862.

Full text
Abstract:
This thesis studied effort estimation in software development, focusing on task-level estimation done in Scrum teams. The thesis was done at Nokia Networks and the motivation for this topic came from the poor estimation accuracy that has been found to be present in software development. The aim of this thesis was to provide an overview of the current state of the art in effort estimation, survey the current practices present in Scrum teams working on the LTE L2 software component at Nokia Networks Oulu, and then present suggestions for improvement based on the findings. On the basis of the literature review, three main categories of effort estimation methods were found: expert estimation, algorithmic models and machine learning. Universally there did not seem to be a single best method; instead the differences come from the context of use. Algorithmic models and machine learning require data sets, whereas expert estimation methods rely on previous experiences and the intuition of the experts. While model-based methods have received a lot of research attention, the industry has largely relied on expert estimation. The current state of effort estimation at Nokia Networks was studied by conducting a survey. This survey was built on previous survey studies that were found by conducting a systematic literature review. The questions found in the previous studies were formulated into a questionnaire, which was then used to survey the current effort estimation practices present in the participating teams. 41 people out of 100 in the participating teams participated in the survey. Survey results showed that, like much of the software industry, the teams in LTE L2 relied on expert estimation methods. Most respondents had encountered overruns in the last sprint and the most often provided reason was that testing-related effort estimation was hard. Forgotten subtasks were encountered frequently and requirements were found to be both unclear and to change often. Very few had had any training in effort estimation. There were no common practices for effort data collection and as such, it was mostly not done. By analyzing the survey results and reflecting them on the previous research, five suggestions for improvement were found. These were training in effort estimation, improving the information that is used during effort estimation by collaborating with specification personnel, improving testing-related effort estimation by splitting acceptance testing into its own tasks, collecting and using effort data, and using Planning Poker as an effort estimation method, as it fit the context of estimation present in the teams. The study shed light on how effort estimation is done in the software industry. Another contribution was the improvement suggestions, which could potentially improve the situation in the teams that participated in the survey. A third contribution was the questionnaire built during this study, as it could potentially be used to survey the current state of effort estimation in other contexts as well.
APA, Harvard, Vancouver, ISO, and other styles
11

Nabi, Mina. "A Software Benchmarking Methodology For Effort Estimation." Master's thesis, METU, 2012. http://etd.lib.metu.edu.tr/upload/12614648/index.pdf.

Full text
Abstract:
Software project managers usually use benchmarking repositories to estimate the effort, cost, and duration of software development, which will be used to appropriately plan, monitor and control the project activities. In addition, the precision of benchmarking repositories is a critical factor in the software effort estimation process, which subsequently plays a critical role in the success of the software development project. In order to construct such a precise benchmarking data repository, it is important to have defined benchmarking data attributes and data characteristics and to have collected project data accordingly. On the other hand, studies show that the data characteristics of benchmark data sets have an impact on generalizing the studies which are based on using these datasets. The quality of a data repository depends not only on the quality of the collected data, but also on how these data are collected. In this thesis, a benchmarking methodology is proposed for organizations to collect benchmarking data for effort estimation purposes. This methodology consists of three main components: benchmarking measures, benchmarking data collection processes, and a benchmarking data collection tool. This approach also draws on the results of previous studies from the literature. In order to verify and validate the methodology, project data were collected in two medium-sized software organizations and one small organization by using the automated benchmarking data collection tool. Also, effort estimation models were constructed and evaluated for these projects' data, and the impact of different project characteristics on the effort estimation models was inspected.
APA, Harvard, Vancouver, ISO, and other styles
12

Schneider, Gary David. "A requirements specification software cost estimation tool." Thesis, Kansas State University, 1986. http://hdl.handle.net/2097/9952.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

Usman, Muhammad. "Supporting Effort Estimation in Agile Software Development." Licentiate thesis, Karlskrona, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-10961.

Full text
Abstract:
Background: In Agile Software Development (ASD) planning is valued more than the resulting plans. Planning and estimation are carried out at multiple levels in ASD. Agile plans and estimates are frequently updated to reflect the current situation. This supports shorter release cycles and flexibility to incorporate changing market and customer needs. Many empirical studies have been conducted to investigate effort estimation in ASD. However, the evidence on effort estimation in ASD has not been aggregated and organized. Objective: This thesis has two main objectives: first, to identify and aggregate evidence, from both literature and industry, on effort estimation in ASD; second, to support research and practice on effort estimation in ASD by organizing the identified knowledge. Method: In this thesis we conducted a Systematic Literature Review (SLR), a systematic mapping study, a questionnaire-based industrial survey and an interview-based survey. Results: The SLR and survey results showed that agile teams estimate effort, mostly during release and iteration planning, using techniques that are based on experts' subjective assessments. During effort estimation, team-related cost drivers, such as team members' expertise, are considered important. The results also highlighted that implementation and testing are the only activities that are accounted for in effort estimates by most agile teams. Our mapping study identified that taxonomies in software engineering (SE) are mostly designed and presented in an ad-hoc manner. To fill this gap we updated an existing method to design taxonomies in a systematic way. The method is then used to design a taxonomy of effort estimation in ASD using the evidence identified in our SLR and survey as input. Conclusions: The proposed taxonomy is evaluated by characterizing effort estimation cases of selected agile projects reported in the literature. The evaluation found that the reporting of the selected studies lacks information related to the context and predictors used during effort estimation in ASD. The taxonomy can be used to consistently report effort estimation studies in ASD to facilitate identification, aggregation and analysis of the evidence. The proposed taxonomy was also used to characterize the effort estimation activity of agile teams in three different software companies. The proposed taxonomy was found to be useful by the interviewed agile practitioners for documenting important effort estimation related knowledge, which would otherwise remain tacit in most cases.
APA, Harvard, Vancouver, ISO, and other styles
14

Jain, Achin. "Software defect content estimation: A Bayesian approach." Thesis, University of Ottawa (Canada), 2005. http://hdl.handle.net/10393/26932.

Full text
Abstract:
Software inspection is a method to detect errors in software artefacts early in the development cycle. At the end of the inspection process the inspectors need to decide whether the inspected artefact is of sufficient quality or not. Several methods have been proposed to assist in making this decision, such as capture-recapture methods and Bayesian approaches. In this study these methods have been analyzed and compared and a new Bayesian approach for software inspection is proposed. All of the estimation models rely on an underlying assumption that the inspectors are independent. However, this assumption of independence is not necessarily true in practice, as most inspection teams interact with each other and share their findings. We therefore studied a new Bayesian model for defect estimation in which the inspectors share their findings, and compared it with the Bayesian model (Gupta et al. 2003) in which inspectors examine the artefact independently. The simulations were carried out under realistic software conditions with a small number of difficult defects and a few inspectors. The models were evaluated on the basis of decision accuracy and median relative error, and our results suggest that the dependent-inspector assumption improves the decision accuracy (DA) over the previous Bayesian model and CR models.
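As background for the capture-recapture (CR) methods mentioned above, the classical two-inspector Lincoln-Petersen estimate of total defect content is short enough to show directly. The inspection counts below are illustrative, and this is the textbook estimator, not the thesis's Bayesian model.

```python
# Two-inspector capture-recapture (Lincoln-Petersen) sketch: estimate the
# total number of defects from the overlap between two inspectors' findings.
# Counts are illustrative, not taken from the thesis.

def lincoln_petersen(n1, n2, m):
    """n1, n2: defects found by inspector 1 and 2; m: defects found by both."""
    if m == 0:
        raise ValueError("no overlap: the estimator is undefined")
    return n1 * n2 / m

found_by_a, found_by_b, found_by_both = 14, 11, 7
total_est = lincoln_petersen(found_by_a, found_by_b, found_by_both)
found_so_far = found_by_a + found_by_b - found_by_both
print(f"estimated total defects: {total_est:.1f}")
print(f"estimated remaining defects: {total_est - found_so_far:.1f}")
```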
APA, Harvard, Vancouver, ISO, and other styles
15

Alexander, Byron Vernon Terry. "Legacy system upgrade for software risk assessment." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2001. http://handle.dtic.mil/100.2/ADA401409.

Full text
Abstract:
Thesis (M.S. in Computer Science)--Naval Postgraduate School, December 2001.
Thesis Advisor(s): Berzins, Valdis ; Murrah, Michael. "December 2001." Includes bibliographical references (p. 91). Also available online.
APA, Harvard, Vancouver, ISO, and other styles
16

Chu, Xiaoyuan. "Improving Estimation Accuracy using Better Similarity Distance in Analogy-based Software Cost Estimation." Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-246116.

Full text
Abstract:
Software cost estimation nowadays plays a more and more important role in practical projects, since modern software projects have become more complex as well as more diverse. To help estimate software development cost accurately, this research carries out a systematic analysis of the similarity distances used in analogy-based software cost estimation and, based on this, a new non-orthogonal space distance (NoSD) is proposed as a measure of the similarities between real software projects. Different from currently adopted measures like the Euclidean distance, this non-orthogonal space distance not only allows different features to have different importance for cost estimation, but also assumes project features to have non-orthogonal, dependent relationships, whereas the Euclidean distance treats them as independent of each other. Based on such assumptions, the NoSD method describes the non-orthogonal angles between feature axes using feature redundancy and represents the feature weights using feature relevance, where both redundancy and relevance are defined in terms of mutual information. Based on this non-orthogonal space distance, it can better reveal the real dependency relationships between real-life software projects. Experiments show that it brings up to a 13.1% decrease in MMRE and a 12.5% increase in PRED(0.25) on the ISBSG R8 dataset, and 7.5% and 20.5% respectively on the Desharnais dataset. Furthermore, to make it better fit the complex data distribution of real-life software project data, this research leverages the particle swarm optimization algorithm to optimize the proposed non-orthogonal space distance and proposes a PSO-optimized non-orthogonal space distance (PsoNoSD), which brings further improvement in estimation accuracy. As shown in experiments, compared with the normally used Euclidean distance, PsoNoSD improves the estimation accuracy by 38.73% and 11.59% in terms of MMRE and PRED(0.25) on the ISBSG R8 dataset. On the Desharnais dataset, the improvements are 23.38% and 24.94% respectively. In summary, the new methods proposed in this research, which are based on theoretical study as well as systematic experiments, solve some problems of currently used techniques and show a great ability to notably improve software cost estimation accuracy.
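The accuracy measures used above, MMRE and PRED(0.25), are standard and easy to sketch, together with a feature-weighted distance that stands in for (and is much simpler than) the proposed NoSD measure. The weights and project figures below are illustrative assumptions.

```python
# Sketch of the accuracy measures used above (MMRE, PRED(0.25)) plus a
# feature-weighted distance, a simplified stand-in for the proposed NoSD
# measure. All numbers are illustrative.

def mre(actual, estimate):
    return abs(actual - estimate) / actual

def mmre(actuals, estimates):
    return sum(mre(a, e) for a, e in zip(actuals, estimates)) / len(actuals)

def pred(actuals, estimates, level=0.25):
    hits = sum(1 for a, e in zip(actuals, estimates) if mre(a, e) <= level)
    return hits / len(actuals)

def weighted_distance(x, y, weights):
    """Weighted Euclidean distance; weights play the role of feature relevance."""
    return sum(w * (a - b) ** 2 for w, a, b in zip(weights, x, y)) ** 0.5

actual = [100, 250, 80, 400, 150]
estimated = [110, 300, 70, 380, 200]
print(f"MMRE: {mmre(actual, estimated):.3f}")
print(f"PRED(0.25): {pred(actual, estimated):.2f}")
print("distance:", round(weighted_distance([10, 3], [12, 5], [0.7, 0.3]), 3))
```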
APA, Harvard, Vancouver, ISO, and other styles
17

Valdés, Francisco. "Design of a fuzzy logic software estimation process." Mémoire, École de technologie supérieure, 2011. http://espace.etsmtl.ca/983/1/VALD%C3%89S_Francisco.pdf.

Full text
Abstract:
This research describes the design of a fuzzy logic process for software project estimation. Studies show that most software projects exceed their budget or schedule, even though for years organizations have made efforts to increase the success rate of software projects by making the process easier to manage and, consequently, more predictable. Project estimation is an important issue, because it is the basis for quantifying, allocating and managing the resources needed by a project. When software project estimates are not carried out properly, organizations face a high level of risk in their projects, and this can lead to losses for the organization instead of the profits that justified starting the projects. The most important estimates must be made at the beginning of the development cycle (i.e. during the project conceptualization phase): at that point, information is available only at a very high level of abstraction, and it is often based on a number of unverifiable assumptions. The approach generally used to estimate projects in the software industry is based on the experience of the organization's employees, also known as the 'expert judgment' approach. Of course, a number of problems are associated with using expert judgment for estimation: for example, the assumptions are implicit and the experience is strongly tied to the experts rather than to the organization. The research goal of this thesis was to design a software project estimation process able to cope with the lack of detailed, quantitative information in the early phases of the software development life cycle. The strategy chosen for this research takes advantage of the experience-based approach, which can be used in the early phases of software project estimation, while addressing some of the major problems generated by this expert-judgment estimation method. Fuzzy logic was proposed as the research approach because it is a formal way to handle the uncertainty and the linguistic variables available in the early phases of a software development project: a fuzzy-logic-based system makes it possible to capture the organization's experience through its experts and their definitions of inference rules. The specific research objectives to be met by this improved estimation process are: A. The proposed estimation process must use techniques suited to managing uncertainty and ambiguity, as practitioners do when they use 'expert judgment' to estimate software projects: the proposed estimation process must use the variables used by practitioners. B. The proposed estimation process must be useful at an early stage of the software development process. C. The proposed estimation process must preserve the experience (or knowledge base) for the organization and include an easy mechanism for capturing the experts' experience. D. The proposed model must be usable by people with skills different from those of the 'experts' who define the original context of the proposed estimation model. E.
For estimation in the early-phase context, a fuzzy-logic-based estimation process was proposed: 'Estimation of Projects in a Context of Uncertainty' (EPCU). An important feature of this thesis is the use, for experimentation and verification, of information from industrial projects in Mexico. The experimentation phase comprises three scenarios: Scenario A. The proposed estimation process must use techniques suited to managing uncertainty and ambiguity so as to make it easier for stakeholders to produce their estimates; this process must take into account the variables that the stakeholders use. Scenario B. This scenario is similar to Scenario A, except that the projects are just starting, so the final duration and cost values are not available for comparison. Scenario C. To address the lack of information in Scenario B, Scenario C consists of a simulation experiment. These experiments led to the conclusion that, for the projects examined in the three scenarios, using the defined estimation process, EPCU, gives better results than the expert-opinion approach and that it can be used for early estimation of software projects with good results. To handle the amount of computation required by the EPCU estimation model and to record and manage the information generated by the EPCU model, a software tool was designed and developed as a research prototype to perform the necessary calculations.
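A minimal sketch of the general idea, linguistic inputs, expert rules, and a crisp effort output, is shown below. The membership functions, rules, and consequent values are invented for illustration (zero-order Sugeno-style inference); this is not the actual EPCU model.

```python
# Minimal fuzzy-logic estimation sketch: fuzzify two linguistic inputs, fire
# expert-defined rules, and combine crisp consequents by weighted average.
# All shapes, rules, and numbers are illustrative assumptions.

def falling(x, a, b):
    """Membership that is 1 below a, 0 above b, linear in between."""
    if x <= a:
        return 1.0
    if x >= b:
        return 0.0
    return (b - x) / (b - a)

def rising(x, a, b):
    return 1.0 - falling(x, a, b)

def estimate_effort(size_kloc, uncertainty):
    """Crisp effort (person-months) from two linguistic inputs."""
    small, large = falling(size_kloc, 10, 40), rising(size_kloc, 10, 40)
    low_unc, high_unc = falling(uncertainty, 0.2, 0.8), rising(uncertainty, 0.2, 0.8)
    # Expert rules: (firing strength, crisp consequent in person-months).
    rules = [
        (min(small, low_unc), 20.0),    # small project, low uncertainty
        (min(small, high_unc), 45.0),   # small project, high uncertainty
        (min(large, low_unc), 80.0),    # large project, low uncertainty
        (min(large, high_unc), 130.0),  # large project, high uncertainty
    ]
    total = sum(strength for strength, _ in rules)
    return sum(s * out for s, out in rules) / total if total else 0.0

print(f"estimated effort: {estimate_effort(25, 0.6):.1f} person-months")
```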
APA, Harvard, Vancouver, ISO, and other styles
18

Lermer, Toby. "A software size estimation tool: Hellerman's complexity measure." Thesis, Lethbridge, Alta. : University of Lethbridge, Faculty of Arts and Science, 1995. http://hdl.handle.net/10133/353.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Liu, Yang. "Aspects of linking CAD and cost estimation software." Thesis, Stellenbosch : Stellenbosch University, 2001. http://hdl.handle.net/10019.1/52136.

Full text
Abstract:
Thesis (MScEng)--Stellenbosch University, 2001.
ENGLISH ABSTRACT: This thesis describes a module that links AutoCAD and CeDeas (cost estimation software developed by the Department of Mechanical Engineering, University of Stellenbosch). CeDeas is intended for estimating the direct manufacturing cost of simple welded assemblies in a batch production environment. It is aimed at use during late concept design or early detail design. The link module was developed in Borland C++ Builder. By using COM (Component Object Model) technology, the link module employs the methods and the properties of the AutoCAD automation interface to extract the manufacturing information that is required by CeDeas. The link module prompts the user to pick objects in an AutoCAD drawing and then determines the values required by CeDeas to estimate the manufacturing cost. The user can choose between a "direct select method" (which uses the properties of geometric entities already in the drawing) and a "user define method" (whereby the user defines temporary entities or combines aspects of existing entities in the AutoCAD drawing). With these results and some non-geometric inputs, the user can get a cost estimate of components and assemblies. After design changes, the link module can provide CeDeas with updated values with minimal user interaction in situations where the "direct select method" was used. The designer can therefore easily use the cost estimates to compare design alternatives to optimise the design. Validation studies demonstrated the numerical accuracy of the link module. The link module can be regarded as an extension of CeDeas. At present it only supports AutoCAD R14, but it can be extended to support AutoCAD 2000 and Mechanical Desktop.
APA, Harvard, Vancouver, ISO, and other styles
20

Archibald, Colin J. "A software testing estimation and process control model." Thesis, Durham University, 1998. http://etheses.dur.ac.uk/4735/.

Full text
Abstract:
The control of the testing process and estimation of the resources required to perform testing are key to delivering a software product of target quality on budget. This thesis explores the use of testing to remove errors, the part that metrics and models play in this process, and considers an original method for improving the quality of a software product. The thesis investigates the possibility of using software metrics to estimate the testing resources required to deliver a product of target quality into deployment and also to determine, during the testing phases, the correct point in time to proceed to the next testing phase in the life-cycle. Along with the metrics Clear ratio, Churn, Error rate halving, Severity shift, and Faults per week, a new metric, 'Earliest Visibility' (EV), is defined and used to control the testing process. EV is constructed upon the link between the point at which an error is made within development and the point at which it is subsequently found during testing. To increase the effectiveness of testing and reduce costs whilst maintaining quality, the model operates by targeting each test phase at the errors linked to that phase and by enabling each test phase to build upon the previous phase. EV also provides a measure of testing effectiveness and fault introduction rate by development phase. The resource estimation model is based on a gradual refinement of an estimate, which is updated following each development phase as more reliable data become available. Used in conjunction with the process control model, which will ensure the correct testing phase is in operation, the estimation model will have accurate data for each testing phase as input. The proposed model and metrics have been developed and tested on a large-scale (4 million LOC) industrial telecommunications product written in C and C++ running within a Unix environment. It should be possible to extend this work to suit other environments and other development life-cycles.
APA, Harvard, Vancouver, ISO, and other styles
21

Li, Chihui. "Development of a software tool for reliability estimation." Morgantown, W. Va. : [West Virginia University Libraries], 2009. http://hdl.handle.net/10450/10451.

Full text
Abstract:
Thesis (M.S.)--West Virginia University, 2009.
Title from document title page. Document formatted into pages; contains xi, 138 p. : ill. (some col.). Includes abstract. Includes bibliographical references (p. 136-138).
APA, Harvard, Vancouver, ISO, and other styles
22

Harry, Cyril Massey. "A thermodynamics-system approach to software resource estimation." Thesis, Imperial College London, 1991. http://hdl.handle.net/10044/1/46807.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Crespo Marques, Elaine. "Sparse channels estimation applied in software defined radio." Electronic Thesis or Diss., Institut polytechnique de Paris, 2019. http://www.theses.fr/2019IPPAT004.

Full text
Abstract:
Communication channels are used to transmit information signals. However, these channels can cause several distortions of the transmitted signal, such as attenuation, multipath loss and Doppler shift, among others. For better message recovery, the receiver can estimate the channel and bring more reliability to communication systems. Several communication systems, for example high-definition television, mmWave systems, wideband HF and ultra-wideband, have sparse channels. This characteristic can be used to improve the performance of the estimator and reduce the size of the training sequence, thus decreasing the power consumption and bandwidth. This thesis addresses the channel estimation problem by investigating methods that exploit the sparsity of the channel. The study of compressive sensing and its sparse recovery algorithms led to the proposition of a new algorithm called Matching Pursuit based on Least Square (MPLS). The use of neural networks (NN) for sparse signal estimation was also explored. The work focused on NNs inspired by sparse recovery algorithms such as the Learned Iterative Shrinkage-Thresholding Algorithm (LISTA). This resulted in two approaches that improve LISTA performance, as well as a new neural network suited to estimating sparse signals.
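For context, LISTA-style networks unfold the classical ISTA iteration, which is compact enough to sketch directly. The random sparse-channel problem below is an illustrative assumption, not data or code from the thesis.

```python
# Sketch of the classical ISTA iteration that LISTA-style networks unfold:
# x <- soft_threshold(x - (1/L) * A^T (A x - y), lambda / L).
# The random sparse recovery problem below is illustrative only.
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, y, lam=0.1, iters=200):
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = soft_threshold(x - (A.T @ (A @ x - y)) / L, lam / L)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100)) / np.sqrt(40)    # measurement matrix
x_true = np.zeros(100)
x_true[rng.choice(100, size=5, replace=False)] = rng.standard_normal(5)  # sparse channel
y = A @ x_true + 0.01 * rng.standard_normal(40)

x_hat = ista(A, y)
print("nonzeros recovered:", int(np.sum(np.abs(x_hat) > 1e-3)))
print("relative error:", float(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)))
```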
APA, Harvard, Vancouver, ISO, and other styles
24

Mo, Lijia. "Examining the reliability of logistic regression estimation software." Diss., Kansas State University, 2010. http://hdl.handle.net/2097/7059.

Full text
Abstract:
Doctor of Philosophy
Department of Agricultural Economics
Allen M. Featherstone
Bryan W. Schurle
The reliability of nine software packages using the maximum likelihood estimator for the logistic regression model was examined using generated benchmark datasets and models. Software packages tested included: SAS (Procs Logistic, Catmod, Genmod, Surveylogistic, Glimmix, and Qlim), Limdep (Logit, Blogit), Stata (Logit, GLM, Binreg), Matlab, Shazam, R, Minitab, Eviews, and SPSS, for all available algorithms, none of which had been tested previously. This study expands on the existing literature in this area by examining Minitab 15 and SPSS 17. The findings indicate that Matlab, R, Eviews, Minitab, Limdep (BFGS), and SPSS provided consistently reliable results for both parameter and standard error estimates across the benchmark datasets. While some packages performed admirably, shortcomings did exist. The SAS maximum log-likelihood estimators do not always converge to the optimal solution and stop prematurely depending on starting values, issuing a "flat" error message. This drawback can be dealt with by rerunning the maximum log-likelihood estimator, using a closer starting point, to see if the convergence criteria are actually satisfied. Although Stata-Binreg provides reliable parameter estimates, there is as yet no way to obtain standard error estimates in Stata-Binreg. Limdep performs relatively well, but did not converge due to a weakness of the algorithm. The results show that solely trusting the default settings of statistical software packages may lead to non-optimal, biased or erroneous results, which may impact the quality of empirical results obtained by applied economists. Reliability tests indicate severe weaknesses in SAS Procs Glimmix and Genmod. Some software packages fail reliability tests under certain conditions. The findings indicate the need to use multiple software packages to solve econometric models.
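The convergence issues described above arise in the iterative maximum-likelihood fit itself. A bare-bones Newton-Raphson (IRLS) fit for logistic regression, with an explicit convergence check and configurable starting values, might look like the sketch below; the synthetic data and tolerance are assumptions, not the benchmark datasets used in the study.

```python
# Bare-bones Newton-Raphson (IRLS) fit for logistic regression, with an
# explicit convergence check and configurable starting values -- the parts
# whose defaults differ across the packages compared above. Synthetic data.
import numpy as np

def fit_logit(X, y, beta0=None, tol=1e-8, max_iter=50):
    X = np.column_stack([np.ones(len(y)), X])         # add intercept column
    beta = np.zeros(X.shape[1]) if beta0 is None else np.asarray(beta0, float)
    for it in range(max_iter):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        W = p * (1.0 - p)
        grad = X.T @ (y - p)                           # score vector
        hess = X.T @ (X * W[:, None])                  # observed information
        step = np.linalg.solve(hess, grad)
        beta = beta + step
        if np.max(np.abs(step)) < tol:                 # convergence criterion
            se = np.sqrt(np.diag(np.linalg.inv(hess)))
            return beta, se, it + 1
    raise RuntimeError("did not converge; try different starting values")

rng = np.random.default_rng(42)
X = rng.standard_normal((500, 2))
true_beta = np.array([-0.5, 1.2, -0.8])
p = 1.0 / (1.0 + np.exp(-(true_beta[0] + X @ true_beta[1:])))
y = rng.binomial(1, p)

beta, se, iters = fit_logit(X, y)
print("iterations:", iters)
print("coefficients:", np.round(beta, 3))
print("standard errors:", np.round(se, 3))
```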
APA, Harvard, Vancouver, ISO, and other styles
25

Azar, Danielle. "Using genetic algorithms to optimize software quality estimation models." Thesis, McGill University, 2004. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=84985.

Full text
Abstract:
Assessing software quality is fundamental in the software development field. Most software quality characteristics cannot be measured before a certain period of use of the software product. However, they can be predicted or estimated based on other measurable quality attributes. Software quality estimation models are built and used extensively for this purpose. Most such models are constructed using statistical or machine learning techniques. However, in this domain it is very hard to obtain data sets on which to train such models; often such data sets are proprietary, and the publicly available data sets are too small or not representative. Hence, the accuracy of the models often deteriorates significantly when they are used to classify new data.
This thesis explores the use of genetic algorithms for the problem of optimizing existing rule-based software quality estimation models. The main contributions of this work are two evolutionary approaches to this optimization problem. In the first approach, we assume the existence of several models, and we use a genetic algorithm to combine them, and adapt them to a given data set. The second approach optimizes a single model. The core concept of this thesis is to consider existing models that have been constructed on one data set and adapt them to new data. In real applications, this can be seen as adapting already existing software quality estimation models that have been constructed on data extracted from common domain knowledge to context-specific data. Our technique maintains the white-box nature of the models which can be used as guidelines in future software development processes.
APA, Harvard, Vancouver, ISO, and other styles
26

Andersson, Veronika, and Hanna Sjöstedt. "Improved effort estimation of software projects based on metrics." Thesis, Linköping University, Department of Electrical Engineering, 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-5269.

Full text
Abstract:

Saab Ericsson Space AB develops products for space for a predetermined price. Since the price is fixed, it is crucial to have a reliable prediction model to estimate the effort needed to develop the product. In general, software effort estimation is difficult, and at the software department this is a problem.

By analyzing metrics collected from former projects, different prediction models are developed to estimate the number of person hours a software project will require. Models for predicting the effort before a project begins are developed first. Only a few variables are known at this stage of a project. The models developed are compared to a current model used at the company. Linear regression models improve the estimation error by nine percentage points and nonlinear regression models improve the result even more. The model used today is also calibrated to improve its predictions. A principal component regression model is developed as well. In addition, a model to improve the estimate during an ongoing project is developed. This is a new approach, and comparison with the first estimate is the only evaluation.

The result is an improved prediction model. There are several models that perform better than the one used today. In the discussion, positive and negative aspects of the models are debated, leading to the choice of a model recommended for future use.
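A minimal version of the kind of metrics-based regression model described above, fitted to historical project data and judged by its relative estimation error, might look like this; the metrics and effort figures are invented for illustration and are not the company's data.

```python
# Minimal sketch of a regression-based effort predictor fitted on historical
# project metrics and evaluated by its mean relative estimation error.
# Metrics and effort figures are illustrative.
import numpy as np

# columns: requirements count, estimated size (kLOC); target: person-hours
X_hist = np.array([[12, 5.0], [30, 14.0], [18, 8.0], [45, 22.0], [25, 11.0], [38, 18.0]])
effort_hist = np.array([900, 2600, 1400, 4100, 2000, 3400])

# Fit effort ~ b0 + b1*reqs + b2*size by ordinary least squares.
A = np.column_stack([np.ones(len(effort_hist)), X_hist])
coef, *_ = np.linalg.lstsq(A, effort_hist, rcond=None)

def predict(reqs, size_kloc):
    return coef[0] + coef[1] * reqs + coef[2] * size_kloc

pred = A @ coef
rel_err = np.abs(pred - effort_hist) / effort_hist
print("coefficients:", np.round(coef, 2))
print(f"mean relative error on history: {rel_err.mean():.1%}")
print(f"prediction for a new project (20 reqs, 9 kLOC): {predict(20, 9):.0f} person-hours")
```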

APA, Harvard, Vancouver, ISO, and other styles
27

Schofield, Christopher. "An empirical investigation into software effort estimation by analogy." Thesis, Bournemouth University, 1998. http://eprints.bournemouth.ac.uk/411/.

Full text
Abstract:
Most practitioners recognise the important part accurate estimates of development effort play in the successful management of major software projects. However, it is widely recognised that current estimation techniques are often very inaccurate, while studies (Heemstra 1992; Lederer and Prasad 1993) have shown that effort estimation research is not being effectively transferred from the research domain into practical application. Traditionally, research has been almost exclusively focused on the advancement of algorithmic models (e.g. COCOMO (Boehm 1981) and SLIM (Putnam 1978)), where effort is commonly expressed as a function of system size. However, in recent years there has been a discernible movement away from algorithmic models with non-algorithmic systems (often encompassing machine learning facets) being actively researched. This is potentially a very exciting and important time in this field, with new approaches regularly being proposed. One such technique, estimation by analogy, is the focus of this thesis. The principle behind estimation by analogy is that past experience can often provide insights and solutions to present problems. Software projects are characterised in terms of collectable features (such as the number of screens or the size of the functional requirements) and stored in a historical case base as they are completed. Once a case base of sufficient size has been cultivated, new projects can be estimated by finding similar historical projects and re-using the recorded effort. To make estimation by analogy feasible it became necessary to construct a software tool, dubbed ANGEL, which allowed the collection of historical project data and the generation of estimates for new software projects. A substantial empirical validation of the approach was made encompassing approximately 250 real historical software projects across eight industrial data sets, using stepwise regression as a benchmark. Significance tests on the results accepted the hypothesis (at the 1% confidence level) that estimation by analogy is a superior prediction system to stepwise regression in terms of accuracy. A study was also made of the sensitivity of the analogy approach. By growing project data sets in a pseudo time-series fashion it was possible to answer pertinent questions about the approach, such as, what are the effects of outlying projects and what is the minimum data set size? The main conclusions of this work are that estimation by analogy is a viable estimation technique that would seem to offer some advantages over algorithmic approaches including, improved accuracy, easier use of categorical features and an ability to operate even where no statistical relationships can be found.
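The analogy procedure described above (characterise projects by features, retrieve the most similar completed projects, and reuse their recorded effort) can be sketched in a few lines. The feature set, min-max normalisation, and k=2 averaging below are assumptions for illustration, not the ANGEL tool's actual implementation.

```python
# Sketch of estimation by analogy: normalise features, find the k most similar
# historical projects by Euclidean distance, and reuse their mean effort.
# Feature set and data are illustrative; this is not the ANGEL tool itself.

history = [  # (screens, functional size, team size) -> effort in person-months
    ((12, 110, 4), 26), ((30, 300, 8), 75), ((8, 70, 3), 15),
    ((20, 180, 5), 42), ((25, 240, 6), 60), ((15, 130, 4), 31),
]

def normalise(rows):
    """Return a min-max normaliser built from the given feature vectors."""
    cols = list(zip(*rows))
    mins, maxs = [min(c) for c in cols], [max(c) for c in cols]
    return lambda p: tuple((v - lo) / (hi - lo) if hi > lo else 0.0
                           for v, lo, hi in zip(p, mins, maxs))

def estimate_by_analogy(history, target, k=2):
    norm = normalise([f for f, _ in history] + [target])
    t = norm(target)
    def dist(entry):
        f = norm(entry[0])
        return sum((a - b) ** 2 for a, b in zip(f, t)) ** 0.5
    nearest = sorted(history, key=dist)[:k]
    return sum(effort for _, effort in nearest) / k

print(f"analogy estimate: {estimate_by_analogy(history, (18, 160, 5)):.1f} person-months")
```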
APA, Harvard, Vancouver, ISO, and other styles
28

Wang, Weikun. "Data-driven software performance engineering : models and estimation algorithms." Thesis, Imperial College London, 2016. http://hdl.handle.net/10044/1/61828.

Full text
Abstract:
The accurate performance measurement of computer applications is critical for service providers. For these providers, to ensure that the performance constraints signed with users can be respected, software performance models are used to provide quantitative predictions and evaluations of the applications, so that timely adjustment of the architecture and fine tuning of the configurations can be achieved. For the effective use of performance models in performance engineering, the fundamental problem is to assign realistic parameters to the models. In addition, the complexity of real-world applications leads to dynamic behavior, caused by parallel computations with multiple CPUs or by caching and shared data structures, which is challenging for performance models to capture. Characterizing these changing effects, also known as load-dependent or queue-dependent application behaviors, is necessary for accurate prediction and evaluation of application performance. To enhance model tractability and applicability, in this thesis we develop efficient algorithms to estimate the parameters of closed queueing networks, especially the resource demand of requests. To efficiently estimate these parameters, we introduce two classes of algorithms based on Markov Chain Monte Carlo (MCMC) algorithms and Maximum Likelihood Estimation (MLE) techniques. In particular, the MLE-based approach can be generalized to load-dependent queueing networks, enhancing the applicability of the models. Moreover, we set out to resolve the problem of efficiently evaluating load-dependent and queue-dependent closed queueing network models. The complication of load-dependent or queue-dependent behavior makes the models challenging to analyze, a fact that discourages practitioners from characterizing workload dependencies. To solve this problem, we develop an algorithm for evaluating the performance of queue-dependent product-form closed queueing networks. Our approach is based on approximate mean value analysis (AMVA) and is shown to be efficient, robust and easy to apply, thus enhancing the tractability of queue-dependent models.
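For reference, the exact mean value analysis (MVA) recursion that approximate and queue-dependent MVA variants build on is compact. The sketch below assumes a load-independent product-form network; the service demands and think time are illustrative.

```python
# Exact mean value analysis (MVA) for a closed, load-independent product-form
# queueing network -- the base case that approximate/queue-dependent MVA
# variants extend. Service demands and think time are illustrative.

def exact_mva(demands, n_users, think_time=0.0):
    """demands: per-station service demand (seconds per request)."""
    queue = [0.0] * len(demands)
    throughput = 0.0
    resid = list(demands)
    for n in range(1, n_users + 1):
        # Residence time at each station given the queues left by n-1 customers.
        resid = [d * (1.0 + q) for d, q in zip(demands, queue)]
        throughput = n / (think_time + sum(resid))
        queue = [throughput * r for r in resid]
    return throughput, resid, queue

demands = [0.010, 0.025, 0.004]          # e.g. CPU, disk, network (s/request)
X, R, Q = exact_mva(demands, n_users=50, think_time=1.0)
print(f"throughput: {X:.1f} req/s")
print("per-station residence times:", [round(r, 4) for r in R])
print("mean queue lengths:", [round(q, 2) for q in Q])
```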
APA, Harvard, Vancouver, ISO, and other styles
29

Moyer, Daniel Raymond. "Software development resource estimation in the 4th generation environment." Thesis, Kansas State University, 1986. http://hdl.handle.net/2097/9956.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Agha, Shahrukh. "Software and hardware techniques for accelerating MPEG2 motion estimation." Thesis, Loughborough University, 2006. https://dspace.lboro.ac.uk/2134/33935.

Full text
Abstract:
The aim of this thesis is to accelerate the process of motion estimation (ME) for the implementation of real-time, portable video encoding. To this end a number of different techniques have been considered and these have been investigated in detail. Data Level Parallelism (DLP) is exploited first, through the use of vector instruction extensions using configurable/re-configurable processors to form a fast System-On-Chip (SoC) video encoder capable of embedding both full search and fast ME methods. Further parallelism is then exploited in the form of Thread Level Parallelism (TLP), introduced into the ME process through the use of multiple processors incorporated onto a single SoC. A theoretical explanation of the results obtained with these methodologies is then developed for algorithmic optimisations. This is followed by the investigation of an efficient, orthogonal technique based on the use of a reduced number of bits (RBSAD) for the purposes of image comparison. This technique, which provides savings of both power and time, is investigated along with a number of criteria for its improvement to full resolution. Finally a VLSI layout of a low-power ME engine, capable of using this technique, is presented. The combination of DLP, TLP and RBSAD is found to reduce the clock frequency requirement by around an order of magnitude.
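The reduced-bit SAD idea can be illustrated directly: compare blocks using only the upper bits of each pixel, which is the essence of the RBSAD comparison discussed above. The block contents and the 4-bit truncation below are illustrative assumptions, not the thesis's hardware implementation.

```python
# Sketch of a reduced-bit SAD (RBSAD): compute the sum of absolute differences
# using only the upper bits of each 8-bit pixel, trading a little accuracy for
# cheaper arithmetic. Blocks and the 4-bit reduction are illustrative.
import random

def sad(block_a, block_b):
    return sum(abs(a - b) for a, b in zip(block_a, block_b))

def rbsad(block_a, block_b, kept_bits=4):
    shift = 8 - kept_bits                      # keep only the top `kept_bits`
    return sum(abs((a >> shift) - (b >> shift)) for a, b in zip(block_a, block_b))

random.seed(0)
current = [random.randrange(256) for _ in range(16 * 16)]          # 16x16 block
candidate = [min(255, max(0, p + random.randint(-12, 12))) for p in current]

print("full-resolution SAD:", sad(current, candidate))
print("reduced-bit SAD (4 bits):", rbsad(current, candidate))
```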
APA, Harvard, Vancouver, ISO, and other styles
31

Subramanian, Sriram. "Software Performance Estimation Techniques in a Co-Design Environment." University of Cincinnati / OhioLINK, 2003. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1061553201.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Shu, Gang. "Statistical Estimation of Software Reliability and Failure-causing Effect." Case Western Reserve University School of Graduate Studies / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=case1405509796.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Kucukcoban, Sezgin. "Development Of A Software For Seismic Damage Estimation: Case Studies." Master's thesis, METU, 2004. http://etd.lib.metu.edu.tr/upload/12605087/index.pdf.

Full text
Abstract:
The occurrence of two recent major earthquakes in Turkey, the 17 August 1999 Mw = 7.4 Izmit and the 12 November 1999 Mw = 7.1 Düzce events, prompted seismologists and geologists to conduct studies to predict the magnitude and location of a potential earthquake that could cause substantial damage in Istanbul. Many scenarios are available about the extent and size of the earthquake. Moreover, studies have recommended rough estimates of risk areas throughout the city to trigger the responsible authorities to take precautions to reduce the casualties and losses from the expected earthquake. Most of these studies, however, adopt available procedures by modifying them for the building stock peculiar to Turkey. The assumptions and modifications made are too crude and are thus believed to introduce significant deviations from the actual case. To minimize these errors and to use specific damage functions and capacity curves that reflect the practice in Turkey, a study was undertaken to predict the damage pattern and distribution in Istanbul for a scenario earthquake proposed by the Japan International Cooperation Agency (JICA). The success of these studies strongly depends on the quality and validity of the building inventory and site property data. Building damage functions and capacity curves developed from studies conducted at Middle East Technical University are used. A number of proper attenuation relations are employed. The study focuses mainly on developing software to carry out all computations and present the results. The results of this study reveal a more reliable picture of the physical seismic damage distribution expected in Istanbul.
APA, Harvard, Vancouver, ISO, and other styles
34

Hjalmarsson, Alexander. "Software Development Cost Estimation Using COCOMO II Based Meta Model." Thesis, KTH, Industriella informations- och styrsystem, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-123367.

Full text
Abstract:
Large amounts of software are running on what are considered to be legacy platforms. These systems are often business critical and cannot be phased out without a proper replacement. The generations of developers that have developed, maintained and supported these systems are leaving the workforce, leaving an estimated shortfall of developers in the near term. Migration of these legacy applications can be troublesome due to poor documentation, and estimating the size of such projects is nontrivial. Expert estimates are the most common method of estimation when it comes to software projects, but the method relies heavily on the experience, knowledge and intuition of the estimator. The use of a complementary estimation method can increase the accuracy of the estimation. This thesis constructs a meta model that combines enterprise architecture concepts with the COCOMO II estimation model in order to utilize the benefits of architectural overviews and tested models, with the purpose of supporting the migration process. The study proposes a method combining expert cost estimation with model-based estimation, which increases the estimation accuracy. The combination method applied to the four project samples resulted in a mean magnitude of relative error of 10%.
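The COCOMO II form wrapped by such a meta model is Effort = A · Size^E · ΠEM_i with E = B + 0.01 · ΣSF_j. The sketch below uses the commonly published calibration constants (A = 2.94, B = 0.91) with illustrative scale-factor and effort-multiplier ratings, not the thesis's calibration or project data.

```python
# COCOMO II-style effort sketch: Effort(PM) = A * Size^E * product(effort
# multipliers), with E = B + 0.01 * sum(scale factors). A = 2.94 and B = 0.91
# are the commonly published calibration constants; the scale factors and
# effort multipliers below are illustrative ratings, not calibrated values.

A, B = 2.94, 0.91

def cocomo_ii_effort(size_ksloc, scale_factors, effort_multipliers):
    exponent = B + 0.01 * sum(scale_factors)
    product = 1.0
    for em in effort_multipliers:
        product *= em
    return A * size_ksloc ** exponent * product

scale_factors = [3.72, 3.04, 4.24, 2.19, 4.68]      # e.g. PREC, FLEX, RESL, TEAM, PMAT
effort_multipliers = [1.10, 0.95, 1.00, 1.17]       # e.g. RELY, PCAP, TOOL, CPLX

pm = cocomo_ii_effort(size_ksloc=50, scale_factors=scale_factors,
                      effort_multipliers=effort_multipliers)
print(f"estimated effort: {pm:.1f} person-months")
```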
APA, Harvard, Vancouver, ISO, and other styles
35

Kelly, Michael A. "A methodology for software cost estimation using machine learning techniques." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from the National Technical Information Service, 1993. http://handle.dtic.mil/100.2/ADA273158.

Full text
Abstract:
Thesis (M.S. in Information Technology Management)--Naval Postgraduate School, September 1993.
Thesis advisor(s): Ramesh, B. ; Abdel-Hamid, Tarek K. "September 1993." Bibliography: p. 135. Also available online.
APA, Harvard, Vancouver, ISO, and other styles
36

Hughes, Robert T. "An empirical investigation into the estimation of software development effort." Thesis, University of Brighton, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.362219.

Full text
Abstract:
Any guidance that might help to reduce the problems of accurately estimating software development effort could assist software producers to set more realistic budgets for software projects. This investigation attempted to contribute to this by documenting some of the practical problems with introducing structured effort estimation models at a United Kingdom site of an international supplier of telephone switching software. The theory of effort modelling was compared with actual practice by examining how the estimating experts at the telephone switching software producer currently carried out estimating. Two elements of the estimation problem emerged: judging the size of the job to be done and gauging the productivity of the development environment. Expert opinion was especially important early in the process, particularly when existing software was being enhanced. The study then identified development effort drivers and customised effort models applicable to real-time telecommunications applications. Many practical difficulties were found concerning the methods used to record past project data, although the issues surrounding these protocols appeared to be rarely dealt with explicitly in the research literature. The effectiveness of the models was trialled by forecasting the effort for some new projects and then comparing these estimates with the actual effort. The key research outcomes were, firstly, the identification and validation of a set of relevant functional effort drivers applicable in a real-time telecommunications software development environment and the building of an effective effort model, and, secondly, the evaluation of alternative prediction approaches including analogy or case-based reasoning. While analogy was a useful tool, some methods of implementing analogy were theoretically flawed and did not consistently outperform 'traditional' model building techniques such as Least Squares Regression (LSR) in the environment under study. This study would, however, support analogy as a complementary technique to algorithmic modelling.
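As an illustration of the analogy (case-based reasoning) approach the study evaluates, the following Python sketch normalises project features, finds the k most similar completed projects by Euclidean distance and averages their effort. The feature columns and all figures are hypothetical; real analogy tools also differ in feature weighting and adaptation rules.

    import numpy as np

    # Estimation by analogy: normalise project features, find the k most similar
    # completed projects (Euclidean distance) and average their effort.
    past_features = np.array([[120, 3, 0.8],   # e.g. function points, team size, reuse ratio
                              [300, 8, 0.2],
                              [90,  2, 0.9],
                              [210, 5, 0.5]], dtype=float)
    past_effort = np.array([14.0, 52.0, 9.0, 30.0])   # person-months

    def estimate_by_analogy(new_project, features, effort, k=2):
        lo, hi = features.min(axis=0), features.max(axis=0)
        norm = (features - lo) / (hi - lo)                  # min-max normalisation
        target = (np.asarray(new_project, dtype=float) - lo) / (hi - lo)
        nearest = np.argsort(np.linalg.norm(norm - target, axis=1))[:k]
        return effort[nearest].mean()

    print(estimate_by_analogy([150, 4, 0.6], past_features, past_effort))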
APA, Harvard, Vancouver, ISO, and other styles
37

Hammoud, Wissam. "Attributes effecting software testing estimation; is organizational trust an issue?" Thesis, University of Phoenix, 2014. http://pqdtopen.proquest.com/#viewpdf?dispub=3583478.

Full text
Abstract:

This quantitative correlational research explored the potential association between levels of organizational trust and software testing estimation. It did so by exploring the relationships between organizational trust, testers' expertise, the organizational technology used, and the number of hours, number of testers, and time-coding estimated by the software testers. The research, conducted in the software testing department of a health insurance organization, employed the Organizational Trust Inventory - Short Form (OTI-SF) developed by Philip Bromiley and Larry Cummings and revealed a strong relationship between organizational trust and software testing estimation. The research reviews historical theories of organizational trust and includes a detailed discussion of software testing practices and software testing estimation. Given the significant impact of organizational trust on project estimating and time-coding found here, software testing leaders can use this research to improve the project planning and management process by improving the levels of trust within their organizations.
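A minimal sketch of the kind of correlational analysis described, pairing each tester's organisational-trust score (for example from the OTI-SF) with the hours they estimated; the values are invented and the thesis's actual statistical procedure may differ.

    from scipy.stats import pearsonr

    # Pair each tester's trust score with the hours they estimated for a task
    # and test the strength of the linear association.  Values are invented.
    trust_scores = [3.1, 4.2, 2.5, 3.8, 4.5, 2.9, 3.6]
    estimated_hours = [14, 9, 18, 10, 8, 16, 11]

    r, p_value = pearsonr(trust_scores, estimated_hours)
    print(r, p_value)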

APA, Harvard, Vancouver, ISO, and other styles
38

SPINDEL, BERNARDO DA ROCHA. "ANALYSIS AND DEVELOPMENT OF A STAR-TREE MODEL ESTIMATION SOFTWARE." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2008. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=14099@1.

Full text
Abstract:
PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO
CONSELHO NACIONAL DE DESENVOLVIMENTO CIENTÍFICO E TECNOLÓGICO
In time series analysis, the widely used linear models, such as linear regressions and autoregressive models, cannot capture the often nonlinear nature of the data and so give poor estimation results; financial series, for instance, show this kind of behavior. Over recent years, a great number of nonlinear models have been developed for time series analysis, some statistical and others based on computational intelligence techniques such as neural networks. The purpose of this dissertation is to analyze the performance of the STAR-Tree model under distinct scenarios that differ in model specification, parameterization and estimation methodology. This class of models splits the time series data into distinct regions that satisfy criteria specified by functions called pertinence (membership) functions, and a linear autoregressive model is fitted to each region. Each estimated data point can belong to any of the regions with some degree of pertinence given by those functions. Aspects such as the proximity between regions, the smoothness of the pertinence functions and a lack of diversity in the data can make model estimation difficult. To evaluate the quality of the estimates under the different scenarios, a software system was built that can generate artificial time series, import external series, estimate them under the STAR-Tree model, and run Monte Carlo simulations that assess the quality of parameter estimation and the model's ability to detect tree structures. The software was used as the sole tool for the analyses in this dissertation and made it easy to test different configurations of methods and parameterizations.
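To make the regime-splitting idea concrete, the following Python sketch simulates a two-regime smooth-transition AR(1) process, i.e. a STAR tree with a single split node, where a logistic pertinence function blends two local AR(1) models. The parameter values are illustrative only and are not estimates produced by the dissertation's software.

    import numpy as np

    # Two-regime smooth-transition AR(1): a STAR tree with a single split node.
    # A logistic pertinence function G blends two local AR(1) models according
    # to the previous observation.
    rng = np.random.default_rng(0)
    n, gamma, c = 500, 5.0, 0.0          # gamma: transition smoothness, c: split location
    phi_left, phi_right = 0.3, -0.6      # AR coefficients of the two regions

    y = np.zeros(n)
    for t in range(1, n):
        G = 1.0 / (1.0 + np.exp(-gamma * (y[t - 1] - c)))   # pertinence of the "right" regime
        y[t] = ((1 - G) * phi_left + G * phi_right) * y[t - 1] + rng.normal(scale=0.5)

    print(y[:5])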
APA, Harvard, Vancouver, ISO, and other styles
39

Clippingdale, Simon. "Multiresolution image modelling and estimation." Thesis, University of Warwick, 1988. http://wrap.warwick.ac.uk/109834/.

Full text
Abstract:
Multiresolution representations make explicit the notion of scale in images, and facilitate the combination of information from different scales. To date, however, image modelling and estimation schemes have not exploited such representations and tend rather to be derived from two-dimensional extensions of traditional one-dimensional signal processing techniques. In the causal case, autoregressive (AR) and ARMA models lead to minimum mean square error (MMSE) estimators which are two-dimensional variants of the well-established Kalman filter. Noncausal approaches tend to be transform-based and the MMSE estimator is the two-dimensional Wiener filter. However, images contain profound nonstationarities such as edges, which are beyond the descriptive capacity of such signal models, and defects such as blurring (and streaking in the causal case) are apparent in the results obtained by the associated estimators. This thesis introduces a new multiresolution image model, defined on the quadtree data structure. The model is a one-dimensional, first-order gaussian martingale process causal in the scale dimension. The generated image, however, is noncausal and exhibits correlations at all scales unlike those generated by traditional models. The model is capable of nonstationary behaviour in all three dimensions (two position and one scale) and behaves isomorphically but independently at each scale, in keeping with the notion of scale invariance in natural images. The optimal (MMSE) estimator is derived for the case of corruption by additive white gaussian noise (AWGN). The estimator is a one-dimensional, first-order linear recursive filter with a computational burden far lower than that of traditional estimators. However, the simple quadtree data structure leads to aliasing and 'block' artifacts in the estimated images. This could be overcome by spatial filtering, but a faster method is introduced which requires no additional multiplications but involves the insertion of some extra nodes into the quadtree. Nonstationarity is introduced by a fast, scale-invariant activity detector defined on the quadtree. Activity at all scales is combined in order to achieve noise rejection. The estimator is modified at each scale and position by the detector output such that less smoothing is applied near edges and more in smooth regions. Results demonstrate performance superior to that of existing methods, and at drastically lower computational cost. The estimation scheme is further extended to include anisotropic processing, which has produced good results in image restoration. An orientation estimator controls anisotropic filtering, the output of which is made available to the image estimator.
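The scale-recursive estimator the abstract describes can be illustrated, in heavily simplified scalar form, by a first-order Kalman-type MMSE update propagated from coarse to fine scale. The Python sketch below assumes a first-order Gauss-Markov state observed in AWGN; the parameter values are illustrative rather than taken from the thesis, which works on the full quadtree.

    # Scalar first-order recursive MMSE (Kalman-type) update, propagated from
    # coarse to fine scale: x_s = a * x_{s-1} + w_s,  y_s = x_s + v_s (AWGN).
    # Values of a, q and r are illustrative.
    def recursive_mmse(observations, a=0.9, q=0.1, r=0.5):
        x_hat, p = 0.0, 1.0                              # prior mean and variance at the root
        estimates = []
        for y in observations:
            x_pred, p_pred = a * x_hat, a * a * p + q    # predict to the next scale
            k = p_pred / (p_pred + r)                    # gain
            x_hat = x_pred + k * (y - x_pred)            # MMSE update with the noisy observation
            p = (1.0 - k) * p_pred
            estimates.append(x_hat)
        return estimates

    print(recursive_mmse([0.2, 0.5, 0.9, 1.1, 1.0]))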
APA, Harvard, Vancouver, ISO, and other styles
40

Sapre, Alhad Vinayak. "Feasibility of Automated Estimation of Software Development Effort in Agile Environments." The Ohio State University, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=osu1345479584.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Woodings, Terence Leslie. "Variation in project parameters as a measure of improvement in software process control." University of Western Australia. School of Electrical, Electronic and Computer Engineering, 2006. http://theses.library.uwa.edu.au/adt-WU2006.0084.

Full text
Abstract:
[Truncated abstract] The primary tool for software process control is the project plan, with divergence from the schedule usually being the first indication that there are difficulties. Thus the estimation of the schedule, particularly the effort parameter, is a central element of software engineering management. Regrettably, estimation methods are poorly used within the software industry and accuracy is lacking when compared with other engineering disciplines. There are many reasons for this. However, the need to predict project effort remains, particularly in situations of tendering for contracts. The broad objective of this research is the improvement of project control by means of better estimation. . . The error in the prediction of a project parameter is investigated as the result of the variation in two distinct (estimation and actual development) processes. Improvement depends upon the understanding, control and then reduction of that variation. A strategy for the systematic identification of the sources of greatest variation is developed - so that it may be reduced by appropriate software engineering practices. The key to the success of the approach is the statistical partitioning of the Mean Square Error (of the estimate) in order to identify the weakest area of project control. The concept is proven with a set of student projects, where the estimation error is significantly reduced. The conditions for its transfer to industry are discussed and a systematic reduction in error is demonstrated on five sets of commercial project data. The thesis concludes with a discussion of the linking of the approach to current estimation methods. It should also have implications for the statistical process control of other projects involving small sample sizes and multiple correlated parameters.
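One standard way to partition estimation error, shown here only as a hedged illustration since the thesis's own partitioning is more elaborate and spans correlated project parameters, is the split of the mean square error of the estimates into a systematic part (squared bias) and a random part (variance):

    import numpy as np

    # Partition of the mean square error of effort estimates into a systematic
    # part (squared bias) and a random part (variance of the errors).
    # The project figures are hypothetical.
    actual = np.array([120.0, 300.0, 80.0, 55.0, 150.0])
    estimated = np.array([100.0, 260.0, 85.0, 50.0, 120.0])

    errors = estimated - actual
    mse = np.mean(errors ** 2)
    bias_sq = np.mean(errors) ** 2
    variance = np.var(errors)            # population variance, so bias_sq + variance == mse

    print(mse, bias_sq + variance)       # both print the same value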
APA, Harvard, Vancouver, ISO, and other styles
42

Kanneganti, Alekhya. "Using Ensemble Machine Learning Methods in Estimating Software Development Effort." Thesis, Blekinge Tekniska Högskola, Institutionen för datavetenskap, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-20691.

Full text
Abstract:
Background: Software Development Effort Estimation is a process that focuses on estimating the effort required to develop a software project within a minimal budget. Estimating effort includes interpretation of the required manpower, resources, time and schedule. Project managers are responsible for estimating the required effort, so a model that can predict software development effort efficiently comes in handy and acts as a decision support system for project managers, enhancing the precision of effort estimates. The context of this study is therefore to increase the efficiency of software development effort estimation. Objective: The main objective of this thesis is to identify an effective ensemble method and to build and implement it for estimating software development effort. In addition, parameter tuning is implemented to improve the performance of the model. Finally, we compare the results of the developed model with those of existing models. Method: Two research methods were adopted. Initially, a literature review was conducted to gain knowledge of the existing studies, machine learning techniques, datasets and ensemble methods previously used in estimating software development effort. Then a controlled experiment was conducted in order to build an ensemble model and to evaluate whether the developed model performs better than the existing models. Results: After conducting the literature review and collecting evidence, we decided to build and implement the stacked generalization ensemble method, with the help of individual machine learning techniques: Support Vector Regressor (SVR), K-Nearest Neighbors Regressor (KNN), Decision Tree Regressor (DTR), Linear Regressor (LR), Multi-Layer Perceptron Regressor (MLP), Random Forest Regressor (RFR), Gradient Boosting Regressor (GBR), AdaBoost Regressor (ABR) and XGBoost Regressor (XGB). Randomized parameter optimization was implemented for tuning, and the SelectKBest function for feature selection. The COCOMO81, MAXWELL, ALBRECHT and DESHARNAIS datasets were used. The results of the experiment show that the developed ensemble model performs best for three out of the four datasets. Conclusion: After evaluating and analyzing the results obtained, we can conclude that the developed model works well with datasets that have continuous, numeric values. We can also conclude that the developed ensemble model outperforms other existing models when applied to the COCOMO81, MAXWELL and ALBRECHT datasets.
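A minimal scikit-learn sketch of the stacked generalization approach described, using a subset of the base learners named in the abstract and a gradient boosting meta-learner chosen here for illustration; the synthetic data stands in for datasets such as COCOMO81 or DESHARNAIS and is not real project data.

    import numpy as np
    from sklearn.ensemble import (StackingRegressor, RandomForestRegressor,
                                  GradientBoostingRegressor)
    from sklearn.neighbors import KNeighborsRegressor
    from sklearn.svm import SVR
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import mean_absolute_error

    # Stacked generalisation with a subset of the base learners named in the
    # abstract; the synthetic data stands in for a real effort dataset.
    rng = np.random.default_rng(1)
    X = rng.uniform(1, 100, size=(60, 4))                              # hypothetical cost drivers
    y = 3.0 * X[:, 0] + 0.5 * X[:, 1] ** 1.1 + rng.normal(0, 5, 60)    # synthetic effort

    stack = StackingRegressor(
        estimators=[("svr", SVR()),
                    ("knn", KNeighborsRegressor(n_neighbors=3)),
                    ("rf", RandomForestRegressor(n_estimators=100, random_state=1))],
        final_estimator=GradientBoostingRegressor(random_state=1))

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=1)
    stack.fit(X_tr, y_tr)
    print(mean_absolute_error(y_te, stack.predict(X_te)))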
APA, Harvard, Vancouver, ISO, and other styles
43

Rahhal, Silas. "An effort estimation model for implementing ISO 9001 in software organizations." Thesis, McGill University, 1995. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=23290.

Full text
Abstract:
The adoption of software development principles and methodologies embodying best practices and standards is essential to achieving quality software products. Many organizations world-wide have implemented quality management systems that comply with the requirements of the ISO 9001 standard, or similar schemes, to ensure product quality. Meeting the requirements of ISO 9000 can be costly in time, effort and money; the effort depends primarily on the size of the organization and the status of its quality management system. The focus of this thesis is on the development of an effort estimation model for the implementation of ISO 9001 in software organizations. To determine this effort, a survey of 1190 registered organizations was conducted in 1995, of which 63 were software organizations. A statistical regression model for predicting the effort was developed and validated. The proposed effort estimation model forms a foundation for building and comparing related effort estimation models for ISO 9000 and other process improvement frameworks.
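A toy sketch of the kind of regression model described, relating ISO 9001 implementation effort to organisation size and quality-system status; the survey figures below are invented for illustration and the thesis's fitted model will differ.

    import numpy as np
    from sklearn.linear_model import LinearRegression

    # Regression of ISO 9001 implementation effort (person-months) on organisation
    # size (staff) and whether a quality management system is already partly in
    # place.  The figures are invented.
    size = np.array([20, 50, 120, 300, 80, 40, 500, 150])
    qms_in_place = np.array([0, 1, 0, 1, 1, 0, 0, 1])
    effort = np.array([6.0, 8.0, 18.0, 30.0, 10.0, 9.0, 55.0, 14.0])

    X = np.column_stack([size, qms_in_place])
    model = LinearRegression().fit(X, effort)
    print(model.intercept_, model.coef_)
    print(model.predict([[100, 0]]))     # predicted effort for a new organisation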
APA, Harvard, Vancouver, ISO, and other styles
44

Wang, Qi. "Study of Channel Estimation in MIMO-OFDM for Software Defined Radio." Thesis, Linköping University, Department of Electrical Engineering, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-10128.

Full text
Abstract:

The aim of this thesis is to identify the most suitable channel estimation algorithms for the existing MIMO-OFDM SDR platform. Starting with an analysis of several prevalent channel estimation algorithms, their MSE performance is compared under different scenarios. As a result of the hardware-independent analysis, the complex-valued matrix computations involved in the algorithms are decomposed into real floating-point operations (FLOPs). Four feasible algorithms are then selected for hardware-dependent discussion based on the proposed hardware architecture, and the computational latency is examined through a case study.
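As a concrete example of the simplest estimator such a comparison typically includes, the following Python sketch computes the per-subcarrier least-squares channel estimate from known pilots, H_ls = Y / X, and its MSE against the true channel. Dimensions and noise level are illustrative; the thesis's actual algorithms and platform details are not reproduced here.

    import numpy as np

    # Per-subcarrier least-squares channel estimate from known pilots for one
    # OFDM symbol: H_ls = Y / X.
    rng = np.random.default_rng(2)
    n_pilots = 64
    X = np.exp(1j * (np.pi / 2) * rng.integers(0, 4, n_pilots))                    # QPSK pilots
    H = (rng.normal(size=n_pilots) + 1j * rng.normal(size=n_pilots)) / np.sqrt(2)  # true channel
    noise = 0.1 * (rng.normal(size=n_pilots) + 1j * rng.normal(size=n_pilots))
    Y = H * X + noise                                                              # received pilots

    H_ls = Y / X                                     # least-squares estimate
    print(np.mean(np.abs(H_ls - H) ** 2))            # MSE against the true channel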

APA, Harvard, Vancouver, ISO, and other styles
45

Cannon, Christopher J. "Cost estimation of post production software support in ground combat systems." Thesis, Monterey, Calif. : Naval Postgraduate School, 2007. http://bosun.nps.edu/uhtbin/hyperion-image.exe/07Sep%5FCannon.pdf.

Full text
Abstract:
Thesis (M.S. in Operations Research)--Naval Postgraduate School, September 2007.
Thesis Advisor(s): Nussbaum, Daniel ; Mislick, Gregory K. "September 2007." Description based on title screen as viewed on October 19, 2007. Includes bibliographical references (p. 67-70). Also available in print.
APA, Harvard, Vancouver, ISO, and other styles
46

Britto, Ricardo. "Knowledge Classification for Supporting Effort Estimation in Global Software Engineering Projects." Licentiate thesis, Blekinge Tekniska Högskola, Institutionen för programvaruteknik, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-10520.

Full text
Abstract:
Background: Global Software Engineering (GSE) has become a widely applied operational model for the development of software systems; it can increase profits and decrease time-to-market. However, there are many challenges associated with developing software in a globally distributed fashion, and there is evidence that these challenges affect many processes related to software development, such as effort estimation. To the best of our knowledge, there are no empirical studies gathering evidence on effort estimation in the GSE context, and there is no common terminology for classifying GSE scenarios with a focus on effort estimation. Objective: The main objective of this thesis is to support effort estimation in the GSE context by providing a taxonomy to classify the existing knowledge in this field. Method: A systematic literature review (to identify and analyze the state of the art), a survey (to identify and analyze the state of the practice), a systematic mapping study (to identify practices for designing software engineering taxonomies), and a literature survey (to complement the states of the art and practice) were the methods employed in this thesis. Results: The results on the states of the art and practice show that the effort estimation techniques employed in the GSE context are the same techniques used in the collocated context. It was also identified that global aspects, e.g. temporal, geographical and socio-cultural distances, are accounted for as cost drivers, although it is not clear how they are measured. As a result of the mapping study, we reported a method that can be used to design new SE taxonomies. The aforementioned results were combined to extend and specialize an existing GSE taxonomy so that it is suitable for effort estimation. The use of the specialized GSE effort estimation taxonomy was illustrated by classifying 8 finished GSE projects. The results show that the specialized taxonomy proposed in this thesis is comprehensive enough to classify GSE projects with a focus on effort estimation. Conclusions: The taxonomy presented in this thesis will help researchers and practitioners to report new research on effort estimation in the GSE context; researchers and practitioners will be able to gather evidence, compare new studies and find new gaps more easily. The findings from this thesis show that more research must be conducted on effort estimation in the GSE context. For example, the way the cost drivers are measured should be further investigated. It is also necessary to conduct further research to clarify the role and impact of sourcing strategies on the accuracy of effort estimates. Finally, we believe that it is possible to design an instrument, based on the specialized GSE effort estimation taxonomy, that helps practitioners perform the effort estimation process in a way tailored to the specific needs of the GSE context.
APA, Harvard, Vancouver, ISO, and other styles
47

Hyberg, Martin. "Software Issue Time Estimation With Natural Language Processing and Machine Learning." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-295202.

Full text
Abstract:
Time estimation for software issues is crucial to planning projects. Developers and experts have for many decades tried to estimate time requirements for issues as accurately as possible, and the methods used today are often time-consuming and complex. This thesis investigates whether the time estimation process can be done with natural language processing and machine learning. Most software issues have a free-text description of what is wrong or what needs to be added. Three different word embeddings were used to represent this free-text description: bag-of-words with tf-idf weighting, word2Vec and fastText. The embeddings were then fed into two types of machine learning approaches, classification and regression. The classification was binary and can be formulated as: will the issue take more than three hours? The goal of the regression problem was to predict an actual value for the time the issue would take to complete. The classification models' performance was measured with an F1-score, and the regression model was measured with an R2-score. The best F1-score for classification was 0.748, achieved with the word2Vec word embedding and an SVM classifier. The best score for the regression analysis was achieved with the bag-of-words embedding, which reached an R2-score of 0.380. Further evaluation of the results and a comparison with actual estimates made by the company show that humans perform only slightly better than the models on the binary classification defined above: the F1-score of the employees was 0.792, a difference of just 0.044 from the best model. This thesis concludes that the models are not good enough to use in a professional setting. An F1-score of 0.748 could be useful in other settings, but the classification question in this problem is too broad to be used for a real project, and the regression results are also too low to be of any valuable use.
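A minimal scikit-learn sketch of the bag-of-words variant described: tf-idf features from the issue description feed a support vector classifier for the binary question of whether an issue will take more than three hours. The tiny issue set is invented, and the thesis's best classification result used word2Vec features rather than tf-idf.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import SVC
    from sklearn.metrics import f1_score

    # Tf-idf features from the issue description feeding a support vector
    # classifier for "will the issue take more than three hours?".
    descriptions = ["fix typo in login page label",
                    "rewrite payment service to support new provider",
                    "add missing unit test for date parser",
                    "migrate database schema and update all repositories",
                    "update copyright year in footer",
                    "implement caching layer for search endpoint"]
    over_three_hours = [0, 1, 0, 1, 0, 1]

    model = make_pipeline(TfidfVectorizer(), SVC(kernel="linear"))
    model.fit(descriptions, over_three_hours)
    print(f1_score(over_three_hours, model.predict(descriptions)))   # in-sample, for illustration only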
APA, Harvard, Vancouver, ISO, and other styles
48

Court, Cliff. "A new estimation methodology for reusable component-based software development projects." Master's thesis, University of Cape Town, 1999. http://hdl.handle.net/11427/9708.

Full text
Abstract:
Bibliography: leaves 118-121.
Estimating the duration of software development projects is a difficult task. There are many factors that can derail software projects. However, estimation forms a fundamental part of planning and costing any project and is therefore necessary. While several formal estimation methodologies exist, they all exhibit weaknesses in one form or another. The most established methodologies are based on early software development methods, and it is questionable whether they can still address more modern development methods such as reusable component-based programming. Some researchers believe not and have proposed new methodologies that attempt to achieve this. Thus what is needed is a methodology that takes into account modern component-based development practices and, as a result, provides acceptable accuracy for the software organisation. This dissertation attempts to uniquely satisfy both of these requirements.
APA, Harvard, Vancouver, ISO, and other styles
49

Scott, Hanna. "Towards a Framework for Fault and Failure Prediction and Estimation." Licentiate thesis, Karlskrona : Department of Systems and Software Engineering, School of Engineering, Blekinge Institute of Technology, 2008. http://www.bth.se/fou/Forskinfo.nsf/allfirst2/46bd1c549ac32f74c12574c100299f82?OpenDocument.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Borg, Anton. "Decision Support for Estimation of the Utility of Software and E-mail." Licentiate thesis, Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-00533.

Full text
Abstract:
Background: Computer users often need to distinguish between good and bad instances of software and e-mail messages without the aid of experts. This decision process is further complicated because the perception of spam and spyware varies between individuals. As a consequence, users can benefit from a decision support system when making informed decisions about whether an instance is good or bad. Objective: This thesis investigates approaches for estimating the utility of e-mail and software that could be used in a personalized decision support system, and examines the performance and accuracy of these approaches. Method: The scope of the research is limited to the legal grey-zone of software and e-mail messages. Experimental data have been collected from academia and industry. The research methods used in this thesis are simulation and experimentation. The processing of user input, including malicious user input, in a reputation system for software was investigated using simulations. The preprocessing optimization of end user license agreement classification was investigated using experimentation, as was the impact of social interaction data on personalized e-mail classification. Results: Three approaches were investigated that could be adapted for a decision support system. The results for the investigated reputation system suggested that the system is capable, on average, of producing a rating within ±1 of an object's correct rating. The results of the preprocessing optimization of end user license agreement classification suggested a negligible impact. The results of using social interaction information in e-mail classification suggested that accurate spam detectors can be generated from the low-dimensional social data model alone; however, spam detectors generated from combinations of the traditional and social models were more accurate. Conclusions: The results of the presented approaches suggest that it is possible to provide decision support for detecting software that might be of low utility to users. Labeling instances of software and e-mail messages that are in a legal grey-zone can assist users in avoiding instances of low utility, e.g. spam and spyware. A limitation of the approaches is that isolated implementations will yield unsatisfactory results in a real-world setting; a combination of the approaches, e.g. to determine the utility of software, could yield improved results.
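A minimal sketch of combining a traditional text model with a low-dimensional social model for spam detection, in the spirit of the abstract: tf-idf features from the message body are stacked with numeric social-interaction features before training a classifier. The messages, social features and labels are invented, and the thesis's actual feature definitions and learners may differ.

    import numpy as np
    from scipy.sparse import hstack
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression

    # Tf-idf features from the message body stacked with numeric social-interaction
    # features (prior exchanges with the sender, sender in address book).
    messages = ["win a free prize now", "meeting moved to 3 pm",
                "cheap pills online", "lunch tomorrow?"]
    social = np.array([[0, 0], [25, 1], [0, 0], [40, 1]])
    labels = [1, 0, 1, 0]                                  # 1 = spam

    text_features = TfidfVectorizer().fit_transform(messages)
    combined = hstack([text_features, social])
    clf = LogisticRegression().fit(combined, labels)
    print(clf.predict(combined))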
APA, Harvard, Vancouver, ISO, and other styles