
Dissertations / Theses on the topic 'Effort estimation'

Consult the top 50 dissertations / theses for your research on the topic 'Effort estimation.'

The full text of each publication can be downloaded as a PDF and its abstract read online whenever it is available in the metadata.


1

Tunalilar, Seckin. "Efes: An Effort Estimation Methodology." PhD thesis, METU, 2011. http://etd.lib.metu.edu.tr/upload/12613847/index.pdf.

Full text
Abstract:
The estimation of effort is at the heart of project tasks, since it is used for many purposes such as cost estimation, budgeting, monitoring, project planning, control and software investments. Researchers analyze estimation problems, propose new models and apply new techniques to improve accuracy. However, up to now there has been no comprehensive estimation methodology to guide companies in their effort estimation tasks. The effort estimation problem is not only a computational but also a managerial problem: it requires estimation goals, execution steps, applied measurement methods and updating mechanisms to be properly defined. Besides, project teams should have the motivation and responsibility to build a reliable database. If such a methodology is not defined, a common interpretation cannot be established among the software teams of the company, and variances in measurements and divergences in the collected information prevent the accumulation of sufficient historical data for building accurate models. This thesis proposes a methodology for organizations to manage and execute effort estimation processes. The approach is based on reported best practices, empirical results of previous studies and solutions to the problems and conflicts described in the literature. Five integrated processes (Data Collection, Size Measurement, Data Analysis, Calibration and Effort Estimation) are developed together with their artifacts, procedures, checklists and templates. The validation and applicability of the methodology are checked in a middle-size software company. During the validation of the methodology we also evaluated concepts such as Functional Similarity (FS) and the usage of Base Functional Components (BFC) in the effort model on a reliable dataset, in order to decide whether these subjects should be part of the methodology. In addition, this study is the first in which COSMIC has been used for Artificial Neural Network models.
2

Nabi, Mina. "A Software Benchmarking Methodology For Effort Estimation." Master's thesis, METU, 2012. http://etd.lib.metu.edu.tr/upload/12614648/index.pdf.

Full text
Abstract:
Software project managers usually use benchmarking repositories to estimate the effort, cost and duration of software development, which are then used to plan, monitor and control project activities appropriately. The precision of benchmarking repositories is therefore a critical factor in the software effort estimation process, which in turn plays a critical role in the success of a software development project. In order to construct such a precise benchmarking data repository, it is important to have well-defined benchmarking data attributes and data characteristics and to collect project data accordingly. On the other hand, studies show that the data characteristics of benchmark data sets affect how far studies based on these datasets can be generalized. The quality of a data repository depends not only on the quality of the collected data but also on how these data are collected. In this thesis, a benchmarking methodology is proposed for organizations to collect benchmarking data for effort estimation purposes. The methodology consists of three main components: benchmarking measures, benchmarking data collection processes, and a benchmarking data collection tool; results of previous studies from the literature were also used. In order to verify and validate the methodology, project data were collected in two middle-size software organizations and one small organization using the automated benchmarking data collection tool. Effort estimation models were then constructed and evaluated for these project data, and the impact of different project characteristics on the effort estimation models was inspected.
3

Usman, Muhammad. "Supporting Effort Estimation in Agile Software Development." Licentiate thesis, Karlskrona, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-10961.

Full text
Abstract:
Background: In Agile Software Development (ASD) planning is valued more than the resulting plans. Planning and estimation are carried out at multiple levels in ASD, and agile plans and estimates are frequently updated to reflect the current situation. This supports shorter release cycles and the flexibility to incorporate changing market and customer needs. Many empirical studies have been conducted to investigate effort estimation in ASD; however, the evidence on effort estimation in ASD has not been aggregated and organized. Objective: This thesis has two main objectives: first, to identify and aggregate evidence, from both literature and industry, on effort estimation in ASD; second, to support research and practice on effort estimation in ASD by organizing the identified knowledge. Method: In this thesis we conducted a Systematic Literature Review (SLR), a systematic mapping study, a questionnaire-based industrial survey and an interview-based survey. Results: The SLR and survey results showed that agile teams estimate effort mostly during release and iteration planning, using techniques based on experts' subjective assessments. During effort estimation, team-related cost drivers, such as team members' expertise, are considered important. The results also highlighted that implementation and testing are the only activities accounted for in the effort estimates of most agile teams. Our mapping study identified that taxonomies in software engineering (SE) are mostly designed and presented in an ad hoc manner. To fill this gap we updated an existing method for designing taxonomies in a systematic way. The method was then used to design a taxonomy of effort estimation in ASD, using the evidence identified in our SLR and survey as input. Conclusions: The proposed taxonomy was evaluated by characterizing effort estimation cases of selected agile projects reported in the literature. The evaluation found that the reporting of the selected studies lacks information related to the context and the predictors used during effort estimation in ASD. The taxonomy can be used to report effort estimation studies in ASD consistently, facilitating identification, aggregation and analysis of the evidence. The proposed taxonomy was also used to characterize the effort estimation activity of agile teams in three different software companies, and the interviewed agile practitioners found it useful for documenting important effort estimation knowledge, which otherwise remains tacit in most cases.
4

Vukovic, Divna, and Cecilia Wester. "Staff Prediction Analysis : Effort Estimation In System Test." Thesis, Blekinge Tekniska Högskola, Institutionen för programvaruteknik och datavetenskap, 2001. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-1739.

Full text
Abstract:
This master's thesis was carried out in 2001 at Blekinge Institute of Technology and Symbian, a software company in Ronneby, Sweden. The purpose of the thesis is to find a suitable prediction and estimation model for test effort. To do this, we have studied the state of the art in cost/effort estimation and fault prediction. The conclusion of this thesis is that it is hard to make a general proposal that is applicable to all organisations. For Symbian we have proposed a model based on use and test cases to predict the test effort.
5

Sarro, Federica. "Search-based approaches for software development effort estimation." Doctoral thesis, Universita degli studi di Salerno, 2015. http://hdl.handle.net/10556/1969.

Full text
Abstract:
Effort estimation is a critical activity for planning and monitoring software project development and for delivering the product on time and within budget. Significant over- or under-estimates expose a software project to several risks. Under-estimates can lead to the addition of manpower to a late software project, making the project even later (Brooks's Law), or to the cancellation of activities such as documentation and testing, negatively affecting software quality and maintainability. Thus, the competitiveness of a software company heavily depends on the ability of its project managers to accurately predict in advance the effort required to develop software systems. However, several challenges exist in making accurate estimates: the estimation is needed early in the software lifecycle, when little information about the project is available, and several factors can affect project effort, and these factors are usually specific to different production contexts. Several techniques have been proposed in the literature to support project managers in estimating software project development effort. In recent years the use of Search-Based (SB) approaches has been suggested for effort estimation. These approaches include a variety of meta-heuristics, such as local search techniques (e.g., Hill Climbing, Tabu Search, Simulated Annealing) and Evolutionary Algorithms (e.g., Genetic Algorithms, Genetic Programming). The idea underlying the use of such techniques is to reformulate software engineering problems as search or optimization problems whose goal is to find the most appropriate solutions conforming to some adequacy criteria (i.e., problem goals). In particular, the use of SB approaches in the context of effort estimation is twofold: they can be exploited to build effort estimation models or to enhance the use of existing effort estimation techniques. The uses of SB approaches for effort estimation reported in the literature have provided promising results that encourage further investigation, but they can be considered preliminary studies: the capabilities of these approaches were not fully exploited, and the empirical analyses employed did not consider the more recent recommendations on how to carry out this kind of empirical assessment in the effort estimation and SBSE contexts. The main aim of this PhD dissertation is to provide insight into the use of SB techniques for effort estimation, trying to highlight the strengths and weaknesses of these approaches for both of the uses mentioned above. [edited by Author]
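
As a rough illustration of the search-based idea described above (and not of Sarro's actual experimental setup), the sketch below uses a simple hill-climbing search to calibrate the coefficients of a power-law effort model, effort = a * size^b, against a small invented project dataset, minimising the mean magnitude of relative error (MMRE); the dataset, step sizes and stopping rule are all assumptions made for illustration.

```python
# Minimal sketch: hill climbing to calibrate effort = a * size**b (hypothetical data).
import random

projects = [(120, 8.0), (300, 20.0), (45, 3.5), (500, 38.0)]  # (size in FP, effort in person-months)

def mmre(a, b):
    # Mean Magnitude of Relative Error of the model a * size**b over the dataset.
    errors = [abs(eff - a * size ** b) / eff for size, eff in projects]
    return sum(errors) / len(errors)

def hill_climb(steps=5000, seed=1):
    random.seed(seed)
    a, b = 1.0, 1.0                      # initial solution
    best = mmre(a, b)
    for _ in range(steps):
        na, nb = a + random.uniform(-0.05, 0.05), b + random.uniform(-0.02, 0.02)
        score = mmre(na, nb)
        if score < best:                 # accept only improving neighbours
            a, b, best = na, nb, score
    return a, b, best

if __name__ == "__main__":
    a, b, best = hill_climb()
    print(f"effort = {a:.3f} * size^{b:.3f}  (MMRE = {best:.3f})")
```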
6

Marshall, Ian Mitchell. "Evaluating courseware development effort estimation measures and models." Thesis, University of Abertay Dundee, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.318946.

Full text
7

Azzeh, Mohammad Y. A. "Analogy-based software project effort estimation : contributions to projects similarity measurement, attribute selection and attribute weighting algorithms for analogy-based effort estimation." Thesis, University of Bradford, 2010. http://hdl.handle.net/10454/4442.

Full text
Abstract:
Software effort estimation by analogy is a viable alternative to other estimation techniques, and in many cases researchers have found that it outperforms other estimation methods in terms of accuracy and practitioners' acceptance. However, the overall performance of analogy-based estimation depends on two major factors: the similarity measure and attribute selection and weighting. Current similarity measures, such as nearest-neighbourhood techniques, have been criticized for inadequacies related to attribute relevancy, noise and uncertainty, in addition to the problem of handling categorical attributes. This research focuses on improving the efficiency and flexibility of analogy-based estimation to overcome these inadequacies. In particular, this thesis proposes two new approaches to model and handle uncertainty in the similarity measurement method and, most importantly, to reflect the structure of the dataset in the similarity measurement using fuzzy modeling based on the Fuzzy C-means algorithm. The first proposed approach, the Fuzzy Grey Relational Analysis method, employs combined techniques from Fuzzy set theory and Grey Relational Analysis to improve local and global similarity measures and to tolerate the imprecision associated with using different data types (continuous and categorical). The second proposed approach uses Fuzzy numbers and their concepts to develop a practical yet efficient approach to support analogy-based systems, especially at the early phase of software development; specifically, we propose a new similarity measure and adaptation technique based on Fuzzy numbers. We also propose a new attribute subset selection algorithm and attribute weighting technique based on the hypothesis of analogy-based estimation that projects that are similar in terms of attribute values are also similar in terms of effort values, using row-wise Kendall rank correlation between the similarity matrix based on project effort values and the similarity matrix based on project attribute values. A review of related software engineering studies revealed that existing attribute selection techniques (such as brute-force and heuristic algorithms) are restricted by the choice of performance indicators (such as the Mean Magnitude of Relative Error and the Prediction Performance Indicator) and are computationally far more intensive. The proposed algorithms provide a sound statistical basis and justification for their procedures. The performance of the proposed approaches has been evaluated using real industrial datasets, and results and conclusions from a series of comparative studies with the conventional estimation-by-analogy approach on the available datasets are presented. Studies were also carried out to statistically investigate the differences between predictions generated by our approaches and those generated by the most popular techniques, such as conventional analogy estimation, neural networks and stepwise regression. The results indicate that the two proposed approaches have the potential to deliver comparable, if not better, accuracy than the compared techniques, and that Grey Relational Analysis tolerates the uncertainty associated with using different data types. As well as the original contributions of the thesis, a number of directions for further research are presented. Most chapters of this thesis have been disseminated in international journals and highly refereed conference proceedings.
8

Andersson, Veronika, and Hanna Sjöstedt. "Improved effort estimation of software projects based on metrics." Thesis, Linköping University, Department of Electrical Engineering, 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-5269.

Full text
Abstract:

Saab Ericsson Space AB develops space products at a predetermined price. Since the price is fixed, it is crucial to have a reliable prediction model to estimate the effort needed to develop the product. Software effort estimation is difficult in general, and at the software department this is a problem.

By analyzing metrics collected from former projects, different prediction models are developed to estimate the number of person hours a software project will require. Models for predicting the effort before a project begins are developed first; only a few variables are known at this stage of a project. The developed models are compared to the model currently used at the company. Linear regression models improve the estimation error by nine percentage points, and nonlinear regression models improve the result even more. The model used today is also calibrated to improve its predictions, and a principal component regression model is developed as well. In addition, a model to improve the estimate during an ongoing project is developed; this is a new approach, and comparison with the first estimate is the only evaluation.

The result is an improved prediction model. Several models perform better than the one used today. In the discussion, positive and negative aspects of the models are debated, leading to the choice of a model recommended for future use.
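
The abstract mentions principal component regression among the candidate models; the sketch below shows, on assumed data (the Saab metrics are not public here), how such a model is typically assembled with scikit-learn: standardise the metrics, project them onto a few principal components, and regress effort on those components.

```python
# Principal component regression sketch for effort prediction (hypothetical metrics, not Saab data).
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(42)
X = rng.normal(size=(30, 5))                                       # project metrics known up front
y = 400 + 120 * X[:, 0] + 60 * X[:, 1] + rng.normal(0, 20, 30)     # effort in person-hours

pcr = make_pipeline(StandardScaler(), PCA(n_components=2), LinearRegression())
pcr.fit(X, y)
print("Predicted person-hours for a new project:", pcr.predict(X[:1])[0].round(1))
```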

9

Schofield, Christopher. "An empirical investigation into software effort estimation by analogy." Thesis, Bournemouth University, 1998. http://eprints.bournemouth.ac.uk/411/.

Full text
Abstract:
Most practitioners recognise the important part accurate estimates of development effort play in the successful management of major software projects. However, it is widely recognised that current estimation techniques are often very inaccurate, while studies (Heemstra 1992; Lederer and Prasad 1993) have shown that effort estimation research is not being effectively transferred from the research domain into practical application. Traditionally, research has been almost exclusively focused on the advancement of algorithmic models (e.g. COCOMO (Boehm 1981) and SLIM (Putnam 1978)), where effort is commonly expressed as a function of system size. However, in recent years there has been a discernible movement away from algorithmic models with non-algorithmic systems (often encompassing machine learning facets) being actively researched. This is potentially a very exciting and important time in this field, with new approaches regularly being proposed. One such technique, estimation by analogy, is the focus of this thesis. The principle behind estimation by analogy is that past experience can often provide insights and solutions to present problems. Software projects are characterised in terms of collectable features (such as the number of screens or the size of the functional requirements) and stored in a historical case base as they are completed. Once a case base of sufficient size has been cultivated, new projects can be estimated by finding similar historical projects and re-using the recorded effort. To make estimation by analogy feasible it became necessary to construct a software tool, dubbed ANGEL, which allowed the collection of historical project data and the generation of estimates for new software projects. A substantial empirical validation of the approach was made encompassing approximately 250 real historical software projects across eight industrial data sets, using stepwise regression as a benchmark. Significance tests on the results accepted the hypothesis (at the 1% confidence level) that estimation by analogy is a superior prediction system to stepwise regression in terms of accuracy. A study was also made of the sensitivity of the analogy approach. By growing project data sets in a pseudo time-series fashion it was possible to answer pertinent questions about the approach, such as, what are the effects of outlying projects and what is the minimum data set size? The main conclusions of this work are that estimation by analogy is a viable estimation technique that would seem to offer some advantages over algorithmic approaches including, improved accuracy, easier use of categorical features and an ability to operate even where no statistical relationships can be found.
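
To make the analogy mechanism concrete, here is a minimal sketch of estimation by analogy in the spirit described above; it is not the ANGEL tool, and the case base, features and distance function are assumptions chosen for illustration.

```python
# Minimal estimation-by-analogy sketch (hypothetical case base, not the ANGEL tool).
import math

# Historical case base: feature vector (screens, entities, team size) -> effort in person-days.
case_base = [((12, 8, 3), 140.0), ((30, 20, 5), 420.0), ((7, 5, 2), 80.0), ((22, 15, 4), 300.0)]

def normalize(vectors):
    # Scale each feature to [0, 1] so no single feature dominates the distance.
    lo = [min(v[i] for v in vectors) for i in range(len(vectors[0]))]
    hi = [max(v[i] for v in vectors) for i in range(len(vectors[0]))]
    return lambda v: tuple((x - l) / (h - l) if h > l else 0.0 for x, l, h in zip(v, lo, hi))

def estimate(new_project, k=2):
    scale = normalize([v for v, _ in case_base] + [new_project])
    target = scale(new_project)
    nearest = sorted(case_base, key=lambda c: math.dist(scale(c[0]), target))[:k]
    return sum(effort for _, effort in nearest) / k   # reuse the mean effort of the k analogues

print(f"Estimated effort: {estimate((18, 12, 3)):.1f} person-days")
```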
10

Kanneganti, Alekhya. "Using Ensemble Machine Learning Methods in Estimating Software Development Effort." Thesis, Blekinge Tekniska Högskola, Institutionen för datavetenskap, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-20691.

Full text
Abstract:
Background: Software development effort estimation is the process of estimating the effort required to develop a software project within a minimal budget. Estimating effort includes interpretation of the required manpower, resources, time and schedule. Project managers are responsible for estimating the required effort, and a model that can predict software development effort efficiently comes in handy and acts as a decision support system for project managers, enhancing the precision of effort estimates. The context of this study is therefore to increase the efficiency of software development effort estimation. Objective: The main objective of this thesis is to identify an effective ensemble method and to build and implement it for estimating software development effort. In addition, parameter tuning is implemented to improve the performance of the model, and the results of the developed model are compared with existing models. Method: Two research methods were adopted. Initially, a literature review was conducted to gain knowledge of the existing studies, machine learning techniques, datasets and ensemble methods previously used in estimating software development effort. Then a controlled experiment was conducted to build an ensemble model and to evaluate its performance, determining whether the developed model performs better than the existing models. Results: After conducting the literature review and collecting evidence, we decided to build and implement a stacked generalization ensemble method, using individual machine learning techniques as base learners: Support Vector Regressor (SVR), K-Nearest Neighbors Regressor (KNN), Decision Tree Regressor (DTR), Linear Regressor (LR), Multi-Layer Perceptron Regressor (MLP), Random Forest Regressor (RFR), Gradient Boosting Regressor (GBR), AdaBoost Regressor (ABR) and XGBoost Regressor (XGB). We also used Randomized Parameter Optimization and the SelectKBest function for feature selection. The COCOMO81, MAXWELL, ALBRECHT and DESHARNAIS datasets were used. The results of the experiment show that the developed ensemble model performs best for three out of four datasets. Conclusion: After evaluating and analyzing the results obtained, we conclude that the developed model works well with datasets that have continuous, numeric values, and that the developed ensemble model outperforms other existing models when applied to the COCOMO81, MAXWELL and ALBRECHT datasets.
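
A minimal sketch of stacked generalization with scikit-learn is shown below; it is not the thesis pipeline (the COCOMO81/MAXWELL/ALBRECHT/DESHARNAIS data, the full set of base learners and the Randomized Parameter Optimization step are omitted) and uses synthetic data plus a subset of the base regressors named in the abstract.

```python
# Minimal stacked-generalization sketch with scikit-learn (illustrative, not the thesis pipeline).
import numpy as np
from sklearn.ensemble import StackingRegressor, RandomForestRegressor, GradientBoostingRegressor
from sklearn.neighbors import KNeighborsRegressor
from sklearn.svm import SVR
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

# Placeholder data: rows are projects, columns are numeric cost drivers, y is effort.
rng = np.random.default_rng(0)
X = rng.uniform(0, 5, size=(60, 6))
y = 2.5 * X[:, 0] + 1.2 * X[:, 1] ** 2 + rng.normal(0, 0.5, size=60)

stack = StackingRegressor(
    estimators=[("svr", SVR()), ("knn", KNeighborsRegressor()),
                ("rf", RandomForestRegressor(random_state=0)),
                ("gbr", GradientBoostingRegressor(random_state=0))],
    final_estimator=LinearRegression(),      # meta-learner combines the base predictions
)
scores = cross_val_score(stack, X, y, cv=5, scoring="neg_mean_absolute_error")
print("Mean absolute error per fold:", -scores)
```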
11

Hughes, Robert T. "An empirical investigation into the estimation of software development effort." Thesis, University of Brighton, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.362219.

Full text
Abstract:
Any guidance that might help to reduce the problems of accurately estimating software development effort could assist software producers in setting more realistic budgets for software projects. This investigation attempted to contribute to this by documenting some of the practical problems of introducing structured effort estimation models at a United Kingdom site of an international supplier of telephone switching software. The theory of effort modelling was compared with actual practice by examining how the estimating experts at the telephone switching software producer currently carried out estimating. Two elements of the estimation problem emerged: judging the size of the job to be done and gauging the productivity of the development environment. Expert opinion was particularly important to the initial process, especially when existing software was being enhanced. The study then identified development effort drivers and customised effort models applicable to real-time telecommunications applications. Many practical difficulties were found concerning the methods actually used to record past project data, although the issues surrounding these protocols appear to be rarely dealt with explicitly in the research literature. The effectiveness of the models was trialled by forecasting the effort for some new projects and then comparing these estimates with the actual effort. The key research outcomes were, firstly, the identification and validation of a set of relevant functional effort drivers applicable in a real-time telecommunications software development environment and the building of an effective effort model, and, secondly, the evaluation of alternative prediction approaches, including analogy or case-based reasoning. While analogy was a useful tool, some methods of implementing analogy were theoretically flawed and did not consistently outperform 'traditional' model building techniques such as Least Squares Regression (LSR) in the environment under study. This study would, however, support analogy as a complementary technique to algorithmic modelling.
12

Sapre, Alhad Vinayak. "Feasibility of Automated Estimation of Software Development Effort in Agile Environments." The Ohio State University, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=osu1345479584.

Full text
13

Rahhal, Silas. "An effort estimation model for implementing ISO 9001 in software organizations." Thesis, McGill University, 1995. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=23290.

Full text
Abstract:
The adoption of software development principles and methodologies embodying best practices and standards is essential to achieving quality software products. Many organizations worldwide have implemented quality management systems that comply with the requirements of the ISO 9001 standard, or similar schemes, to ensure product quality. Meeting the requirements of ISO 9000 can be costly in time, effort and money; the effort primarily depends on the size of the organization and the status of its quality management system. The focus of this thesis is the development of an effort estimation model for the implementation of ISO 9001 in software organizations. In determining this effort, a survey of 1190 registered organizations was conducted in 1995, of which 63 were software organizations. A statistical regression model for predicting the effort was developed and validated. The proposed effort estimation model forms a foundation for building and comparing related effort estimation models for ISO 9000 and other process improvement frameworks.
14

Britto, Ricardo. "Knowledge Classification for Supporting Effort Estimation in Global Software Engineering Projects." Licentiate thesis, Blekinge Tekniska Högskola, Institutionen för programvaruteknik, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-10520.

Full text
Abstract:
Background: Global Software Engineering (GSE) has become a widely applied operational model for the development of software systems; it can increase profits and decrease time-to-market. However, there are many challenges associated with developing software in a globally distributed fashion, and there is evidence that these challenges affect many processes related to software development, such as effort estimation. To the best of our knowledge, there are no empirical studies that gather evidence on effort estimation in the GSE context, and there is no common terminology for classifying GSE scenarios with a focus on effort estimation. Objective: The main objective of this thesis is to support effort estimation in the GSE context by providing a taxonomy to classify the existing knowledge in this field. Method: A systematic literature review (to identify and analyze the state of the art), a survey (to identify and analyze the state of the practice), a systematic mapping study (to identify practices for designing software engineering taxonomies), and a literature survey (to complement the states of the art and practice) were the methods employed in this thesis. Results: The results on the states of the art and practice show that the effort estimation techniques employed in the GSE context are the same techniques used in the collocated context. It was also identified that global aspects, e.g. temporal, geographical and socio-cultural distances, are accounted for as cost drivers, although it is not clear how they are measured. As a result of the mapping study, we reported a method that can be used to design new SE taxonomies. These results were combined to extend and specialize an existing GSE taxonomy to make it suitable for effort estimation. The usage of the specialized GSE effort estimation taxonomy was illustrated by classifying 8 finished GSE projects, and the results show that the specialized taxonomy proposed in this thesis is comprehensive enough to classify GSE projects with a focus on effort estimation. Conclusions: The taxonomy presented in this thesis will help researchers and practitioners to report new research on effort estimation in the GSE context; they will be able to gather evidence, compare new studies and find new gaps more easily. The findings from this thesis show that more research must be conducted on effort estimation in the GSE context; for example, the way the cost drivers are measured should be further investigated, and further research is needed to clarify the role and impact of sourcing strategies on the accuracy of effort estimates. Finally, we believe that it is possible to design an instrument, based on the specialized GSE effort estimation taxonomy, that helps practitioners to perform the effort estimation process in a way tailored to the specific needs of the GSE context.
15

Leinonen, J. (Juho). "Evaluating software development effort estimation process in agile software development context." Master's thesis, University of Oulu, 2016. http://urn.fi/URN:NBN:fi:oulu-201605221862.

Full text
Abstract:
This thesis studied effort estimation in software development, focusing on the task-level estimation done in Scrum teams. The thesis was done at Nokia Networks, and the motivation for the topic came from the poor estimation accuracy that has been found to be present in software development. The aim of this thesis was to provide an overview of the current state of the art in effort estimation, to survey the current practices in the Scrum teams working on the LTE L2 software component at Nokia Networks Oulu, and then to present suggestions for improvement based on the findings. On the basis of the literature review, three main categories of effort estimation methods were found: expert estimation, algorithmic models and machine learning. Universally there did not seem to be a single best method; instead, the differences come from the context of use. Algorithmic models and machine learning require data sets, whereas expert estimation methods rely on the previous experience and intuition of the experts. While model-based methods have received a lot of research attention, the industry has largely relied on expert estimation. The current state of effort estimation at Nokia Networks was studied by conducting a survey. This survey was built on previous survey studies found through a systematic literature review; the questions found in those studies were formulated into a questionnaire, which was then used to survey the current effort estimation practices of the participating teams. 41 people out of 100 in the participating teams responded. The survey results showed that, like much of the software industry, the teams in LTE L2 relied on expert estimation methods. Most respondents had encountered overruns in the last sprint, and the most frequently given reason was that testing-related effort estimation is hard. Forgotten subtasks were encountered frequently, and requirements were found to be both unclear and to change often. Very few respondents had had any training in effort estimation, and since there were no common practices for effort data collection, it was mostly not done. By analyzing the survey results and reflecting on them against previous research, five suggestions for improvement were found: training in effort estimation, improving the information used during effort estimation by collaborating with specification personnel, improving testing-related effort estimation by splitting acceptance testing into its own tasks, collecting and using effort data, and using Planning Poker as an effort estimation method, as it fits the estimation context of the teams. The study sheds light on how effort estimation is done in the software industry. Another contribution is the improvement suggestions, which could potentially improve the situation in the teams that participated in the survey. A third contribution is the questionnaire built during this study, as it could potentially be used to survey the current state of effort estimation in other contexts as well.
16

Corona, Erika. "Web Framework Points: an Effort Estimation Methodology for Web Application Development." Doctoral thesis, Università degli Studi di Cagliari, 2013. http://hdl.handle.net/11584/266242.

Full text
Abstract:
Software effort estimation is one of the most critical components of a successful software project: completing the project on time and within budget is the classic challenge for all project managers. However, the predictions project managers make about their projects are often inexact: software projects need, on average, 30-40% more effort than estimated. Research on software development effort and cost estimation has been abundant and diversified since the end of the Seventies, and the topic is still very much alive, as shown by the numerous works in the literature. During these three years of research activity, I had the opportunity to study in depth and experiment with some of the main software effort estimation methodologies in the literature. In particular, I focused my research on Web effort estimation. As stated by many authors, the existing models for classic software applications are not well suited to measuring the effort of Web applications, which unfortunately are not exempt from cost and time overruns, just like traditional software projects. Initially, I compared the effectiveness of Albrecht's classic Function Points (FP) and Reifer's Web Objects (WO) metrics in estimating development effort for Web applications, in the context of an Italian software company. I tested these metrics on a dataset of 24 projects provided by the company between 2003 and 2010, comparing the estimates with the real effort of each completed project using the MRE (Magnitude of Relative Error) measure. The experimental results showed a high estimation error when using the WO metric, which proved to be more effective than the FP metric in only two cases. In the context of this first work, it became evident that effort estimation depends not only on functional size measures: other factors have to be considered, such as model accuracy and other challenges specific to Web applications, although functional size remains the input that most influences the final results. For this reason, I revised the WO methodology, creating the RWO methodology, and applied it to the same dataset of projects, comparing the results to those gathered by applying the FP and WO methods. The experimental results showed that the RWO method reaches effort prediction results that are comparable to, and in 4 cases even better than, the FP method. Motivated by the dominant use of Content Management Frameworks (CMFs) in Web application development and the inadequacy of the RWO method when used with the latest Web application development tools, I finally chose to focus my research on a new effort estimation methodology for Web applications developed with a CMF. I proposed a new estimation methodology, Web CMF Objects, in which new key elements for analysis and planning were identified; they make it possible to define every important step in the development of a Web application using a CMF. Following the RWO approach, the estimated effort of a Web project stems from the sum of all elements, each weighted by its own complexity. I tested the whole methodology on 9 projects provided by three different Italian software companies, comparing the effort estimate to the actual, final effort of each project in man-days. I then compared the estimates obtained from the Web CMF Objects methodology with those obtained from the respective effort estimation methodologies of the three companies, getting excellent results: a Pred(0.25) value of 100% for the Web CMF Objects methodology. Recently, I completed the presentation and assessment of the methodology, upgrading the cost model used to calculate the effort estimate, and renamed it the Web Framework Points methodology. I tested the updated methodology on 19 projects provided by three software companies, getting good results: a Pred(0.25) value of 79%. The aim of my research is to contribute to reducing the estimation error in software projects developed with Content Management Frameworks, with the purpose of making the Web Framework Points methodology a useful tool for software companies.
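
The accuracy indicators used throughout this abstract, MRE and Pred(0.25), are easy to state in code; the sketch below shows how they are typically computed, using invented estimate/actual pairs rather than the thesis data.

```python
# How MRE and Pred(0.25) are typically computed (hypothetical values, not the thesis data).
actual    = [210.0, 95.0, 400.0, 120.0, 60.0]   # actual effort, e.g. in man-days
estimated = [180.0, 110.0, 430.0, 150.0, 58.0]  # corresponding estimates

mre = [abs(a - e) / a for a, e in zip(actual, estimated)]       # Magnitude of Relative Error
mmre = sum(mre) / len(mre)                                      # mean MRE over all projects
pred_25 = sum(1 for r in mre if r <= 0.25) / len(mre)           # share of estimates within 25%

print(f"MMRE = {mmre:.3f}, Pred(0.25) = {pred_25:.0%}")
```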
17

Arundachawat, Panumas. "The development of methods to estimate and reduce design rework." Thesis, Cranfield University, 2012. http://dspace.lib.cranfield.ac.uk/handle/1826/7932.

Full text
Abstract:
Design rework includes unnecessary repetition of design tasks to correct design problems. Resolving design matters in advance, through an in-depth understanding of design planning and rework issues and the development of effective predictive tools, could contribute to higher business profit margins and a faster product time-to-market. This research aims to develop three novel, structured methods to predict design rework occurrence and effort at the very early design stage, which may otherwise remain undiscovered until the testing and refinement phase. The major contribution of the Design Rework Probability of Occurrence Estimation method, DRePOE, is the development of design rework drivers. The developed drivers were synthesised from interview results, direct observations and archival records obtained from eleven world-class aerospace and automotive component manufacturers. To predict the probability of occurrence, the individual score of each driver is compared against historical records using an analogy-based method. The Design Rework Effort Estimation method, DREE, was developed to interconnect functional structures and identify failure relationships among components; a significant contribution of the DREE method is its capability to assess design rework effort at the component level under the worst-case scenario. Next, a Prioritisation Design by Design Rework Effort Based method, PriDDREB, was developed to forecast the maximum design rework given the constraints; this method provides a tool to determine and prioritise the components that may require significant design rework effort. The three methods were validated with an automotive water pump, a turbocharger and a McPherson strut suspension system in accordance with the validation square method. It is demonstrated that the DRePOE, DREE and PriDDREB methods can offer the product design team a means to predict, at the very early design phase, the probability of design rework occurrence and to assess the effort required during the testing and refinement phase.
18

Roa-Ureta, Ruben. "Estimation of fish biomass indices from catch-effort data: a likelihood approach." University of Otago. Department of Mathematics & Statistics, 2009. http://adt.otago.ac.nz./public/adt-NZDU20090818.150508.

Full text
Abstract:
Two dimensional stocks of fish can be assessed with methods that mimic the analysis of research survey data but that use commercial catch-effort data. This finite population approach has scarcely been used in fisheries science though it brings about very large sample sizes of local fish density with models of only moderate levels of complexity. The extracted information about the status of the stock can be interpreted as biomass indices. Statistical inference on finite populations has been the locus of a highly specialized branch of sampling-distribution inference, unique because observable variables are not considered as random variables. If statistical inference is defined as "the identification of distinct sets of plausible and implausible values for unobserved quantities using observations and probability theory" then it is shown that Godambe's paradox implies that the classical finite populations approach is inherently contradictory as a technique of statistical inference. The demonstration is facilitated by the introduction of an extended canonical form of an experiment of chance, that apart from the three components identified by Birnbaum, also contains the time at which the experiment is performed. Realization of the time random variable leaves the likelihood function as sole data-based mathematical tool for statistical inference, in contradiction with sampling-distribution inference and in agreement with direct-likelihood and Bayesian inference. A simple mathematical model is introduced for biomass indices in the spatial field defined by the fishing grounds. It contains three unknown parameters, the natural mortality rate, the probability of observing the stock in the area covered by the fishing grounds, and mean fish density in the sub-areas where the stock was present. A new theory for the estimation of mortality rates is introduced, using length frequency data, that is based on the population ecology analogue of Hamilton-Jacobi theory of classical mechanics. The family of equations require estimations of population growth, individual growth, and recruitment pattern. Well known or new techniques are used for estimating parameters of these processes. Among the new techniques, a likelihood-based geostatistical model to estimate fish density is proposed and is now in use in fisheries science (Roa-Ureta and Niklitschek, 2007, ICES Journal of Marine Science 64:1723-1734), as well as a new method to estimate individual growth parameters (Roa-Ureta, In Press, Journal of Agricultural, Biological, and Environmental Statistics). All inference is done only using likelihood functions and approximations to likelihood functions, as required by the Strong Likelihood principle and the direct-likelihood school of statistical inference. The statistical model for biomass indices is a hierarchical model with several sources of data, hyperparameters, and nuisance parameters. Even though the level of complexity is not low, a full Bayesian formulation is not necessary. Physical factors, mathematical manipulation, profile likelihoods and estimated likelihoods are used for the elimination of nuisance parameters. Marginal normal and multivariate normal likelihood functions, as well as the functional invariance property, are used for the hierarchical structure of estimation. In this manner most sources of information and uncertainty in the data are carried over up the hierarchy to the estimation of the biomass indices.
19

Awan, Nasir Majeed, and Adnan Khadem Alvi. "Predicting software test effort in iterative development using a dynamic Bayesian network." Thesis, Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-6042.

Full text
Abstract:
It is important to manage iterative projects in a way that maximizes quality and minimizes cost. To achieve high quality, accurate project estimates are of high importance. It is challenging to predict the effort required to perform test activities in iterative development: if testers put extra effort into testing, the schedule might be delayed, whereas if they spend less effort, quality could be affected. Currently there is no model for test effort prediction in iterative development that overcomes such challenges. This thesis introduces and validates a dynamic Bayesian network to predict test effort in iterative software development. The proposed framework is evaluated in a number of ways: first, the framework's behavior is observed by considering different parameters and performing an initial validation; second, the framework is validated by incorporating data from two industrial projects. The accuracy of the results has been verified through different prediction accuracy measurements and statistical tests, and the verification confirmed that the framework is able to predict test effort in iterative projects accurately.
20

Ozkaya, Eren Aysegul. "A Method To Decrease Common Problems In Effort Data Collection In The Software Industry." Master's thesis, METU, 2012. http://etd.lib.metu.edu.tr/upload/12614691/index.pdf.

Full text
Abstract:
Efficient project planning and project management are crucial to completing software projects on time and according to requirements. The most critical stage in project planning is the estimation of software size, time and budget. In this stage, effort data is used for benchmarking data sets, effort estimation, and project monitoring and control. However, there are problems related to effort data collection in the software industry. In this thesis, a pilot study and a survey study were conducted to observe common effort data collection practices and problems in the industry, and the results were analyzed. These problems are explained in terms of tool, process and people factors, and suggested solutions are presented for them. In accordance with the findings, a method and a tool that help to provide more accurate data were developed. A case study was performed in order to validate the method and the applicability of the tool in the industry.
21

Aranha, Eduardo Henrique da Silva. "Estimating test execution effort based on test specifications." Universidade Federal de Pernambuco, 2009. https://repositorio.ufpe.br/handle/123456789/1406.

Full text
Abstract:
In competitive markets such as the mobile phone market, software companies that release low-quality products can quickly lose their customers. To avoid this problem, these companies must ensure that the quality of their products meets their customers' expectations. In this context, testing is one of the most widely used activities for improving software quality. Moreover, the outcome of the testing activity is considered so important that in many cases it is preferable to allocate teams exclusively to testing activities. These test teams must be able to estimate the effort required to carry out their activities within the schedule, or to request more resources or negotiate deadlines when necessary. In practice, the consequences of poor estimates are costly for the organization: reduced scope, delayed deliveries or overtime work. The impact of these consequences is even greater in the case of manual test execution. Aiming at a better way of estimating the manual execution effort of functional test cases, this research proposes and validates a measure of test size and execution complexity based on the test specifications themselves, as well as a measurement method for the proposed metric. In addition, several case studies, a survey and experiments were carried out to evaluate the impact of this work. During these studies, we observed a significant improvement provided by our approach in the accuracy of manual test execution effort estimates. We also identified cost factors related to manual test execution activities using expert judgment. The effect of these factors was investigated through controlled experiments, in which we found that only some of the identified factors had a significant effect. Finally, several supporting tools were developed during this research, including the automation of test execution effort estimates from test specifications written in natural language.
22

Khan, Khalid. "The Evaluation of Well-known Effort Estimation Models based on Predictive Accuracy Indicators." Thesis, Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-4778.

Full text
Abstract:
Accurate and reliable effort estimation is still one of the most challenging processes in software engineering. There have been a number of attempts to develop cost estimation models; however, the evaluation of the accuracy and reliability of those models has gained interest in the last decade. A model can be finely tuned to specific data, but the issue of selecting the most appropriate model remains. A model's predictive accuracy is determined by comparing various accuracy measures, and the model with the minimum relative error is considered the best fit; the difference in predictive accuracy needs to be statistically significant before a model can be declared the best fit. This practice has evolved into model evaluation: a model's predictive accuracy indicators need to be statistically tested before deciding to use that model for estimation. The aim of this thesis is to statistically evaluate well-known effort estimation models according to their predictive accuracy indicators using two new approaches: bootstrap confidence intervals and permutation tests. In this thesis, the significance of the differences between various accuracy indicators was empirically tested on projects obtained from the International Software Benchmarking Standards Group (ISBSG) data set. We selected projects with Unadjusted Function Points (UFP) of quality A. Analysis of Variance (ANOVA) and regression were then used to form the Least Squares (LS) set, and Estimation by Analogy (EbA) was used to form the EbA set. Stepwise ANOVA was used to form the parametric model, and the k-NN algorithm was employed to obtain analogous projects for effort estimation in EbA. It was found that estimation reliability increased with statistical pre-processing of the data; moreover, the significance of the accuracy indicators was tested not only with standard statistics but also with the help of more complex inferential statistical methods. The decision to select the non-parametric methodology (EbA) for generating project estimates is thus not made by chance but is statistically supported.
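
A minimal sketch of the bootstrap idea applied to effort estimation accuracy is shown below; the per-project MRE values are invented (not the ISBSG projects used in the thesis), and the interval construction is the plain percentile bootstrap rather than whatever variant the thesis adopts.

```python
# Bootstrap confidence interval for the difference in MMRE between two techniques
# (hypothetical per-project MRE values, not the ISBSG data used in the thesis).
import random

mre_ls  = [0.12, 0.40, 0.25, 0.55, 0.18, 0.33, 0.60, 0.22]  # Least Squares regression
mre_eba = [0.10, 0.35, 0.30, 0.45, 0.15, 0.28, 0.50, 0.20]  # Estimation by Analogy

def mean(xs):
    return sum(xs) / len(xs)

def bootstrap_diff_ci(a, b, reps=10000, alpha=0.05, seed=7):
    random.seed(seed)
    diffs = []
    for _ in range(reps):
        ra = [random.choice(a) for _ in a]       # resample each sample with replacement
        rb = [random.choice(b) for _ in b]
        diffs.append(mean(ra) - mean(rb))
    diffs.sort()
    return diffs[int(alpha / 2 * reps)], diffs[int((1 - alpha / 2) * reps) - 1]

lo, hi = bootstrap_diff_ci(mre_ls, mre_eba)
print(f"95% bootstrap CI for MMRE(LS) - MMRE(EbA): [{lo:.3f}, {hi:.3f}]")
# If the interval excludes 0, the accuracy difference is unlikely to be due to chance alone.
```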
23

Koch, Stefan. "Effort Modeling and Programmer Participation in Open Source Software Projects." Department für Informationsverarbeitung und Prozessmanagement, WU Vienna University of Economics and Business, 2005. http://epub.wu.ac.at/1494/1/document.pdf.

Full text
Abstract:
This paper analyses and develops models for programmer participation and effort estimation in open source software projects. This has not yet been a centre of research, although any results would be of high importance for assessing the efficiency of this model and for various decision-makers. In this paper, a case study is used for hypotheses generation regarding manpower function and effort modeling, then a large data set retrieved from a project repository is used to test these hypotheses. The main results are that Norden-Rayleigh-based approaches need to be complemented to account for the addition of new features during the lifecycle to be usable in this context, and that programmer-participation based effort models show significantly less effort than those based on output metrics like lines-of-code. (author's abstract)
Series: Working Papers on Information Systems, Information Business and Operations
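
For readers unfamiliar with the model family mentioned in the abstract, the sketch below shows the standard Norden-Rayleigh staffing curve on which such approaches are based; the parameter values are hypothetical, and the extensions Koch discusses for features added during the lifecycle are not reproduced.

```python
# Standard Norden-Rayleigh staffing curve (before the extensions discussed in the paper):
#   cumulative effort E(t) = K * (1 - exp(-a * t^2)), manpower m(t) = 2*K*a*t*exp(-a*t^2),
# where K is the total effort and the staffing peak occurs at t_peak = 1 / sqrt(2*a).
import math

def manpower(t, K, a):
    return 2.0 * K * a * t * math.exp(-a * t * t)

def cumulative_effort(t, K, a):
    return K * (1.0 - math.exp(-a * t * t))

K, a = 5000.0, 0.08          # hypothetical total effort (person-hours) and shape parameter
t_peak = 1.0 / math.sqrt(2 * a)
print(f"Peak staffing at t = {t_peak:.2f} with m = {manpower(t_peak, K, a):.1f}")
print(f"Effort spent by t = 5: {cumulative_effort(5.0, K, a):.0f} of {K:.0f}")
```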
24

Asif, Sajjad. "Investigating Web Size Metrics for Early Web Cost Estimation." Thesis, Blekinge Tekniska Högskola, Institutionen för programvaruteknik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-16036.

Full text
Abstract:
Context: Web engineering is a young research field which applies engineering principles to produce quality web applications. Web applications have become more complex with the passage of time, and it is quite difficult to analyze the web metrics used for estimation because of the wide range of web applications. Correct estimates of web development effort play a very important role in the success of large-scale web development projects. Objectives: In this study I investigated the size metrics and cost drivers used by web companies for early web cost estimation. I also aim to validate them through industrial interviews and a web quote form, designed on the basis of the most frequently occurring metrics found when analyzing different companies. Secondly, this research revisits previous work by Mendes (a senior researcher and contributor in this research area) to validate whether early web cost estimation trends are the same or have changed. The ultimate goal is to help companies with web cost estimation. Methods: The first research question is answered by conducting an online survey of 212 web companies and examining their web predictor forms (quote forms); all companies included in the survey used web forms to give quotes on web development projects based on gathered size and cost measures. The second research question is answered by extracting the most frequently occurring size metrics from the results of Survey 1. The list of size metrics is validated in two ways: (i) industrial interviews are conducted with 15 web companies to validate the results of the first survey, and (ii) a quote form is designed using the validated results from the industrial interviews and sent to web companies around the world to seek data on real web projects. The data gathered from these projects are analyzed using a CBR tool, and the results are validated against the industrial interview results and Survey 1. The final results are compared with the earlier research to answer the third research question, whether the size metrics have changed. All research findings are contributed to the Tukutuku research benchmark project. Results: “Number of pages/features” and “responsive implementation” are the top web size metrics for early web cost estimation. Conclusions: This research investigated metrics that can be used for early web cost estimation at the early stage of web application development, the stage where the application is not built yet, requirements are still being collected and an expected cost estimate is being evaluated. A list of new metric variables that can be added to the Tukutuku project is also presented.
25

Khan, Abid Ali, and Zaka Ullah Muhammad. "Exploring the Accuracy of Existing Effort Estimation Methods for Distributed Software Projects-Two Case Studies." Thesis, Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-4126.

Full text
Abstract:
The term “Globalization” brought many challenges to the field of software development, and the challenge of accurate effort estimation in Global Software Development (GSD) is one of them. When talking about effort estimation, the discussion starts with effort estimation methods, of which a number are available. Existing effort estimation methods used for co-located projects might not be capable enough of estimating effort for distributed projects; this is one reason why the failure rate of GSD projects is high. It is therefore important to calibrate existing methods, or invent new ones, for the GSD environment. This thesis is an attempt to explore the accuracy of effort estimation methods for distributed projects. For this purpose, the authors selected three estimation approaches: COCOMO II, SLIM and ISBSG. COCOMO II and SLIM are two well-known effort estimation methods, whereas ISBSG is used to check the trend of a project against its (ISBSG's) repository. The selection of the methods and approaches was based on their popularity and their advantages over other methods/approaches. Two finished projects from two different organizations were selected and analyzed as case studies. The results indicated that effort estimates with COCOMO II deviated by 15.97% for project A and 9.71% for project B, whereas SLIM showed deviations of 4.17% for project A and 10.86% for project B. Thus, the authors concluded that both methods underestimated the effort in the studied cases. Furthermore, factors that might cause the deviation are discussed and several solutions are recommended. In particular, the authors state that existing effort estimation methods can be used for GSD projects, but they need calibration that takes GSD factors into account to achieve accurate results; this calibration will help in the process improvement of effort estimation.
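
For context, a COCOMO II-style estimate has the general form effort = A * Size^E * product of effort multipliers, where the exponent E grows with the project's scale factors; the sketch below illustrates that calculation with placeholder drivers and is not calibrated to the projects studied in the thesis.

```python
# Shape of a COCOMO II-style estimate (illustrative constants, not a calibrated model).
# effort_pm = A * KSLOC**E * product(effort_multipliers), with E driven by scale factors.
def cocomo_effort(ksloc, scale_factors, effort_multipliers, A=2.94, B=0.91):
    E = B + 0.01 * sum(scale_factors)           # exponent grows with the project scale factors
    em = 1.0
    for m in effort_multipliers:
        em *= m
    return A * ksloc ** E * em                  # effort in person-months

# Hypothetical inputs: 40 KSLOC, five scale factors, a few cost-driver multipliers.
pm = cocomo_effort(40, [3.72, 3.04, 4.24, 2.19, 3.29], [1.10, 0.87, 1.00])
print(f"Estimated effort: {pm:.1f} person-months")
```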
APA, Harvard, Vancouver, ISO, and other styles
26

Strohkirch, Cornelis, and Marcus Österberg. "Effort distribution for the Small System Migration Framework." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-258087.

Full text
Abstract:
Performing a migration of a legacy system can often be a daunting task. However, there often comes a time when maintaining a legacy system is not profitable. At such a time, estimating how much effort is required to perform a migration can be vital for the legacy system holders. There is a lack of research that shows the effort distribution for migrations of small legacy systems. The contribution of this thesis is a framework for migrations called the Small System Migration Framework (SSMF) and an effort distribution for SSMF. The purpose of the thesis is to evaluate how effort is distributed over different activities when migrating a small legacy system. The goal of the thesis is to help provide a basis for the estimation process during migrations. This was done by documenting how effort is distributed over the different activities contained in SSMF. This thesis takes an abductive approach, combining an inductive approach used in the creation of the framework and a deductive approach to document how effort was distributed during the migration. A framework was created from a literature study and this framework was used to conduct a migration. The result of this thesis was an updated framework and a table presenting the effort distribution of the migration. The framework showed factors that were influential when migrating the system. The effort distribution presents how effort is distributed over activities and shows which activities during the migration required more effort. Finally, the thesis concludes that effort is highly centered around the preparation phase of the migration. Understanding legacy systems can be a challenge; lacking documentation and issues brought by the lack of maintenance result in high effort during this phase. Allocating more resources to the preparation phase and having access to people with experience during the preparation phase would likely make for a smoother transition with fewer unidentified problems appearing.
APA, Harvard, Vancouver, ISO, and other styles
27

Milicic, Darko. "Applying COCOMO II : A case study." Thesis, Blekinge Tekniska Högskola, Avdelningen för programvarusystem, 2004. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-4188.

Full text
Abstract:
This thesis presents work based on the software cost estimation model COCOMO II, which was applied to a case study object derived from a software organization that had a completed project at its disposal. One of the most difficult phases in software development is the planning process and the ability to provide accurate cost estimates for a project. A competitive market calls for delicate and strategic excellence in management issues such as project plans. While software estimation may be straightforward in perception, it is intricate in actuality. Based on the available literature, COCOMO II is allegedly one of the top contenders for the number one tool to utilize in software cost estimation, and it is an important ingredient for managing software lines of business. The original model was initially published by Dr. Barry Boehm in 1981, but as the software field moved rapidly into new processes and techniques, the need to cope with this evolutionary change resulted in a revised edition of the model. The industry project subjected to this case study acts as a source of data for the model to use as input parameters, and this procedure is systematically explicated in a data collection exposition. Validation and application of the parameters as well as the model are later used as a foundation for subsequent discussions. Characteristics of the estimation model, such as calibration and prediction accuracy, are moreover scrutinized as a basis for further conclusions.
APA, Harvard, Vancouver, ISO, and other styles
28

Deng, Kefu. "The value and validity of software effort estimation models built from a multiple organization data set." Click here to access this resource online, 2008. http://hdl.handle.net/10292/473.

Full text
Abstract:
The objective of this research is to empirically assess the value and validity of a multi-organization data set in the building of prediction models for several ‘local’ software organizations; that is, smaller organizations that might have only a few project records but that are interested in improving their ability to accurately predict software project effort. Evidence to date in the research literature is mixed, due not to problems with the underlying research ideas but to limitations in the analytical processes employed: • the majority of previous studies have used only a single organization as the ‘local’ sample, introducing the potential for bias • the degree to which the conclusions of these studies might apply more generally cannot be determined because of a lack of transparency in the data analysis processes used. It is the aim of this research to provide a more robust and visible test of the utility of the largest multi-organization data set currently available – that from the ISBSG – in terms of enabling smaller-scale organizations to build relevant and accurate models for project-level effort prediction. Stepwise regression is employed to enable the construction of ‘local’, ‘global’ and ‘refined global’ models of effort that are then validated against actual project data from eight organizations. The results indicate that local data, that is, data collected for a single organization, is almost always more effective as a basis for the construction of a predictive model than data sourced from a global repository. That said, the accuracy of the models produced from the global data set, while worse than that achieved with local data, may be sufficiently accurate in the absence of reliable local data – an issue that could be investigated in future research. The study concludes with recommendations for both software engineering practice – in setting out a more dynamic scenario for the management of software development – and research – in terms of implications for the collection and analysis of software engineering data.
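As a rough illustration of the modelling step only (not the thesis' actual stepwise procedure or the ISBSG variables), effort models of this kind are commonly log-linear in size; the sketch below fits such a model on a synthetic 'global' repository and on a handful of synthetic 'local' projects, then compares their accuracy on the local data:

```python
import numpy as np

def fit_loglinear(size, effort):
    """Fit log(effort) = b0 + b1*log(size) by ordinary least squares."""
    b1, b0 = np.polyfit(np.log(size), np.log(effort), 1)
    return b0, b1

def predict(size, b0, b1):
    return np.exp(b0) * size ** b1

def mmre(actual, predicted):
    """Mean magnitude of relative error, a common (if debated) accuracy measure."""
    return np.mean(np.abs(actual - predicted) / actual)

# Hypothetical data: a large 'global' repository vs. a handful of 'local' projects.
rng = np.random.default_rng(1)
size_global = rng.uniform(50, 2000, 200)
effort_global = 8 * size_global ** 0.95 * rng.lognormal(0, 0.6, 200)
size_local = rng.uniform(50, 500, 10)
effort_local = 3 * size_local ** 1.05 * rng.lognormal(0, 0.3, 10)

for name, (s, e) in {"global": (size_global, effort_global),
                     "local": (size_local, effort_local)}.items():
    b0, b1 = fit_loglinear(s, e)
    print(name, "model, MMRE on local projects:",
          round(float(mmre(effort_local, predict(size_local, b0, b1))), 2))
```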
APA, Harvard, Vancouver, ISO, and other styles
29

Wienke, Sandra [Verfasser], Matthias [Akademischer Betreuer] Müller, and Thomas [Akademischer Betreuer] Ludwig. "Productivity and Software Development Effort Estimation in High-Performance Computing / Sandra Wienke ; Matthias Müller, Thomas Ludwig." Aachen : Universitätsbibliothek der RWTH Aachen, 2018. http://d-nb.info/1162503319/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Nilsson, Nathalie, and Linn Bencker. "Exploring Impact of Project Size in Effort Estimation : A Case Study of Large Software Development Projects." Thesis, Blekinge Tekniska Högskola, Institutionen för industriell ekonomi, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-21970.

Full text
Abstract:
Background: Effort estimation is one of the cornerstones of project management, with the purpose of creating efficient plans and the ability to keep budgets. Despite the extensive research done within this area, the estimation process is still considered one of the biggest and most complex problems in project management within software development. Objectives: The main objectives of this thesis were threefold: i) to define the characteristics of a large project, ii) to identify factors causing inaccurate effort estimates, and iii) to understand how the identified factors impact the effort estimation process, all within the context of large-scale agile software development and from the perspective of a project team. Methods: To fulfill the purpose of this thesis, an exploratory case study was executed. The data collection consisted of archival research, a questionnaire, and interviews. The data analysis was partly conducted using the statistical software tool Stata. Results: From a project team’s perspective, a large project is defined by high complexity and a large scope of requirements. The following factors were identified as affecting the estimation process in large projects: deficient requirements, changes in scope, complexity, impact in multiple areas, coordination, and required expertise, and the findings indicate that these affect estimation accuracy negatively. Conclusions: The conclusion of this study is that besides the identified factors affecting the estimation process, there are many different aspects that can directly or indirectly contribute to inaccurate effort estimates, categorized as requirements, complexity, coordination, input and estimation process, management, and usage of estimates.
APA, Harvard, Vancouver, ISO, and other styles
31

Bazeghi, Cyrus. "System and processor design effort estimation : using complexity and variability to explore new opportunities for optimization /." Diss., Digital Dissertations Database. Restricted to UC campuses, 2007. http://uclibs.org/PID/11984.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Cook, Amy W. "Predictive models to support quoting of fixed fee consulting projects." Thesis, Queensland University of Technology, 2017. https://eprints.qut.edu.au/104557/1/Amy_Cook_Thesis.pdf.

Full text
Abstract:
This thesis tackled a problem faced by consulting companies in the construction industry, where a significant proportion of projects result in losses. This occurs despite managers’ best efforts to price and execute projects profitably. Several machine learning and statistical techniques were applied to a case study company’s historic timesheet, client, and invoicing data in order to predict loss-making projects. The algorithms were tested in a simulated business decision-making scenario and the best model improved profits by 9%. The work from this research makes a step towards helping businesses reduce risk by integrating their data into financial decisions.
APA, Harvard, Vancouver, ISO, and other styles
33

Bajwa, Sohaib-Shahid. "Investigating the Nature of Relationship between Software Size and Development Effort." Thesis, Blekinge Tekniska Högskola, Avdelningen för för interaktion och systemdesign, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-6004.

Full text
Abstract:
Software effort estimation still remains a challenging and debatable research area. Most software effort estimation models take software size as the base input. Among others, the Constructive Cost Model (COCOMO II) is a widely known effort estimation model. It uses Source Lines of Code (SLOC) as the software size to estimate effort. However, many problems arise when using SLOC as a size measure due to its late availability in the software life cycle. Therefore, a lot of research has been going on to identify the nature of the relationship between software functional size and effort, since functional size can be measured very early, when the functional user requirements are available. Many other project-related factors were found to affect effort estimation based on software size; application type, programming language and development type are some of them. This thesis aims to investigate the nature of the relationship between software size and development effort. It explains known effort estimation models and gives an understanding of Function Points and the Functional Size Measurement (FSM) method. Factors affecting the relationship between software size and development effort are also identified. In the end, an effort estimation model is developed after statistical analyses. We present the results of an empirical study which we conducted to investigate the significance of different project-related factors on the relationship between functional size and effort. We used the project data in the International Software Benchmarking Standards Group (ISBSG) dataset. We selected the projects which were measured using the Common Software Measurement International Consortium (COSMIC) Function Points. For the statistical analyses, we performed stepwise Analysis of Variance (ANOVA) and Analysis of Covariance (ANCOVA) techniques to build the multi-variable models. We also performed Multiple Regression Analysis to formalize the relationship.
APA, Harvard, Vancouver, ISO, and other styles
34

Hönel, Sebastian. "Efficient Automatic Change Detection in Software Maintenance and Evolutionary Processes." Licentiate thesis, Linnéuniversitetet, Institutionen för datavetenskap och medieteknik (DM), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-94733.

Full text
Abstract:
Software maintenance is such an integral part of a system's evolutionary process that it consumes much of the total resources available. Some estimate the costs of maintenance to be up to 100 times the cost of developing the software. Software that is not maintained builds up technical debt, and not paying off that debt in time will eventually outweigh the value of the software if no countermeasures are undertaken. Software must adapt to changes in its environment, or to new and changed requirements. It must further receive corrections for emerging faults and vulnerabilities. Constant maintenance can prepare a software system for the accommodation of future changes. While there may be plenty of rationale for future changes, the reasons behind historical changes may no longer be accessible. Understanding change in software evolution provides valuable insights into, e.g., the quality of a project or aspects of the underlying development process. These are worth exploiting for, e.g., fault prediction, managing the composition of the development team, or effort estimation models. The size of software is a metric often used in such models, yet it is not well-defined. In this thesis, we seek to establish a robust, versatile and computationally cheap metric that quantifies the size of changes made during maintenance. We operationalize this new metric and exploit it for automated and efficient commit classification. Our results show that the density of a commit, that is, the ratio between its net- and gross-size, is a metric that can replace other, more expensive metrics in existing classification models. Models using this metric represent the current state of the art in automatic commit classification. The density provides a more fine-grained and detailed insight into the types of maintenance activities in a software project. Additional properties of commits, such as their relation or intermediate sojourn-times, have not been previously exploited for improved classification of changes. We reason about their potential, and suggest and implement dependent mixture and Bayesian models that exploit joint conditional densities, models that each have their own trade-offs with regard to computational cost, complexity and prediction accuracy. Such models can outperform well-established classifiers, such as Gradient Boosting Machines. All of our empirical evaluations comprise large datasets, software and experiments, all of which we have published alongside the results as open access. We have reused, extended and created datasets, and released software packages for change detection and for the Bayesian models used in all of the studies conducted.
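A sketch of the density idea, under the assumption (one plausible reading of the abstract, not the thesis' exact definition) that a commit's gross size is every line it touches and its net size excludes changes with no functional content, such as whitespace-only or comment-only lines:

```python
from dataclasses import dataclass

@dataclass
class LineChange:
    text: str   # content of the added or deleted line
    kind: str   # "add" or "delete"

def is_functional(change: LineChange) -> bool:
    """Crude stand-in filter: drop whitespace-only and comment-only lines."""
    stripped = change.text.strip()
    return bool(stripped) and not stripped.startswith(("//", "#", "*", "/*"))

def commit_density(changes: list[LineChange]) -> float:
    """Density = net size / gross size; 1.0 means every touched line is functional."""
    gross = len(changes)
    net = sum(1 for c in changes if is_functional(c))
    return net / gross if gross else 0.0

# Toy commit: two functional lines out of four touched lines -> density 0.5.
changes = [LineChange("int x = compute();", "add"),
           LineChange("", "add"),
           LineChange("// TODO remove", "delete"),
           LineChange("return x;", "add")]
print(f"density = {commit_density(changes):.2f}")
```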
APA, Harvard, Vancouver, ISO, and other styles
35

Lee, Scott J. "Machine Learning Model for the U.S. Customs and Border Protection| Cargo Systems Program Directorate's Sprint Effort Capacity Estimation." Thesis, The George Washington University, 2018. http://pqdtopen.proquest.com/#viewpdf?dispub=10981774.

Full text
Abstract:

Agile methodology has been widely adopted by both commercial and government software development projects since 2001. Agile methodology promotes product delivery by executing multiple small iterations, also known as sprints. Each sprint is a small software development project with its own planning, development, testing and demonstration, with possible deployment to production. Agile software development projects commonly use Yesterday’s Weather Model to estimate sprint effort capacity. However, the accuracy of Yesterday’s Weather Model is unreliable. Over 60% of Agile software development projects experience schedule delays, cost overruns, or cancellations, and inaccurate effort estimation is one of the leading causes of these issues. As such, Agile software development projects may benefit from a sprint effort capacity estimation model with improved accuracy. In this research, we compute the error rate of Yesterday’s Weather Model using large-scale real data from the U.S. Customs and Border Protection–Cargo Systems Program Directorate and identify a list of essential predictors that can be used to estimate sprint effort capacity. Using machine learning algorithms, we develop, test, and validate a sprint effort capacity estimation model on the same historical data. The model demonstrated better performance when compared to other models, including Yesterday’s Weather Model.
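Yesterday's Weather Model, as commonly applied in Scrum, simply projects the recent completed sprints forward; a minimal sketch, where the three-sprint window, the error measure and the figures are assumptions rather than CSPD specifics:

```python
def yesterdays_weather(completed_points, window=3):
    """Forecast next sprint capacity as the mean of the last `window` sprints."""
    recent = completed_points[-window:]
    return sum(recent) / len(recent)

def error_rate(forecast, actual):
    """Relative error of the forecast against what was actually delivered."""
    return abs(forecast - actual) / actual

history = [42, 38, 51, 47, 40]   # hypothetical completed story points per sprint
forecast = yesterdays_weather(history)
print(f"forecast: {forecast:.1f} points")
print(f"error if the team actually delivers 55: {error_rate(forecast, 55):.0%}")
```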

APA, Harvard, Vancouver, ISO, and other styles
36

Sigweni, Boyce B. "An investigation of feature weighting algorithms and validation techniques using blind analysis for analogy-based estimation." Thesis, Brunel University, 2016. http://bura.brunel.ac.uk/handle/2438/12797.

Full text
Abstract:
Context: Software effort estimation is a very important component of the software development life cycle. It underpins activities such as planning, maintenance and bidding. Therefore, it has triggered much research over the past four decades, including many machine learning approaches. One popular approach, which has the benefit of accessible reasoning, is analogy-based estimation. Machine learning, including analogy, is known to benefit significantly from feature selection/weighting. Unfortunately, feature weighting search is an NP-hard problem and therefore computationally very demanding, if not intractable. Objective: Therefore, one objective of this research is to develop an efficient and effective feature weighting algorithm for estimation by analogy. However, a major challenge for the effort estimation research community is that experimental results tend to be contradictory and also lack reliability. This has been paralleled by a recent awareness of how bias can impact research results. This is a contributory reason why software effort estimation is still an open problem. Consequently, the second objective is to investigate research methods that might lead to more reliable results, focusing on blinding methods to reduce researcher bias. Method: In order to build on the most promising feature weighting algorithms, I conduct a systematic literature review. From this I develop a novel and efficient feature weighting algorithm. This is experimentally evaluated, comparing three feature weighting approaches with a naive benchmark using two industrial data sets. Using these experiments, I explore blind analysis as a technique to reduce bias. Results: The systematic literature review identified 19 relevant primary studies. Results from the meta-analysis of selected studies using a one-sample sign test (p = 0.0003) show a positive effect of feature weighting in general compared with ordinary analogy-based estimation (ABE); that is, feature weighting is a worthwhile technique to improve ABE. Nevertheless, the results remain imperfect, so there is still much scope for improvement. My experience shows that blinding can be a relatively straightforward procedure. I also highlight various statistical analysis decisions which ought not to be guided by the hunt for statistical significance and show that results can be inverted merely through a seemingly inconsequential statistical nicety. After analysing results from 483 software projects from two separate industrial data sets, I conclude that the proposed technique improves accuracy over standard feature subset selection (FSS) and traditional case-based reasoning (CBR) when using pseudo time-series validation. Interestingly, there is no strong evidence for superior performance of the new technique when traditional validation techniques (jackknifing) are used, but it is more efficient. Conclusion: There are two main findings: (i) Feature weighting techniques are promising for software effort estimation, but they need to be tailored to the target case for their potential to be adequately exploited. Research findings show that assuming weights differ in different parts of the instance space ('local' regions) may improve effort estimation results, yet the majority of studies in software effort estimation (SEE) do not take this into consideration; the proposed technique is an improvement over methods that ignore it.
(ii) Whilst there are minor challenges and some limits to the degree of blinding possible, blind analysis is a very practical and easy-to-implement method that supports more objective analysis of experimental results. Therefore I argue that blind analysis should be the norm for analysing software engineering experiments.
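Estimation by analogy retrieves the most similar completed projects and adapts their effort, and feature weighting scales each feature's contribution to the similarity measure. A minimal weighted k-nearest-neighbour sketch, in which the features, weights, k and adaptation rule are illustrative rather than the algorithm proposed in the thesis:

```python
import numpy as np

def weighted_abe(query, cases, efforts, weights, k=3):
    """Analogy-based estimate: mean effort of the k nearest cases
    under a feature-weighted Euclidean distance."""
    diffs = cases - query
    dists = np.sqrt((weights * diffs ** 2).sum(axis=1))
    nearest = np.argsort(dists)[:k]
    return efforts[nearest].mean()

# Hypothetical normalized project features: [size, team experience, complexity]
cases = np.array([[0.2, 0.8, 0.3],
                  [0.6, 0.4, 0.7],
                  [0.9, 0.2, 0.8],
                  [0.3, 0.9, 0.2]])
efforts = np.array([120.0, 480.0, 900.0, 150.0])   # person-hours
weights = np.array([0.6, 0.1, 0.3])                # produced by some weighting search
print(weighted_abe(np.array([0.25, 0.85, 0.25]), cases, efforts, weights, k=2))
```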
APA, Harvard, Vancouver, ISO, and other styles
37

Alomari, Hakam W. "Supporting Software Engineering Via Lightweight Forward Static Slicing." Kent State University / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=kent1341996135.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Miguel, Marcos Alexandre. "Giveme effort: um framework para apoiar estimativa de esforço em atividades de manutenção e compreensão de software." Universidade Federal de Juiz de Fora (UFJF), 2016. https://repositorio.ufjf.br/jspui/handle/ufjf/3091.

Full text
Abstract:
Many organizations encounter problems when trying to estimate effort for software maintenance activities. When effort estimates are not well defined or are inaccurate, the results may be reflected directly in the software delivery, causing customer dissatisfaction or decreased product quality. The success or failure of projects depends on the accuracy of the effort and the schedule of the involved activities. The rise of agile methods in software development has presented many opportunities and challenges for researchers and professionals. In this context, a key challenge is effort estimation for maintenance activities in agile software development. This work presents a framework, called GiveMe Effort, to support effort estimation activities in software maintenance. It is based on historical data and software comprehension information.
APA, Harvard, Vancouver, ISO, and other styles
39

Marounek, Petr. "Podpora a údržba SW: Rozšíření otologie o koncept KC, simplifikace odhadování pracnosti." Doctoral thesis, Vysoká škola ekonomická v Praze, 2007. http://www.nusl.cz/ntk/nusl-126594.

Full text
Abstract:
Effective implementation (in terms of time, cost, utilization of human resources, etc.) of information systems operation is a strategic issue today, when business processes are integrally aligned with informatics. Currently, costs associated with software support and maintenance represent more than 90% of total costs. Software maintenance is the set of activities needed for cost-effective support of an IT solution. IS/IT centers of excellence (COE) do not cover the area of software support and maintenance; there is no formalized methodology or procedural framework for a COE for support and maintenance, which in practice means missing processes and procedures for creating, managing and evaluating it, as well as missing recommendations about organizational structure, the services to be provided and overall continuous improvement. The author therefore proposes his own solution by defining and implementing a center of excellence for support and maintenance and its sub-centers of excellence for the support and maintenance of particular applications. The current ontology of support and maintenance does not capture the necessary components and links, namely the management, planning and effort estimation views. The author therefore proposes a redefinition and enrichment of the ontology of organizational structure with the elements of competence and sub-competence centers, a typology of tasks (management, maintenance), and their management: estimating, planning and realization. In his work, Magne Jorgensen formulated the conclusion that 83 to 84% of all estimation is done by pure expert estimates and that estimation models are basically not used due to their complexity. By extending the PERT formula with the quality of the estimator and historical experience, the author introduces a simplified, easy-to-use approach to effort estimation in software maintenance. Both introduced formulas were verified in the sub-competence center supporting a mortgage IS, with significantly better results than a pure PERT estimate (98.8% and 91.8% against pure PERT's 90.1%). In conclusion, the author discusses the benefits of implementing the center of excellence for support and maintenance and the sub-centers of excellence for the support and maintenance of particular applications, and the overall fulfilment of the thesis scope.
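The classical PERT three-point estimate is E = (O + 4M + P) / 6. The thesis extends it with the estimator's quality and historical experience; since the abstract does not give the exact form, the correction factor below is a placeholder showing only where such terms would enter:

```python
def pert_estimate(optimistic, most_likely, pessimistic):
    """Classical PERT expected effort from a three-point estimate."""
    return (optimistic + 4 * most_likely + pessimistic) / 6

def adjusted_estimate(optimistic, most_likely, pessimistic, correction=1.0):
    """Hypothetical extension: scale the PERT estimate by a correction factor
    standing in for estimator quality and historical over/under-estimation."""
    return pert_estimate(optimistic, most_likely, pessimistic) * correction

print(pert_estimate(4, 6, 12))            # ~6.7 hours for a toy maintenance task
print(adjusted_estimate(4, 6, 12, 1.08))  # with an assumed 8% historical bias
```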
APA, Harvard, Vancouver, ISO, and other styles
40

Zarrad, Walid. "Télé-opération avec retour d'effort pour la chirugie mini-invasive." Phd thesis, Université Montpellier II - Sciences et Techniques du Languedoc, 2007. http://tel.archives-ouvertes.fr/tel-00263824.

Full text
Abstract:
This thesis work falls within the framework of robotic assistance for teleoperated minimally invasive surgery. One of the main limitations of existing systems is the absence of force feedback to the surgeon when teleoperating the robot. A teleoperation system with force feedback is therefore proposed. It is based on a position-position architecture with force control of the slave robot, which uses non-linear decoupling of the robot together with observation and state-feedback control techniques. Experimental validations demonstrated the performance of the architecture during teleoperation in free space and during interactions with a low-stiffness environment. A first contribution is the online estimation of the environment stiffness in order to guarantee the stability of the system during interactions with a rigid environment. This strategy uses an extended Kalman filter that does not require the rest position of the environment and compensates for modelling errors of the slave robot. The second contribution is the study of the transparency and stability of the teleoperation architecture. An adaptation of the control parameters as a function of the estimated stiffness is proposed, considering a trade-off between these two criteria. The structure was validated experimentally on an environment made of ex-vivo tissue. A final contribution is the adaptation of the force-feedback teleoperation architecture to the constraints imposed by minimally invasive surgery. The first constraint is the passage of the robot's instrument through the trocar. A control architecture using the principle of task-posture decoupling is proposed: the posture allowing passage through the trocar is controlled by position control of a virtual robot attached to the instrument of the teleoperated robot. The second constraint considered is the compensation of physiological motion, the objective being to offer the surgeon a virtual stabilization of the organ. The control approach consists in attenuating disturbances in the forces applied to a moving environment. The compensation relies on a model-referenced control scheme that considers that the attenuated disturbances generate a displacement of the slave robot. Experiments demonstrated the relevance and effectiveness of the proposed strategies.
APA, Harvard, Vancouver, ISO, and other styles
41

Kumar, Tushar. "Characterizing and controlling program behavior using execution-time variance." Diss., Georgia Institute of Technology, 2016. http://hdl.handle.net/1853/55000.

Full text
Abstract:
Immersive applications, such as computer gaming, computer vision and video codecs, are an important emerging class of applications with QoS requirements that are difficult to characterize and control using traditional methods. This thesis proposes new techniques reliant on execution-time variance to both characterize and control program behavior. The proposed techniques are intended to be broadly applicable to a wide variety of immersive applications and are intended to be easy for programmers to apply without needing to gain specialized expertise. First, we create new QoS controllers that programmers can easily apply to their applications to achieve desired application-specific QoS objectives on any platform or application data-set, provided the programmers verify that their applications satisfy some simple domain requirements specific to immersive applications. The controllers adjust programmer-identified knobs every application frame to effect desired values for programmer-identified QoS metrics. The control techniques are novel in that they do not require the user to provide any kind of application behavior models, and are effective for immersive applications that defy the traditional requirements for feedback controller construction. Second, we create new profiling techniques that provide visibility into the behavior of a large complex application, inferring behavior relationships across application components based on the execution-time variance observed at all levels of granularity of the application functionality. Additionally for immersive applications, some of the most important QoS requirements relate to managing the execution-time variance of key application components, for example, the frame-rate. The profiling techniques not only identify and summarize behavior directly relevant to the QoS aspects related to timing, but also indirectly reveal non-timing related properties of behavior, such as the identification of components that are sensitive to data, or those whose behavior changes based on the call-context.
APA, Harvard, Vancouver, ISO, and other styles
42

Bashir, Hamdi A. "Models for estimating design effort." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape4/PQDD_0034/NQ64508.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Verma, Vishash. "Improved Slope Estimation in Organic Field-Effect Transistor Mobility Estimation." Kent State University / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=kent1618703169092189.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Nelson, Diotto Junior. "Estimating Effort for Cross-platform Web Application Development." Thesis, Uppsala universitet, Informationssystem, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-322062.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Jemai, Asma. "Estimation fonctionnelle non paramétrique au voisinage du bord." Thesis, Poitiers, 2018. http://www.theses.fr/2018POIT2257/document.

Full text
Abstract:
The aim of this thesis is to construct nonparametric estimators of distribution, density and regression functions using stochastic approximation methods in order to correct the edge effect created by kernel estimators. In the first chapter, we give some asymptotic properties of kernel estimators. Then, we introduce the Robbins-Monro stochastic algorithm, which creates the recursive estimators. Finally, we recall the methods used by Vitale, Leblanc and Kakizawa to define estimators of distribution and density functions based on Bernstein polynomials. In the second chapter, we introduce a recursive estimator of a distribution function based on Vitale’s approach. We study the properties of this estimator: bias, variance, mean integrated squared error (MISE), and we establish a weak pointwise convergence. We compare the performance of our estimator with that of Vitale and show that, with the right choice of the stepsize and its corresponding order, our estimator dominates in terms of MISE. These theoretical results are confirmed using simulations. We use the cross-validation method to search for the optimal order. Finally, we apply our estimator to interpret a real dataset. In the third chapter, we introduce a recursive estimator of a density function using Bernstein polynomials. We establish the characteristics of this estimator and compare them with those of the estimators of Vitale, Leblanc and Kakizawa. To highlight our proposed estimator, we use a real dataset. In the fourth chapter, we introduce a recursive and a non-recursive estimator of a regression function using Bernstein polynomials. We study the characteristics of this estimator. Then, we compare our proposed estimator with the classical kernel estimator using a real dataset.
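For orientation, a Vitale-type estimator smooths the empirical distribution function with Bernstein polynomials on [0, 1]; a minimal non-recursive sketch (the recursive Robbins-Monro variants studied in the thesis are not reproduced, and the sample and order m are arbitrary):

```python
import numpy as np
from math import comb

def bernstein_cdf(sample, m):
    """Bernstein-polynomial (Vitale-type) estimate of the CDF of data on [0, 1]."""
    sample = np.sort(np.asarray(sample, dtype=float))
    n = len(sample)

    def F_n(t):  # empirical distribution function
        return np.searchsorted(sample, t, side="right") / n

    def F_hat(x):
        k = np.arange(m + 1)
        basis = np.array([comb(m, int(j)) * x ** int(j) * (1 - x) ** int(m - j) for j in k])
        ecdf_vals = np.array([F_n(j / m) for j in k])
        return float(np.dot(ecdf_vals, basis))

    return F_hat

rng = np.random.default_rng(0)
F = bernstein_cdf(rng.beta(2, 5, size=200), m=20)
print(round(F(0.3), 3))  # smoothed estimate of P(X <= 0.3)
```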
APA, Harvard, Vancouver, ISO, and other styles
46

Rahman, Mohammad. "Estimation of treatment effects using Regression Discontinuity design." Thesis, University of Manchester, 2014. https://www.research.manchester.ac.uk/portal/en/theses/estimation-of-treatment-effects-using-regression-discontinuity-design(b838592f-7648-4119-8e73-a299fddfda5f).html.

Full text
Abstract:
This thesis includes three substantive empirical studies (in Chapters 3, 4 and 5), each using the same econometric methodology, the Regression Discontinuity design, which has an attractive feature: local randomisation. This feature gives the method an advantage over other evaluation methods in estimating unbiased treatment effects. In addition, the fuzzy Regression Discontinuity design can control for the endogeneity of the treatment variable, which is another advantage of the method; in each of the studies considered, the endogeneity problem exists. The application of the fuzzy Regression Discontinuity design is itself a contribution in each of the studies. Moreover, each study contributes to its own field. In Chapter 3, we investigate how much the Social Safety Net programs, which provide free food, cash, or both to food-insecure households in Bangladesh, improve calorie consumption of the beneficiary households. Using the Household Income and Expenditure Survey 2005, we find that the effect of the programs is around 843 kilocalories, which is substantial compared to previous studies. In Chapter 4, we examine the impact of the Education Maintenance Allowance, a program that provided a weekly allowance to young people in Years 12 and 13 in England, on the staying rate in post-compulsory full-time education. The program was abolished in 2010. Using the Longitudinal Survey of Young People in England, we find that the effect of the program was substantial - around 15 percent. The effect of a £1 increase in the weekly allowance was around 1 percent. These effects were mainly on white young people. Using household survey data - the Family Expenditure Survey (1968-2009) - in the UK, Chapter 5 establishes that before 1981 consumption fell substantially at the retirement age. This fall is less severe after 1980. However, throughout the data period, the consumption fall at the retirement age is fully explained by the expected fall in income, which contradicts the life cycle model, where consumption growth is independent of income growth.
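In a sharp regression discontinuity design the treatment effect is the jump in the expected outcome at the cutoff; a bare-bones local-linear version with a uniform kernel, a fixed bandwidth and simulated data (only a caricature of the estimators used in the chapters):

```python
import numpy as np

def sharp_rd_effect(running, outcome, cutoff, bandwidth):
    """Difference of local-linear fits at the cutoff, right side minus left side."""
    def fitted_value_at_cutoff(mask):
        x, y = running[mask] - cutoff, outcome[mask]
        slope, intercept = np.polyfit(x, y, 1)
        return intercept                     # prediction at running == cutoff
    left = (running < cutoff) & (running >= cutoff - bandwidth)
    right = (running >= cutoff) & (running <= cutoff + bandwidth)
    return fitted_value_at_cutoff(right) - fitted_value_at_cutoff(left)

# Hypothetical data: crossing the cutoff raises the outcome by roughly 2 units.
rng = np.random.default_rng(3)
r = rng.uniform(-5, 5, 4000)
y = 1.0 + 0.4 * r + 2.0 * (r >= 0) + rng.normal(0, 1, 4000)
print(round(sharp_rd_effect(r, y, cutoff=0.0, bandwidth=1.5), 2))
```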
APA, Harvard, Vancouver, ISO, and other styles
47

Chakraverti, Sugandha, Sheo Kumar, S. C. Agarwal, and Ashish Kumar Chakraverti. "Modified Cocomo Model For Maintenance cost Estimation of Real Time System Software." IJCSN, 2012. http://hdl.handle.net/10150/219511.

Full text
Abstract:
Software maintenance is an important activity in software engineering. Over the decades, software maintenance costs have been continually reported to account for a large majority of software costs [Zelkowitz 1979, Boehm 1981, McKee 1984, Boehm 1988, Erlikh 2000]. This fact is not surprising. On the one hand, software environments and requirements are constantly changing, which leads to new software system upgrades to keep pace with the changes. On the other hand, the economic benefits of software reuse have encouraged the software industry to reuse and enhance the existing systems rather than to build new ones [Boehm 1981, 1999]. Thus, it is crucial for project managers to estimate and manage the software maintenance costs effectively.
Accurate cost estimation of software projects is one of the most desired capabilities in the software development process. Accurate cost estimates not only help the customer make successful investments but also assist the software project manager in coming up with appropriate plans for the project and making reasonable decisions during project execution. Although there have been reports that software maintenance accounts for the majority of the total software cost, software estimation research has focused considerably on new development and much less on maintenance. Cost estimation for real-time software system (RTSS) development and maintenance does not differ much from that for ordinary software, but some critical factors must be considered for RTSS development and maintenance, such as the response time of the software to inputs and the processing time needed to give correct output. As with ordinary software maintenance cost estimation, existing models (i.e. the modified COCOMO II) can be used, but only after the inclusion of some critical parameters related to RTSS. Hypothetical expert input and an industry data set of eighty completed software maintenance projects were used to build the model for RTSS maintenance cost. The full model, which was derived through the Bayesian analysis, yields effort estimates within 30% of the actual 51% of the time, outperforming the original COCOMO II model when it was used to estimate these projects by 34%. Further performance improvement was obtained when calibrating the full model to each individual program, generating effort estimates within 30% of the actual 80% of the time.
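The accuracy criterion quoted ("within 30% of the actual X% of the time") is the PRED(0.30) measure built on the magnitude of relative error; a short illustrative computation with invented project figures:

```python
def mre(actual, estimated):
    """Magnitude of relative error for one project."""
    return abs(actual - estimated) / actual

def pred(actuals, estimates, level=0.30):
    """PRED(l): fraction of projects whose estimate is within `level` of the actual."""
    hits = sum(1 for a, e in zip(actuals, estimates) if mre(a, e) <= level)
    return hits / len(actuals)

# Hypothetical maintenance projects (actual vs. estimated person-months).
actuals = [10, 24, 7, 40, 15]
estimates = [12, 20, 11, 41, 13]
print(f"PRED(0.30) = {pred(actuals, estimates):.0%}")
```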
APA, Harvard, Vancouver, ISO, and other styles
48

Sagemo, Oscar. "Estimating Post-Editing Effort with Translation Quality Features." Thesis, Uppsala universitet, Institutionen för lingvistik och filologi, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-299143.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Lemaire, Charles-Éric. "Estimation des efforts de contact cylindre / matériau d'un compacteur vibrant." Nantes, 2005. http://www.theses.fr/2005NANT2135.

Full text
Abstract:
Road compaction is a major phase of road construction. The contact forces characterize the action of the compactor, and their estimation is necessary to improve compaction. The contact forces cannot be measured directly; they are estimated using the dynamic model of the compactor. The modelling amounts to considering the compactor as a robot manipulator. This approach was enriched by a mixed Eulerian-Lagrangian formalism, which yields a simpler model. The identification method retained is weighted least squares, and particular attention was paid to the complete description of this method. The experimental part constitutes a major part of the work and is organized around three axes: instrumentation of a compactor; definition and set-up of the trials; integration and validation of the method on a worksite. For the first time, the wrench of contact forces of a compactor was estimated on a real worksite.
APA, Harvard, Vancouver, ISO, and other styles
50

Eren, Emrah. "Effect Of Estimation In Goodness-of-fit Tests." Master's thesis, METU, 2009. http://etd.lib.metu.edu.tr/upload/2/12611046/index.pdf.

Full text
Abstract:
In statistical analysis, distributional assumptions are needed to apply parametric procedures. Assumptions about underlying distribution should be true for accurate statistical inferences. Goodness-of-fit tests are used for checking the validity of the distributional assumptions. To apply some of the goodness-of-fit tests, the unknown population parameters are estimated. The null distributions of test statistics become complicated or depend on the unknown parameters if population parameters are replaced by their estimators. This will restrict the use of the test. Goodness-of-fit statistics which are invariant to parameters can be used if the distribution under null hypothesis is a location-scale distribution. For location and scale invariant goodness-of-fit tests, there is no need to estimate the unknown population parameters. However, approximations are used in some of those tests. Different types of estimation and approximation techniques are used in this study to compute goodness-of-fit statistics for complete and censored samples from univariate distributions as well as complete samples from bivariate normal distribution. Simulated power properties of the goodness-of-fit tests against a broad range of skew and symmetric alternative distributions are examined to identify the estimation effects in goodness-of-fit tests. The main aim of this thesis is to modify goodness-of-fit tests by using different estimators or approximation techniques, and finally see the effect of estimation on the power of these tests.
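When unknown parameters are replaced by estimates, the null distribution of a statistic such as Kolmogorov-Smirnov is no longer the standard one. A common remedy, shown here only to illustrate the issue the abstract discusses, is a Lilliefors-style parametric bootstrap that re-estimates the parameters in every simulated sample (the normal null and gamma alternative below are assumptions):

```python
import numpy as np
from scipy import stats

def ks_with_estimated_params(sample, n_boot=2000, seed=0):
    """KS test of normality with mean/sd estimated from the data;
    the p-value comes from a parametric bootstrap, not the standard KS table."""
    rng = np.random.default_rng(seed)
    mu, sigma = np.mean(sample), np.std(sample, ddof=1)
    d_obs = stats.kstest(sample, "norm", args=(mu, sigma)).statistic
    d_null = []
    for _ in range(n_boot):
        sim = rng.normal(mu, sigma, len(sample))
        d_null.append(stats.kstest(sim, "norm",
                                   args=(np.mean(sim), np.std(sim, ddof=1))).statistic)
    p_value = float(np.mean(np.array(d_null) >= d_obs))
    return d_obs, p_value

# Skewed alternative: the test should tend to reject normality here.
sample = np.random.default_rng(1).gamma(shape=3.0, scale=2.0, size=80)
print(ks_with_estimated_params(sample))
```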
APA, Harvard, Vancouver, ISO, and other styles