Academic literature on the topic 'Constructive Cost Model and Software Development Phase'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Constructive Cost Model and Software Development Phase.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Constructive Cost Model and Software Development Phase"

1

Attarzadeh, Iman, and Siew Hock Ow. "Proposing an Effective Artificial Neural Network Architecture to Improve the Precision of Software Cost Estimation Model." International Journal of Software Engineering and Knowledge Engineering 24, no. 06 (2014): 935–53. http://dx.doi.org/10.1142/s0218194014500338.

Full text
Abstract:
Software companies have to manage different software projects with different time, cost, and manpower requirements, which is a very complex task in software project management. Producing accurate software estimates at the early phase of development has been one of the crucial objectives, and a great challenge, in software project management over the last decades. Since software development attributes are vague and uncertain at the early phase of development, software estimates are subject to a certain degree of estimation error. A software development cost estimation model that incorporates soft computing techniques provides a way to cope with the vagueness and uncertainty of software attributes. In this paper, an adaptive artificial neural network (ANN) architecture for the Constructive Cost Model (COCOMO) is proposed in order to produce accurate software estimates. The ANN is used to calibrate the relative importance of the software attributes using past project data. Software project data from the COCOMO I and NASA'93 data sets were used to evaluate the proposed model. The results show an 8.36% improvement in the estimation accuracy of ANN-COCOMO II compared with the original COCOMO II.
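For orientation, the sketch below shows the baseline COCOMO II post-architecture effort arithmetic that calibrated models such as ANN-COCOMO II adjust. The constants are the commonly cited COCOMO II.2000 values; the cost-driver ratings and project size in the example are hypothetical and not taken from the paper's data sets.

```python
# Minimal sketch of the baseline COCOMO II post-architecture effort formula that
# calibrated models such as ANN-COCOMO recalibrate. The constants a=2.94 and b=0.91
# follow the published COCOMO II.2000 calibration; inputs below are placeholders.

def cocomo_ii_effort(size_ksloc, effort_multipliers, scale_factors, a=2.94, b=0.91):
    """Return estimated effort in person-months.

    size_ksloc         -- estimated size in thousands of source lines of code
    effort_multipliers -- list of EM ratings (nominal = 1.0)
    scale_factors      -- list of SF ratings (each roughly 0..6)
    """
    e = b + 0.01 * sum(scale_factors)      # diseconomy-of-scale exponent
    ems = 1.0
    for em in effort_multipliers:
        ems *= em
    return a * (size_ksloc ** e) * ems

if __name__ == "__main__":
    # Hypothetical 40 KSLOC project, all-nominal cost drivers, mid-range scale factors.
    print(round(cocomo_ii_effort(40, [1.0] * 17, [3.0] * 5), 1), "person-months")
```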
APA, Harvard, Vancouver, ISO, and other styles
2

Ba’abbad, Ibrahim Mohammad, and M. Rizwan Jameel Qureshi. "Quality Extended Use Case Point (QUCP): An Improved Cost Estimation Method." International Journal of Computer Science and Mobile Computing 10, no. 6 (2021): 1–9. http://dx.doi.org/10.47760/ijcsmc.2021.v10i06.001.

Full text
Abstract:
The quality of a product is one of the major concerns of the manufacturing process in all industries. The software industry structures a project into several phases to ensure the production of high-quality software. A software development company estimates the time, effort, and cost of a project during the planning phase, and accurate estimates are important to reduce the risk of project failure. Several cost estimation methods are practiced in software development companies, such as Function Points (FP), Use Case Points (UCP), Constructive Cost Model I and II, and Story Points (SP). The UCP cost estimation method is taken up in this research to improve the accuracy of its estimates. UCP estimation depends on the use case diagram of the proposed system, which describes its main functional requirements. UCP only partially considers non-functional requirements, through the technical and environmental factors, and lacks a way to account for the importance of quality attributes in the estimation process. This paper proposes an extended version of the existing UCP method, named Quality Extended Use Case Point (QUCP), in which quality attributes are included to improve the accuracy of cost estimation. A questionnaire is used to validate the proposed QUCP method. Data analysis shows that seventy-five percent of the participants agreed that the proposed method will not only help to improve the accuracy of cost estimation but will also enable a software development company to deliver high-quality products.
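As a reminder of the arithmetic that QUCP extends, the sketch below computes plain Use Case Points with the commonly cited actor/use-case weights and the default 20 person-hours per UCP. The quality-attribute extension itself is not reproduced here, and the example inputs are invented.

```python
# Baseline Use Case Points (UCP) arithmetic that QUCP extends with quality attributes.
# Weights and the 20 person-hours/UCP productivity factor are commonly cited defaults;
# the project inputs below are hypothetical.

ACTOR_WEIGHTS = {"simple": 1, "average": 2, "complex": 3}
USE_CASE_WEIGHTS = {"simple": 5, "average": 10, "complex": 15}

def ucp_effort(actors, use_cases, tfactor, efactor, hours_per_ucp=20):
    uaw = sum(ACTOR_WEIGHTS[a] for a in actors)         # unadjusted actor weight
    uucw = sum(USE_CASE_WEIGHTS[u] for u in use_cases)  # unadjusted use-case weight
    tcf = 0.6 + 0.01 * tfactor                          # technical complexity factor
    ecf = 1.4 - 0.03 * efactor                          # environmental complexity factor
    ucp = (uaw + uucw) * tcf * ecf
    return ucp, ucp * hours_per_ucp

if __name__ == "__main__":
    ucp, hours = ucp_effort(actors=["simple", "average", "complex"],
                            use_cases=["average"] * 6 + ["complex"] * 2,
                            tfactor=34, efactor=17.5)
    print(f"UCP = {ucp:.1f}, estimated effort = {hours:.0f} person-hours")
```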
APA, Harvard, Vancouver, ISO, and other styles
3

Hasoon, Safwan, and Fatima Younis. "Constructing Expert System to Automatic Translation for Software development." Al-Kitab Journal for Pure Sciences 2, no. 2 (2018): 231–47. http://dx.doi.org/10.32441/kjps.02.02.p16.

Full text
Abstract:
Developments in computing, especially in software engineering, have created the need for an intelligent tool for automatic translation from the design phase to the coding phase: producing source code from an algorithm model represented in pseudo code and executing it. The proposed approach relies on an expert system, which reduces the cost, time, and errors that may occur during the translation process. The expert system is built from a knowledge base, an inference engine, and a user interface; the knowledge base consists of the facts and rules for the automatic transition. The results are compared with a set of neural networks: a back-propagation neural network, a cascade-forward network, and a radial basis function network. The results show the superiority of the expert system in the speed of the automatic transition process, as well as in the ease of adding, deleting, or modifying rules or pseudo-code data, compared with the aforementioned neural networks.
APA, Harvard, Vancouver, ISO, and other styles
4

Khosakitchalert, Chavanont, Nobuyoshi Yabuki, and Tomohiro Fukuda. "Development of BIM-based quantity takeoff for light-gauge steel wall framing systems." Journal of Information Technology in Construction 25 (December 18, 2020): 522–44. http://dx.doi.org/10.36680/j.itcon.2020.030.

Full text
Abstract:
Quantity takeoff based on building information modeling (BIM) is more reliable, accurate, and rapid than the traditional quantity takeoff approach. However, the quality of BIM models affects the quality of BIM-based quantity takeoff. Our research focuses on drywalls, which consist of wall framings and wall panels. If BIM models from the design phases do not contain wall framing models, contractors or sub-contractors cannot perform quantity takeoff for purchasing materials. Developing wall framing models under a tight schedule in the construction phase is time-consuming, cost-intensive, and error-prone. The increased geometries in a BIM model also slow down the software performance. Therefore, in this research, an automatic method is proposed for calculating quantities of wall framings from drywalls in a BIM model. Building elements that overlap with the drywalls are subtracted from the drywall surfaces before calculation. The quantities of wall framings are then embedded into the properties of drywall in the BIM model and hence they can be extracted directly from the BIM model. A prototype system is developed and the proposed method is validated in an actual construction project. The results of the case study showed that the prototype system took 282 s to deliver accurate quantities of wall framings with deviations of 0.11 to 0.30% when compared to a baseline, and the file size of the BIM model after applying the proposed method was increased very slightly from 47.0 MB to 47.1 MB. This research contributes to developing an approach for quantity takeoff of wall framings that are not present in a BIM model. Accurate quantities of wall framings can be obtained while the time and cost of developing wall framings for quantity takeoff can be saved. The proposed method does not increase the geometries in the BIM model; therefore, the file size of the model does not increase greatly, which stabilizes the software performance.
APA, Harvard, Vancouver, ISO, and other styles
5

Challa, Ratna Kumari, and Kanusu Srinivasa Rao. "An Effective Optimization of Time and Cost Estimation for Prefabrication Construction Management Using Artificial Neural Networks." Revue d'Intelligence Artificielle 36, no. 1 (2022): 115–23. http://dx.doi.org/10.18280/ria.360113.

Full text
Abstract:
The success of every construction business relies on projects being performed in the agreed period and at the negotiated rate. The construction business includes prefabrication firms, logistics companies, on-site design industries, and so on. The prefabrication method involves assembling structural parts at a production plant and transporting them to the building site as finished or semi-assembled components. Artificial neural networks (ANNs) are used for optimization because of their capacity to handle qualitative and quantitative difficulties in the building industry. An ANN processes inputs through input, hidden, and output layers, with the hidden-layer weights learned from data, and different modeling strategies are used to tune the layers. ANNs cover a wide variety of issues in construction management, for instance cost analysis, decision making, and prediction of the mark-up percentage and the production rate in the construction industry. The main advantages of the prefabrication methodology are the simplicity of the procedure and its integrated versatility. The present study underlines that total project duration and cost are the key considerations in the current job procurement phase of prefabricated construction. The proposed model is based on ANN algorithms that learn weight values suited to time and cost estimation.
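The sketch below only illustrates the general idea of fitting a small feed-forward ANN to project features for cost estimation; the features, synthetic data, and network size are hypothetical and do not reproduce the paper's model or dataset.

```python
# Illustrative sketch: train a small feed-forward ANN regressor on (hypothetical)
# prefabrication project features to estimate cost. Data is synthetic.

import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Hypothetical features: [module_count, floor_area_m2, transport_distance_km]
X = rng.uniform([10, 500, 5], [200, 10000, 300], size=(120, 3))
# Synthetic cost target (arbitrary units) with noise, just to make the example runnable.
y = 0.8 * X[:, 0] + 0.002 * X[:, 1] + 0.3 * X[:, 2] + rng.normal(0, 5, 120)

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=5000, random_state=0),
)
model.fit(X, y)
print("predicted cost for a new project:",
      round(float(model.predict([[80, 4200, 120]])[0]), 1))
```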
APA, Harvard, Vancouver, ISO, and other styles
6

Huseynov, Mehdi, Natig Hamidov, and Jabrayil Eyvazov. "Construction of Numerical PVT-Models for the Bulla-Daniz Gas-Condensate Field Based on Laboratory Experiments on Reservoir Fluid Samples." European Journal of Applied Science, Engineering and Technology 2, no. 1 (2024): 26–33. https://doi.org/10.59324/ejaset.2024.2(1).04.

Full text
Abstract:
PVT analysis is important for field-wide optimization and development. This is because we must understand the fluid's overall behavior from the reservoir to the production and processing facilities, and ultimately to the refinery. Modern computer software that uses equation of state (EOS) models to simulate experiments and illustrate fluid phase characteristics has contributed to its growth as a distinct field of study. To find the operating parameters that will maximize the surface liquid content and prolong the production plateau duration at the lowest feasible cost, PVT simulations are run. These simulations employ laboratory-derived data to fine-tune the EOS models, with the outcomes being integrated into reservoir simulation and research. The quality of the data is crucial to getting a good match between EOS and laboratory data, and for retrograde gas condensates, this can be particularly difficult because of their complex phase behavior. When utilized in reservoir simulations, an inadequate match leads to computational mistakes and unrepresentative findings, endangering the reservoir management decisions that depend on it.
APA, Harvard, Vancouver, ISO, and other styles
7

Abbas, Rianat, Sunday Jacob Nwanyim, Joy Awoleye Adesina, Augustine Udoka Obu, Adetomiwa Adesokan, and Jeremiah Folorunso. "Secure by design - enhancing software products with AI-Driven security measures." Computer Science & IT Research Journal 6, no. 3 (2025): 184–200. https://doi.org/10.51594/csitrj.v6i3.1880.

Full text
Abstract:
As cyber threats continue to evolve in scale and complexity, traditional reactive security measures no longer suffice. This study explores the integration of AI-driven security within the Secure by Design framework as a forward-looking approach to building inherently secure digital products across industries. Rather than treating security as an afterthought, Secure by Design embeds protective mechanisms—such as encryption, predictive analytics, and real-time threat detection—throughout the product development lifecycle. This research employs quantitative design, surveying 203 professionals from sectors including finance, software development, agriculture, and construction. It investigates the adoption, effectiveness, and challenges of AI-powered security measures, using machine learning algorithms to analyze key security features. The findings reveal that encryption, predictive security, and automated response systems are the most impactful components in strengthening product security. The model achieved a strong performance with an accuracy of 79%, though challenges such as false positives and integration complexity persist. Despite growing awareness, many organizations still address security reactively, with only 14.8% incorporating it during the design phase. Barriers such as limited awareness, cost, and complexity continue to slow adoption. However, 74.9% of respondents express openness to deeper AI integration in future product developments, highlighting optimism about its potential. This study reinforces the need for a proactive shift in security practices, where AI not only supports real-time threat detection but also future-proofs products in an increasingly hostile cyber landscape. By embedding AI into the design phase, organizations can reduce attack surfaces, comply with regulatory demands, and build stakeholder trust. Future research should explore industry-specific implementations, autonomous AI systems in low-tech environments, and the scalability of cross-sector security frameworks. Keywords: Secure by Design, AI-Driven Security, Encryption, Predictive Threat Detection, Machine Learning, Product Development.
APA, Harvard, Vancouver, ISO, and other styles
8

Nevendra, Meetesh, and Pradeep Singh. "Cross-Project Defect Prediction with Metrics Selection and Balancing Approach." Applied Computer Systems 27, no. 2 (2022): 137–48. http://dx.doi.org/10.2478/acss-2022-0015.

Full text
Abstract:
In software development, defects affect quality and cost in an undesirable way. Software defect prediction (SDP) is one of the techniques that improves software quality and testing efficiency through early identification of defects (bugs/faults/errors), and several approaches to defect prediction (DP) have been proposed. DP methods mainly use historical project data to construct prediction models. SDP performs well within a project as long as an adequate amount of data is available to train the models. However, if data for the same project are inadequate or limited, researchers turn to Cross-Project Defect Prediction (CPDP), an alternative that anticipates defects using prediction models built on historical data from other projects. CPDP is challenging because of data distribution and domain differences. The proposed framework is a two-stage approach to CPDP: model generation and prediction. In the model generation phase, a combination of pre-processing steps, including feature selection and class reweighting, is used to improve the initial data quality; a fine-tuned bagging- and boosting-based hybrid ensemble model is then developed, which avoids over-fitting/under-fitting and enhances prediction performance. In the prediction phase, the generated model classifies data from other projects as defective or clean. The framework is evaluated using 25 software projects obtained from public repositories. The result analysis shows that the proposed model achieves an f1-score of 0.71±0.03, improving on state-of-the-art approaches by 23% to 60%.
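An illustrative cross-project setup in this spirit, training on one synthetic "project" and evaluating on another with metric selection and a bagging-plus-boosting ensemble, is sketched below. The authors' exact pre-processing, class reweighting, and tuning are not reproduced, and the data is synthetic.

```python
# Illustrative cross-project defect prediction pipeline: feature (metric) selection
# plus a bagging + boosting ensemble, trained on a source project and evaluated on a
# target project. Class reweighting, used in the paper, is omitted here for brevity.

from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier, GradientBoostingClassifier, VotingClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.metrics import f1_score
from sklearn.pipeline import make_pipeline

# Two synthetic "projects" with different class balance to mimic the cross-project gap.
X_src, y_src = make_classification(n_samples=400, n_features=20, weights=[0.8], random_state=1)
X_tgt, y_tgt = make_classification(n_samples=200, n_features=20, weights=[0.7], random_state=2)

ensemble = VotingClassifier(
    estimators=[
        ("bagging", BaggingClassifier(n_estimators=50, random_state=0)),
        ("boosting", GradientBoostingClassifier(random_state=0)),
    ],
    voting="soft",
)
pipeline = make_pipeline(SelectKBest(f_classif, k=10), ensemble)
pipeline.fit(X_src, y_src)             # train on the source project only
predictions = pipeline.predict(X_tgt)  # predict defect-proneness in the target project
print("cross-project F1:", round(f1_score(y_tgt, predictions), 3))
```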
APA, Harvard, Vancouver, ISO, and other styles
9

Jausovec, Marko, and Metka Sitar. "Comparative Evaluation Model Framework for Cost-Optimal Evaluation of Prefabricated Lightweight System Envelopes in the Early Design Phase." Sustainability 11, no. 18 (2019): 5106. http://dx.doi.org/10.3390/su11185106.

Full text
Abstract:
This paper proposes an extended comparative evaluation model framework (ECEMF) that highlights two objectives: (1) a specific economic evaluation method for the cost-optimisation of prefabricated lightweight system envelopes to achieve a greater value of the building, and (2) a comparative evaluation model framework usable by different profiles of stakeholders when deciding on the optimal envelope type in the early design phase. Based on the proposed framework, the analysis was conducted for a case study building representing a small single-family house located in Slovenia. The methodology applied is based on the life cycle cost (LCC), including construction, operation, maintenance, and refurbishment costs but excluding dismantling, disposal, and reuse, for a 50-year building lifetime, and combines Building Information Modelling (BIM) with a Value for Money (VfM) assessment. To exploit the automated evaluation process in the computing environment, several tools were used, including Archicad for BIM in combination with Legep software for LCC. On one hand, the model confirms the assumption that the optimal value parameters of a building do not only depend on the typical costs related to high-performance buildings. On the other hand, from the stakeholders' view, the model enables the choice of the optimal envelope type to be made in the early design phase. In this view, the model could function as an important decision tool, with a direct economic impact on the value.
APA, Harvard, Vancouver, ISO, and other styles
10

Kanwal, Iqra, and Ali Afzal Malik. "Regression-based predictive modelling of software size of fintech projects using technical specifications." Mehran University Research Journal of Engineering and Technology 44, no. 2 (2025): 164–73. https://doi.org/10.22581/muet1982.3289.

Full text
Abstract:
This research aims to develop a predictive model to estimate the lines of code (LOC) of software projects using technical requirements specifications. It addresses the recurring issue of inaccurate effort and cost estimation in software development that often results in budget overruns and delays. This study includes a detailed analysis of a dataset comprising past real-life software projects. It focuses on extracting relevant predictors from projects' requirements written in technical and easily comprehensible natural language. To assess feasibility, a pilot study is conducted at the beginning. Then, Simple Linear Regression (SLR) is employed to determine the relative predictive strength of eight potential predictors identified earlier. The number of API calls is found to be the strongest independent predictor (R2 = 0.670) of LOC. The subsequent phase entails constructing a software size prediction model using Forward Stepwise Multiple Linear Regression (FSMLR). The adjusted R2 value of the final model indicates that two factors – the number of API calls and the number of GUI fields – account for more than 80% of the variation in code size (measured using LOC). Model validation is performed using k-fold cross-validation. Validation results are also promising. The average MMRE of all folds is 0.203 indicating that, on average, the model's predictions are off by approximately 20% relative to the actual values. The average PRED (25) is 0.708 implying that nearly 71% of predicted size values are within 25% of the actual size values. This model can help project managers in making better decisions regarding project management, budgeting, and scheduling.
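The two accuracy measures quoted in this abstract, MMRE and PRED(25), can be computed as in the short sketch below; the sample actual and estimated LOC values are made up purely for illustration.

```python
# Sketch of the accuracy measures reported in the abstract: MMRE (mean magnitude of
# relative error) and PRED(25) (fraction of estimates within 25% of the actual value).
# The sample values below are invented.

def mmre(actual, predicted):
    return sum(abs(a - p) / a for a, p in zip(actual, predicted)) / len(actual)

def pred(actual, predicted, threshold=0.25):
    hits = sum(1 for a, p in zip(actual, predicted) if abs(a - p) / a <= threshold)
    return hits / len(actual)

if __name__ == "__main__":
    actual_loc = [1200, 3400, 560, 8100, 2300]
    estimated_loc = [1000, 3600, 700, 7600, 2250]
    print("MMRE     =", round(mmre(actual_loc, estimated_loc), 3))
    print("PRED(25) =", round(pred(actual_loc, estimated_loc), 3))
```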
APA, Harvard, Vancouver, ISO, and other styles
More sources

Dissertations / Theses on the topic "Constructive Cost Model and Software Development Phase"

1

Burnett, Robert Carlisle. "A trade-off model between cost and reliability during the design phase of software development." Thesis, University of Newcastle Upon Tyne, 1995. http://hdl.handle.net/10443/2104.

Full text
Abstract:
This work proposes a method for estimating the development cost of a software system with a modular structure, taking into account the target level of reliability for that system. The required reliability of each individual module is set in order to meet the overall required reliability of the system. Consequently, the individual cost estimates for each module and the overall cost of the software system are linked to the overall required reliability. Cost estimation is carried out during the early design phase, that is, well in advance of any detailed development. Where a satisfactory compromise between cost and reliability is feasible, this will enable a project manager to plan the allocation of resources to the implementation and testing phases so that the estimated total system cost does not exceed the project budget and the estimated system reliability matches the required target. The line of argument developed here is that the operational reliability of a software module can be linked to the effort spent during the testing phase: a higher level of desired reliability will require more testing effort and will therefore cost more. A method is developed which enables us to estimate the cost of development based on an estimate of the number of faults to be found and fixed in order to achieve the required reliability, using data obtained from the requirements specification and historical data. Using Markov analysis, a method is proposed for allocating an appropriate reliability requirement to each module of a modular software system. A formula to calculate an estimate of the overall system reliability is established. Using this formula, a procedure to allocate the reliability requirement for each module is derived using a minimization process, which takes into account the stipulated overall required level of reliability. This procedure allows us to construct scenarios for cost and the overall required reliability. The foremost application of the outcome of this work is to establish a basis for a trade-off model between cost and reliability during the design phase of the development of a modular software system. The proposed model is easy to understand and suitable for use by a project manager.
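As a simplified illustration of the allocation problem the thesis addresses, the sketch below apportions a system reliability target equally across modules of a serial composition. The thesis's own procedure uses a Markov usage model and a cost-minimisation step, which this stand-in does not reproduce.

```python
# Illustrative sketch: allocate module reliability targets so that a serial composition
# meets an overall system target. Equal apportionment is a simplified stand-in for the
# thesis's Markov-based, cost-minimising allocation.

def allocate_serial(system_target, n_modules):
    """Each module gets the same target such that the product meets the system goal."""
    return system_target ** (1.0 / n_modules)

def system_reliability(module_reliabilities):
    """Serial composition: the system works only if every module works."""
    r = 1.0
    for ri in module_reliabilities:
        r *= ri
    return r

if __name__ == "__main__":
    target, n = 0.95, 6
    per_module = allocate_serial(target, n)
    print(f"per-module target: {per_module:.4f}")
    print(f"resulting system reliability: {system_reliability([per_module] * n):.4f}")
```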
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Constructive Cost Model and Software Development Phase"

1

Kama, Nazri, Sufyan Basri, Mehran Halimi Asl, and Roslina Ibrahim. "COCHCOMO: A Change Effort Estimation Tool for Software Development Phase." In Frontiers in Artificial Intelligence and Applications. IOS Press, 2014. https://doi.org/10.3233/978-1-61499-434-3-1029.

Full text
Abstract:
It is important for a software project manager to make effective decisions when managing software changes during software development. One input that helps in deciding whether to accept or reject a change is a reliable estimate of the change effort it will produce. From a software development perspective, the estimation has to take into account the inconsistent states of software artifacts across the project lifecycle, i.e., fully developed and partially developed. This research introduces a new change effort estimation tool (Constructive Change Cost Model, or COCHCOMO) that is able to take the inconsistent states of software artifacts into account in its estimation process. The tool was developed based on our extended version of static and dynamic impact analysis techniques. Extensive experiments using several case studies have been conducted, and the results show that acceptable error rates are achieved.
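The sketch below is only a toy illustration of the underlying idea: scaling the effort of a change by how far each impacted artifact has progressed. The factors and productivity value are invented, and COCHCOMO's actual equations are not reproduced here.

```python
# Toy illustration of change effort estimation during development: the effort of a
# change is scaled by how complete each impacted artifact already is (fully vs.
# partially developed). All factors below are hypothetical placeholders.

def change_effort(impacted_artifacts, hours_per_size_unit=2.0):
    """impacted_artifacts: list of (estimated_change_size, completion_fraction)."""
    total = 0.0
    for change_size, completion in impacted_artifacts:
        # A change to a fully developed artifact costs more rework than one still in progress.
        rework_factor = 0.5 + 0.5 * completion
        total += change_size * rework_factor * hours_per_size_unit
    return total

if __name__ == "__main__":
    artifacts = [(12, 1.0), (8, 0.4), (5, 0.0)]   # (size units, fraction complete)
    print(f"estimated change effort: {change_effort(artifacts):.1f} hours")
```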
APA, Harvard, Vancouver, ISO, and other styles
2

Safavi, Sarah Afzal, and Maqbool Uddin Shaikh. "Effort Estimation Model for each Phase of Software Development Life Cycle." In Handbook of Research on E-Services in the Public Sector. IGI Global, 2011. http://dx.doi.org/10.4018/978-1-61520-789-3.ch021.

Full text
Abstract:
The assessment of the main risks in software development discloses that a major source of delays is poor effort/cost estimation of the project; poor cost estimation is the second-highest-priority risk [Basit Shahzad]. This risk can affect four out of the five phases of the software development life cycle, i.e., Analysis, Design, Coding and Testing, so targeting this risk alone may reduce the overall risk impact of the project by fifty percent. Architectural design of the system is a major activity that consumes much of the time in the SDLC, and effort is clearly put forth to produce the design of the system. Yet none of the existing estimation models tries to calculate the effort put into designing the system. The use case estimation model uses use case points to estimate the cost, but what is the cost of creating the use cases themselves? One reason for the poor estimates produced by existing models can be this neglect of design effort/cost, which should therefore be well estimated to prevent any cost overrun of the project. We propose a model to estimate the effort in each of these phases rather than relying on the cost estimation of the coding phase only. It will also ease the monitoring of project status and the comparison of planned cost against the actual cost incurred at any point in time.
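A minimal sketch of the phase-wise idea, splitting a total effort estimate across the SDLC phases rather than attributing it all to coding, is given below; the phase percentages are placeholder assumptions, not the authors' model.

```python
# Sketch of splitting a total effort estimate across SDLC phases instead of
# attributing everything to coding. The phase fractions are illustrative assumptions.

PHASE_SPLIT = {          # assumed fractions of total effort; must sum to 1.0
    "analysis": 0.15,
    "design":   0.25,
    "coding":   0.35,
    "testing":  0.25,
}

def phase_effort(total_person_months):
    return {phase: round(total_person_months * share, 1)
            for phase, share in PHASE_SPLIT.items()}

if __name__ == "__main__":
    # e.g. distribute a COCOMO-style total estimate of 48 person-months
    for phase, pm in phase_effort(48).items():
        print(f"{phase:<8} {pm} person-months")
```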
APA, Harvard, Vancouver, ISO, and other styles
3

Ilyas, Qazi Mudassar. "Ontology Augmented Software Engineering." In Software Development Techniques for Constructive Information Systems Design. IGI Global, 2013. http://dx.doi.org/10.4018/978-1-4666-3679-8.ch023.

Full text
Abstract:
Semantic Web was proposed to make content machine-understandable by developing ontologies to capture domain knowledge and annotating content with this domain knowledge. Although the original idea of the semantic web was to make content on the World Wide Web machine-understandable, with recent advancements and awareness of these technologies, researchers have applied ontologies in many interesting domains. Many phases in software engineering depend on the availability of knowledge, and the use of ontologies to capture and process this knowledge is a natural choice. This chapter discusses how ontologies can be used in various stages of the system development life cycle. Ontologies can support the requirements engineering phase in identifying and fixing inconsistent, incomplete, and ambiguous requirements; they can also be used to model the requirements and assist in requirements management and validation. During the software design and development stages, ontologies can help software engineers in finding suitable components, managing documentation of APIs, and coding support. Ontologies can help in the system integration and evolution process by aligning various databases, capturing knowledge about database schemas and mapping them to concepts in an ontology. Ontologies can also be used in software maintenance by developing a bug tracking system based upon ontological knowledge of the software artifacts and the roles of the developers involved in the software maintenance task.
APA, Harvard, Vancouver, ISO, and other styles
4

Siau, Keng, and Yuhong Tian. "Open Source Software Development Process Model." In Open Source Technology. IGI Global, 2015. http://dx.doi.org/10.4018/978-1-4666-7230-7.ch051.

Full text
Abstract:
The global open source movement has provided software users with more choices, lower software acquisition cost, more flexible software customization, and possibly higher quality software product. Although the development of open source software is dynamic and it encourages innovations, the process can be chaotic and involve members around the globe. An Open Source Software Development (OSSD) process model to enhance the survivability of OSSD projects is needed. This research uses the grounded theory approach to derive a Phase-Role-Skill-Responsibility (PRSR) OSSD process model. The three OSSD process phases -- Launch Stage, Before the First Release, and Between Releases -- address the characteristics of the OSSD process as well as factors that influence the OSSD process. In the PRSR model, different roles/actors are required to have different skills and responsibilities corresponding to each of the three OSSD process phases. This qualitative research contributes to the software development literature as well as open source practice.
APA, Harvard, Vancouver, ISO, and other styles
5

Pal, Kamalendu. "Markov Decision Theory-Based Crowdsourcing Software Process Model." In Research Anthology on Agile Software, Software Development, and Testing. IGI Global, 2022. http://dx.doi.org/10.4018/978-1-6684-3702-5.ch010.

Full text
Abstract:
The word crowdsourcing, a compound contraction of crowd and outsourcing, was introduced by Jeff Howe to describe outsourcing to the crowd. It is a sourcing model in which individuals or organizations obtain goods and services, including ideas and the development of software, hardware, or any other business task, from a large, relatively open and often rapidly evolving group of internet users; it divides work between participants to achieve a cumulative result. Crowdsourcing has been used for completing various human intelligence tasks in the past, and it is an emerging form of outsourcing software development, as it has the potential to significantly reduce implementation cost. This chapter analyses the process of software development on a crowdsourced platform. The work analyses and identifies the phase-wise deliverables in a competitive software development problem. It also proposes the use of Markov decision theory to model the dynamics of the software development process, using a simulated example.
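As a toy illustration of applying Markov decision theory to development phases, the sketch below solves a small invented phase model by value iteration; the states, actions, probabilities, and rewards are not taken from the chapter's simulated example.

```python
# Toy Markov decision process over development phases of a crowdsourced task, solved
# by value iteration. All states, transitions and rewards are invented for illustration.

STATES = ["spec", "development", "review", "done"]

# transitions[state][action] = list of (probability, next_state, reward)
TRANSITIONS = {
    "spec":        {"continue": [(1.0, "development", -1)]},
    "development": {"continue": [(0.7, "review", -2), (0.3, "development", -2)],
                    "repost":   [(1.0, "development", -4)]},
    "review":      {"continue": [(0.8, "done", 10), (0.2, "development", -1)]},
    "done":        {},   # terminal
}

def value_iteration(gamma=0.95, sweeps=100):
    v = {s: 0.0 for s in STATES}
    for _ in range(sweeps):
        for s, actions in TRANSITIONS.items():
            if actions:
                v[s] = max(sum(p * (r + gamma * v[s2]) for p, s2, r in outcomes)
                           for outcomes in actions.values())
    return v

if __name__ == "__main__":
    print(value_iteration())
```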
APA, Harvard, Vancouver, ISO, and other styles
6

Sidhu, Arshpreet Kaur, and Sumeet Kaur Sehra. "Use of Software Metrics to Improve the Quality of Software Projects Using Regression Testing." In Research Anthology on Agile Software, Software Development, and Testing. IGI Global, 2022. http://dx.doi.org/10.4018/978-1-6684-3702-5.ch020.

Full text
Abstract:
Testing of software is broadly divided into three types, i.e., code-based, model-based, and specification-based. To find faults at an early stage, model-based testing can be used, in which testing can start from the design phase. Furthermore, in this chapter, regression testing is used to generate new test cases and to ensure the quality of the changed software. Early detection of faults will not only reduce the cost, time and effort of developers but will also help in finding risks. We use structural metrics to check the effect of changes made to the software. Finally, the authors suggest identifying metrics and analyzing the results using the NDepend simulator. If the results show deviation from standards, regression testing is performed again to improve the quality of the software.
APA, Harvard, Vancouver, ISO, and other styles
7

Sidhu, Arshpreet Kaur, and Sumeet Kaur Sehra. "Use of Software Metrics to Improve the Quality of Software Projects Using Regression Testing." In Analyzing the Role of Risk Mitigation and Monitoring in Software Development. IGI Global, 2018. http://dx.doi.org/10.4018/978-1-5225-6029-6.ch012.

Full text
Abstract:
Testing of software is broadly divided into three types, i.e., code-based, model-based, and specification-based. To find faults at an early stage, model-based testing can be used, in which testing can start from the design phase. Furthermore, in this chapter, regression testing is used to generate new test cases and to ensure the quality of the changed software. Early detection of faults will not only reduce the cost, time and effort of developers but will also help in finding risks. We use structural metrics to check the effect of changes made to the software. Finally, the authors suggest identifying metrics and analyzing the results using the NDepend simulator. If the results show deviation from standards, regression testing is performed again to improve the quality of the software.
APA, Harvard, Vancouver, ISO, and other styles
8

Pal, Kamalendu. "Markov Decision Theory-Based Crowdsourcing Software Process Model." In Advances in Systems Analysis, Software Engineering, and High Performance Computing. IGI Global, 2020. http://dx.doi.org/10.4018/978-1-5225-9659-2.ch001.

Full text
Abstract:
The word crowdsourcing, a compound contraction of crowd and outsourcing, was introduced by Jeff Howe to describe outsourcing to the crowd. It is a sourcing model in which individuals or organizations obtain goods and services, including ideas and the development of software, hardware, or any other business task, from a large, relatively open and often rapidly evolving group of internet users; it divides work between participants to achieve a cumulative result. Crowdsourcing has been used for completing various human intelligence tasks in the past, and it is an emerging form of outsourcing software development, as it has the potential to significantly reduce implementation cost. This chapter analyses the process of software development on a crowdsourced platform. The work analyses and identifies the phase-wise deliverables in a competitive software development problem. It also proposes the use of Markov decision theory to model the dynamics of the software development process, using a simulated example.
APA, Harvard, Vancouver, ISO, and other styles
9

Uppal, Mudita, Deepali Gupta, and Vaishali Mehta. "A Bibliometric Analysis of Fault Prediction System using Machine Learning Techniques." In Challenges and Opportunities for Deep Learning Applications in Industry 4.0. BENTHAM SCIENCE PUBLISHERS, 2022. http://dx.doi.org/10.2174/9789815036060122010008.

Full text
Abstract:
Fault prediction in software is an important aspect to be considered in software development because it ensures the reliability and quality of a software product. A high-quality software product contains only a small number of faults and failures. Software fault prediction (SFP) is crucial for the software quality assurance process as it examines the vulnerability of software products to failures. Fault detection is a significant aspect of cost estimation in the initial stage, and hence a fault predictor model is required to lower the expenses incurred during the development and maintenance phases. SFP is applied to identify the faulty modules of the software in order to complement the development as well as the testing process. Software-metric-based fault prediction reflects several aspects of the software. Several Machine Learning (ML) techniques have been implemented to eliminate faulty and unnecessary data from faulty modules. This chapter gives a brief introduction to SFP and includes a bibliometric analysis. The objective of the bibliometric analysis is to analyze research trends of ML techniques that are used for predicting software faults. This chapter uses the VOSviewer software and Biblioshiny tool to visually analyze 1623 papers fetched from the Scopus database for the past twenty years. It explores the distribution of publications over the years, top-rated publishers, contributing authors, funding agencies, cited papers and citations per paper. The collaboration of countries, co-occurrence analysis, and the year-by-year trend of author keywords are also explored. This chapter can be beneficial for young researchers to locate attractive and relevant research insights within SFP.
APA, Harvard, Vancouver, ISO, and other styles
10

Christos, Georgousopoulos, Xenia Ziouvelou, Gregory Yovanof, and Antonis Ramfos. "An Open Source e-Procurement Application Framework for B2B and G2B." In Innovations in SMEs and Conducting E-Business. IGI Global, 2011. http://dx.doi.org/10.4018/978-1-60960-765-4.ch005.

Full text
Abstract:
Since the early 1980s, Open Source Software (OSS) has gained a strong interest and an increased acceptance in the software industry that has to date initiated a “paradigm shift” (O’Reilly, 2004). The Open Source paradigm has introduced wholly new means of software development and distribution, creating a significant impact on the evolution of numerous business processes. In this chapter we examine the impact of the open source paradigm in the e-Procurement evolution and identify a trend towards Open Source e-Procurement Application Frameworks (AFs) which enable the development of tailored e-Procurement Solutions. Anchored in this notion, we present an Open-Source e-Procurement AF with a two-phase generation procedure. The innovative aspect of the proposed model relates to the combination of the Model Driven Engineering (MDE) approach with the Service-Oriented Architecture (SOA) paradigm for enabling the cost-effective production of e-Procurement Solutions by facilitating integration, interoperability, easy maintenance, and management of possible changes in the European e-Procurement environment. The assessment process of the proposed AF and its resulting e-Procurement Solutions occurs in the context of G2B in the Western-Balkan European region. Our evaluation yields positive results and further enhancing opportunities for the proposed Open Source e-Procurement AF and its resulting e-Procurement Solutions.
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Constructive Cost Model and Software Development Phase"

1

Holzer, I., E. Kozeschnik, and H. Cerjak. "Prediction of the Loss of Precipitation Strengthening in Modern 9-12% Cr Steels – A Numerical Approach." In AM-EPRI 2007, edited by R. Viswanathan, D. Gandy, and K. Coleman. ASM International, 2007. https://doi.org/10.31399/asm.cp.am-epri-2007p0197.

Full text
Abstract:
The creep resistance of 9-12% Cr steels is significantly influenced by the presence and stability of different precipitate populations. Numerous secondary phases grow, coarsen and, sometimes, dissolve again during heat treatment and service. Based on the software package MatCalc, the evolution of these precipitates during the thermal treatment of the COST 522 steel CB8 is simulated from the cooling process after cast solidification through heat treatment and service, up to the intended service lifetime of 100,000 h. On the basis of the results obtained from these simulations, in combination with a newly implemented model for evaluating the maximum threshold stress due to particle strengthening, the strengthening effect of each individual precipitate phase, as well as the combined effect of all phases, is evaluated - a quantification of the influence of Z-Phase formation on the long-term creep behaviour is thus made possible. This opens a wide field of application for alloy development and leads to a better understanding of the evolution of microstructural components as well as the mechanical properties of these complex alloys.
APA, Harvard, Vancouver, ISO, and other styles
2

Srinivasan, Sridhar, Vishal Lagad, and Russell D. Kane. "Evaluation of Prediction Tool for Sour Water Corrosion Quantification and Management in Refineries." In CORROSION 2009. NACE International, 2009. https://doi.org/10.5006/c2009-09337.

Full text
Abstract:
The Sour Water Joint Industry Program (JIP) has led to the development of new data and insights related to corrosion in H2S-dominated ammonium bisulfide sour water systems, commonly found in a number of refinery processes. The JIP, termed Sour Water Phase I, provided a new methodology and software prediction tool targeted at predicting and quantifying corrosion in hydro-processing units, specifically reactor effluent air cooler (REAC) equipment and associated piping. Based on comprehensive laboratory data, multiphase flow modeling and numerical data interpolation, the Sour Water Corrosion Prediction Software (prediction tool) facilitates corrosion prediction, failure prevention, optimized material selection and safe operational practice as a basis for compelling cost savings and improved maintenance planning in refinery operations. This paper evaluates the efficacy of the JIP-based Prediction System when applied to real plant evaluation situations in a variety of refinery conditions. The paper also details appropriate methods of utilizing the data and functionality within the software tool to ensure optimized prediction accuracy. Case studies identifying pitfalls to avoid while using the software model are also provided, alongside a comparison of predicted corrosion rates with measured rates from inspection. The case studies and trends presented demonstrate the importance of rigorous flow modeling and the need for accurate characterization of the effects of ammonium bisulfide (NH4HS) concentration and environmental parametric data variables in assessing the corrosivity of alkaline sour water systems.
APA, Harvard, Vancouver, ISO, and other styles
3

Justo, Gabriel, Bruno Michael Mollon, Rodrigo Struck da Rosa, Willian Montanari, and Frederico Nodari Pio. "Highway Trailer Pneumatic Suspension Bracket Development Using Test Data and Meshless Simulation Technology." In 12th SAE BRASIL Colloquium on Suspensions and Road Implements & Engineering Exhibition. SAE International, 2023. http://dx.doi.org/10.4271/2023-36-0364.

Full text
Abstract:
This paper presents a new product development cycle focused on moving simulation technology from the middle and late cycles of the design process to the front of it. The main goal of this shift is to reduce time-to-market and costs while increasing product innovation. The proposed workflow includes the use of meshless technology in the concept definition phase of the design and the use of the finite element method to validate the final version of the design. The workflow was expanded to include the use of test data to define realistic load cases and correctly dimension the component to meet the project criteria. In this study, the commercial software Altair SimSolid and Altair OptiStruct were used to evaluate the structural performance of the design proposals, and the test data used to define the model's boundary conditions were obtained by combining roads that represent the Suspensys accelerated durability procedure. The resulting product of this workflow was a pneumatic suspension bracket with the same mass as the original design but 5% stiffer, demonstrating the effectiveness of the proposed methodology. This document also covers the procedure description for the proposed development cycle, including the virtual model construction; a comparison between the methods in terms of the time and effort required to build the model in the pre-processing phase, and the computational time and cost to perform the calculation; and the answers found during post-processing to ensure that the benefits of implementing a new methodology do not compromise the accuracy of the final result. Concepts needed to define the test duration and the roads used in the physical test, as well as the processing of the measured data, are also presented.
APA, Harvard, Vancouver, ISO, and other styles
4

Krasova, D., O. J. Andersen, and C. Achen. "Enhanced Well Cost Management Through Innovative Methodology in Drilling and Completions Cost Estimation and Tracking: A Case Study in Malaysia." In International Petroleum Technology Conference. IPTC, 2025. https://doi.org/10.2523/iptc-25091-ms.

Full text
Abstract:
Abstract The development of marginal fields with limited budgets necessitates effective well cost management, beginning with cost estimation for investment decision-making, continuing through cost estimation for corporate budgeting, evaluation of cost-saving potentials through well plan optimization, and well cost monitoring during operation. This paper describes the implementation of advanced methodologies utilizing digital tools to modernize the well construction cost management workflow, adopted by an operator in Malaysia. The application's flexible architecture facilitates a scalable workflow to meet the company's cost estimation standards, in which each front-end stage requires a different level of input granularity depending primarily on the quality of information available to assemble the estimate. This approach effectively indicates the variance in the expected well cost at every stage throughout the data maturation. The probabilistic cost estimation incorporates the assessment of operational risks and performance through offset well information extracted from the daily operation database. During the operation phase, the final well cost estimate model serves as the baseline for well cost tracking, where executed activity is benchmarked against the plan. This allows monitoring of time and cost variations, as well as the generation of a comprehensive lookahead for measuring the campaign performance. The operator's transition to the digital well cost platform in 2021 marked a departure from conventional spreadsheets and standalone applications. This cloud-based solution improves data management and facilitates collaboration among finance analysts, cost controllers, drilling and completion engineers, and managers. Cloud computing efficiency transformed hours-long Monte Carlo probabilistic computations to seconds, enabling more effective cost planning through what-if scenarios and sensitivity analyses, thus streamlining decision-making, especially when well plans or batch operation sequences change. The offset wells data analytics model provides precise predictions of performance and common risks captured in historical non-productive time (NPT) events. This transformation allowed for detailed operational time and cost tracking, closely monitoring deviations from the plan due to operational factors like performance, plan changes, or the execution of improvement plans during operation, aiding in the generation of accurate lookahead projections, identifying lessons learned and cost-saving opportunities. Through effective cost management, the company successfully generated accurate estimates of expected costs, achieving up to 95% accuracy when no major scope changes occur. The operational excellence was recognized with industry awards, highlighting achievements in the lowest cost per foot in drilling and completions in the years 2022 and 2023. This paper presents a case study on a two-year implementation of improved well cost management by an operator. Through cloud-native collaborative software, the operator achieved higher confidence in cost estimates, improved cost control, and enhanced assurance that operation cost tracking is based on valid measures.
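A minimal sketch of the probabilistic cost estimation idea described here, sampling uncertain activity durations and reading off P10/P50/P90 costs, is shown below; the activities, distributions, and day rate are hypothetical and unrelated to the operator's data.

```python
# Sketch of Monte Carlo well cost estimation: sample uncertain activity durations,
# convert to cost, and report P10/P50/P90 percentiles. All inputs are hypothetical.

import random

ACTIVITIES = [            # (name, min_days, most_likely_days, max_days)
    ("mobilisation", 2, 3, 6),
    ("drilling",     10, 14, 22),
    ("completion",   5, 7, 12),
]
DAY_RATE = 250_000        # assumed spread cost per day, in currency units

def simulate(n=10_000, seed=7):
    rng = random.Random(seed)
    costs = []
    for _ in range(n):
        days = sum(rng.triangular(lo, hi, mode) for _, lo, mode, hi in ACTIVITIES)
        costs.append(days * DAY_RATE)
    costs.sort()
    return {p: costs[int(p / 100 * n)] for p in (10, 50, 90)}

if __name__ == "__main__":
    for p, cost in simulate().items():
        print(f"P{p}: {cost:,.0f}")
```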
APA, Harvard, Vancouver, ISO, and other styles
5

Ligetti, Christopher B., Daniel A. Finke, and Jim Bean. "Construction Visualization Modeling." In SNAME Maritime Convention. SNAME, 2010. http://dx.doi.org/10.5957/smc-2010-p04.

Full text
Abstract:
This paper discusses the development of a software tool that utilizes light-weight versions of 3D digital product models and detailed module construction and erection schedules to visually validate and modify the erection sequence. The tool enables planners to quickly and easily modify the schedule to improve the sequence during the planning phase, where planners with limited knowledge of the predecessor/successor relationships can visually determine erection sequence conflicts. In addition, the Erection Visualization Tool (EVT) provides a means for communicating module status and aids decision making throughout the erection process to avoid excess construction costs and ensure on time delivery of ships.
APA, Harvard, Vancouver, ISO, and other styles
6

Araújo, Caroline Silva, Leandro Cândido de Siqueira, Bruno Leão de Brito, and Emerson de Andrade Marques Ferreira. "Rotina de programação para geração de modelos BIM visando estimativas de custos." In XI SIMPÓSIO BRASILEIRO DE GESTÃO E ECONOMIA DA CONSTRUÇÃO. Antac, 2021. http://dx.doi.org/10.46421/sibragec.v11i00.84.

Full text
Abstract:
This study presents a comparison between a general programming routine and a reference routine from the literature. Both routines propose a cost estimate in the initial phases of the project using generative programming associated to BIM, which involves generating a model and extracting quantitative data through Dynamo, Revit and Excel software. The purpose of this study is to refine the reference routine to allow variations in the construction model by modifying only the data inputs, without having to change the programming structure for each new modeled solution. The research strategy adopted in this work is Design Science Research (DSR), which is oriented to the solution of relevant and pragmatic real-world problems. It has been divided into the stages of awareness, suggestion, development, evaluation and completion. In order to compare the results, the structural elements (beams, slabs and pillars) of a building were modeled through the general routine. The routine presented an error of 12% in relation to the budgeted cost and was considered a viable and fast solution for the creation of BIM models for preliminary cost estimates, in order to support decisions in the initial phases of the project.
APA, Harvard, Vancouver, ISO, and other styles
7

Şaykol, Ediz. "An Economic Analysis of Software Development Process based on Cost Models." In International Conference on Eurasian Economies. Eurasian Economists Association, 2012. http://dx.doi.org/10.36880/c03.00427.

Full text
Abstract:
The software development process generally includes the requirement analysis, design, implementation, and testing phases; the overall process is called the Software Development Life Cycle (SDLC). In earlier times each of these steps was executed sequentially, i.e., the output of a step was used as the input to the following step. This sequential execution is called the waterfall process, and since the total duration of the SDLC has been increasing, more dynamic models need to be employed in today's software engineering methodologies; the V-shaped model, the Spiral model, and incremental or iterative software development are some examples. On the other hand, due to the increase in the total number of projects per company and due to product variability, the reusability aspect has entered the domain. This aspect gained importance in recent years, leading to framework-based models and the software product line engineering process. In this study, the above methodologies are analyzed from an economic perspective with respect to their cost models. Reusability requires an upfront investment, but the gain grows as the number of common software items, determined in the commonality/variability analysis phase, increases. Improvements in the SDLC might also require organizational changes to adopt new methodologies. These considerations are discussed along with the cost model analysis, and a cost-evaluation criterion is provided in the paper.
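A back-of-the-envelope sketch of the reuse economics discussed here, comparing cumulative cost with and without an upfront reuse investment, follows; all numbers are illustrative assumptions rather than figures from the paper.

```python
# Back-of-the-envelope comparison of cumulative cost with and without reuse: reuse adds
# an upfront investment but lowers per-project cost, so it breaks even after enough
# projects. All numbers are illustrative assumptions.

def cumulative_costs(n_projects, base_cost=100.0, reuse_upfront=150.0, reuse_saving=0.4):
    without = [base_cost * k for k in range(1, n_projects + 1)]
    with_reuse = [reuse_upfront + base_cost * (1 - reuse_saving) * k
                  for k in range(1, n_projects + 1)]
    return without, with_reuse

if __name__ == "__main__":
    without, with_reuse = cumulative_costs(6)
    for k, (a, b) in enumerate(zip(without, with_reuse), start=1):
        marker = "<- reuse cheaper" if b < a else ""
        print(f"after {k} projects: no-reuse {a:6.1f} | reuse {b:6.1f} {marker}")
```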
APA, Harvard, Vancouver, ISO, and other styles
8

Sarafim, Diego S., Karina V. Delgado, and Daniel Cordeiro. "Detecting Code Smells in JavaScript: An Annotated Dataset for Software Quality Analysis." In Simpósio Brasileiro de Engenharia de Software. Sociedade Brasileira de Computação, 2024. http://dx.doi.org/10.5753/sbes.2024.3432.

Full text
Abstract:
The source code quality level attained during the development phase is an important factor in increasing costs in later stages of software development. Among the most detrimental quality problems are code smells, which are violations of both programming principles and good practices that negatively affect the maintainability and evolution of computer programs. Much effort has been put into creating tools for code smell detection over the last decades. A promising approach relies on machine learning (ML) algorithms for automated smell detection. Those algorithms usually need datasets with labeled instances pointing to the presence/absence of smells in programming constructs such as classes and methods. Despite a good number of studies using ML for code smell detection, there is a lack of studies adopting this approach for programming languages other than Java. Even widely popular languages like JavaScript have few or no studies covering the usage of ML models for smell detection despite lexical, structural, and paradigm differences when compared to Java. A symptom of the lack of such studies in JavaScript is the absence of standard code smell datasets for this language in the literature. This work presents a new dataset for code smell detection in JavaScript software focused on detecting God Class and Long Method, two of the most prevalent and harmful code smells. We describe the strategy used for the dataset construction, its characteristics, and a few preliminary experiments using our dataset, along with ML models for code smell detection.
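As an illustration of how such a labeled dataset is typically consumed, the sketch below trains a classifier on hypothetical method-level metrics to flag Long Method instances; the metric names, labels, and data are synthetic and not taken from the dataset described in the paper.

```python
# Illustrative sketch of ML-based code smell detection: train a classifier on labeled
# method-level metrics to flag Long Method instances. Metrics and labels are synthetic.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
# Hypothetical metrics per method: [lines_of_code, cyclomatic_complexity, parameter_count]
X = rng.integers([5, 1, 0], [300, 30, 10], size=(500, 3)).astype(float)
y = ((X[:, 0] > 120) & (X[:, 1] > 10)).astype(int)   # synthetic "Long Method" label

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test),
                            target_names=["clean", "long_method"]))
```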
APA, Harvard, Vancouver, ISO, and other styles
9

Yan, Bin, Wenxuan Zhu, Bin Gao, Guanlin Ye, and Yinghui Tian. "Numerical Analysis of the Effect of Multidirectional Load on the Bearing Capacity of Suction Bucket Foundation." In ASME 2023 42nd International Conference on Ocean, Offshore and Arctic Engineering. American Society of Mechanical Engineers, 2023. http://dx.doi.org/10.1115/omae2023-105425.

Full text
Abstract:
Abstract With the development of clean energy, the development of offshore wind and tidal energy is being emphasized. In deeper waters, shared anchored foundations are becoming the preferred form of foundation for future energy development. Multiple floating structures are simultaneously anchored by a single anchor foundation, thereby reducing the number of anchor foundations used and saving construction costs. The anchor and the seabed form a coupled system that needs to withstand various loads, including wind, waves, currents and their combined effects. Under wind and wave loads, a foundation can be loaded in one direction and unloaded in another due to the phase difference of multi-directional loads or failure of one anchor system, which can have a great impact on the normal operation of the foundation. In this paper, a suction bucket foundation is investigated as an anchor of shared anchor foundations. The effect of the magnitude of the loads in two different directions and the angle between them on the load carrying capacity was investigated. The finite element software named DBLEAVES-X with an advanced constitutive model was used in the research. According to the results, when the suction bucket foundation is subjected to multidirectional loading, its bearing capacity decreases, and the decrease is influenced by the angle between two different directional loads, the load level, and the relative density of the soil.
APA, Harvard, Vancouver, ISO, and other styles
10

Cheng, Zhong, Rongqiang Xu, Jianbing Chen, et al. "Rapid Development of Multi-Source Heterogeneous Drilling Data Service System." In SPE/IADC Middle East Drilling Technology Conference and Exhibition. SPE, 2021. http://dx.doi.org/10.2118/202199-ms.

Full text
Abstract:
A digital oil and gas field is a highly complex integrated information system, and with the continuous expansion of business scale and needs, oil companies constantly raise new and higher requirements for digital transformation. Previous system construction was multi-phase, multi-vendor, multi-technology and multi-method, resulting in data silos and fragmentation. The result of these data management problems is that decisions are often made using incomplete information. Even when the desired data is accessible, requirements for gathering and formatting it may limit the amount of analysis performed before a timely decision must be made. Therefore, through the use of advanced computer technologies such as big data, cloud computing and the IoT (internet of things), it has become our current goal to build an integrated data platform and provide unified data services to improve the company's bottom line. As part of the digital oilfield, offshore drilling operations are one of the potential areas where data processing and advanced analytics technology can be used to increase revenue, lower costs, and reduce risks. Building a data mining and analytics engine that uses multiple sources of drilling data is a difficult challenge. The workflow of data processing and the timeliness of the analysis are major considerations for developing a data service solution. Most current analytical engines require more than one tool to form a complete system, so adopting an integrated system that combines all required tools will significantly help an organization address the above challenges in a timely manner. This paper provides a technical overview of the offshore drilling data service system currently developed and deployed. The data service system consists of four subsystems: the static data management system, including structured data (job reports) and unstructured data (design documentation and research reports); the real-time data management system; the third-party software data management system, integrating major industry software databases; and the cloud-based data visualization application system, providing dynamic analysis results for timely optimization of operations. Through a unified logical data model, the system provides quick access to third-party software data and application support; these subsystems are fully integrated and interact with each other as microservices, providing a one-stop solution for real-time drilling optimization and monitoring. This data service system has become a powerful decision support tool for the drilling operations team. The lessons learned and experience gained from the system services presented here provide valuable guidance for future demands of E&P and the industrial revolution.
APA, Harvard, Vancouver, ISO, and other styles
