
Journal articles on the topic 'McCabe Cyclomatic'

Consult the top 15 journal articles for your research on the topic 'McCabe Cyclomatic.'

1

Vilar, Rodrigo A., Anderson A. Lima, Hyggo O. Almeida, and Angelo Perkusich. "Unanticipated Software Evolution: Evaluating the Impact on Development Cost and Quality." International Journal of Software Engineering and Knowledge Engineering 25, no. 09n10 (2015): 1727–31. http://dx.doi.org/10.1142/s0218194015710072.

Full text
Abstract:
Unanticipated Software Evolution (USE) techniques enable developers to easily change any element of the software without being obligated to anticipate and isolate extension points. However, we have not found empirical validations of the impact of USE on development cost and quality. In this work, we design and execute an experiment for USE in order to compare its resulting metrics (time, lines of code, test coverage, and complexity), using OO systems as a baseline. Thirty undergraduate students were subjects in this experiment. The results suggest that USE has a significant impact on the lines-of-code and complexity metrics, reducing the number of lines changed and the McCabe cyclomatic complexity during software evolution.
APA, Harvard, Vancouver, ISO, and other styles
2

Al-Batah, Mohammad Subhi, Nouh Alhindawi, Rami Malkawi, and Ahmad Al Zuraiqi. "Hybrid Technique for Complexity Analysis for Java Code." International Journal of Software Innovation 7, no. 3 (2019): 118–33. http://dx.doi.org/10.4018/ijsi.2019070107.

Abstract:
Software complexity can be defined as the degree of difficulty in the analysis, testing, design, and implementation of software. Reducing model complexity typically has a significant impact on maintenance activities. Many metrics have been used to measure the complexity of source code, such as Halstead, McCabe Cyclomatic, Lines of Code, and the Maintainability Index. This article proposes a hybrid module that combines two theories, Halstead's and McCabe's, to analyze code written in Java. The module provides a mechanism to better evaluate the proficiency level of programmers, and gives managers a tool to evaluate programming levels and their improvement over time by tracking differences in the complexity of the code. If the program's complexity level is low, the programmer's professionalism level is high; conversely, if the complexity level is high, the professionalism level is likely low. The results of the conducted experiments show that the proposed approach gives a highly accurate evaluation of the systems under study.
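Several of the entries in this list report McCabe cyclomatic complexity. For readers unfamiliar with the metric, the sketch below (not taken from any of the cited tools) approximates it for Python code by counting decision points with the standard-library `ast` module; real analyzers also count boolean sub-expressions, `case` labels, and other branch sources.

```python
import ast

# Node types treated as decision points, i.e. constructs that add an
# independent path through the code. This is a simplified reading of
# McCabe's metric (M = decision points + 1 for a single-entry routine).
_DECISIONS = (ast.If, ast.For, ast.While, ast.IfExp, ast.ExceptHandler, ast.BoolOp)

def cyclomatic_complexity(source: str) -> int:
    """Approximate McCabe cyclomatic complexity of a Python snippet."""
    tree = ast.parse(source)
    decisions = sum(isinstance(node, _DECISIONS) for node in ast.walk(tree))
    return decisions + 1

snippet = """
def classify(x):
    if x < 0:
        return "negative"
    elif x == 0:
        return "zero"
    return "positive"
"""
print(cyclomatic_complexity(snippet))  # two if-branches -> 3
```

For structured, single-entry code this decision-point count agrees with McCabe's graph-based definition M = E − N + 2P, which is why most practical tools count branches rather than build the control-flow graph explicitly.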
3

Cheng, Po-Hsun, Li-Wei Chen, and Chia-Hsuan Lin. "A Customizable No-Code Realistic Motion Editor for VRM-Based Avatars." Sustainability 15, no. 2 (2023): 1182. http://dx.doi.org/10.3390/su15021182.

Abstract:
Avatar actions can be captured using certain gesture sensors or can be predefined by game designers through desktop applications. In other words, an online avatar editor could be necessary to specify detailed actions for people who are not game creators. Our research team proposed a web-based toolset, myKLA, to construct and design avatar actions with editor and player features. The goal of myKLA is to help users define the required behaviors of avatars within a set time frame without writing code. We used cyber–physical system theory in a software reconstruction initiative. Additionally, an exchangeable JSON file format for predefining the avatar actions was opened and shared here. Furthermore, the cyclomatic complexity of the main code blocks in our toolset was measured and changed using the McCabe approach to fine-tune the performance. An algorithm was proposed for quickly calculating an integrated activity diagram from several sub-activity diagrams. Our research showed that it is easy to create an avatar head and embed it in other web-based applications for additional interaction utilization. Therefore, our findings will be useful in creating and designing new educational tools.
4

Sanusi, B.A., S.O. Olabiyisi, A.O. Afolabi, and A.O. Olowoye. "Development of an Enhanced Automated Software Complexity Measurement System." Journal of Advances in Computational Intelligence Theory 1, no. 3 (2020): 1–11. https://doi.org/10.5281/zenodo.3597631.

Abstract:
Code complexity measures can be used to predict critical information about the reliability, testability, and maintainability of software systems from automatic measurement of the source code. Existing automated code complexity measurement is performed using a commercially available code analysis tool called QA-C, which analyzes the complexity of C code, runs on Solaris, and does not measure the defect rate of the source code. Therefore, this paper aimed at developing an enhanced automated system that evaluates the code complexity of C-family programming languages and computes the defect rate. The existing code-based complexity metrics (the Source Lines of Code metric, the McCabe Cyclomatic Complexity metric, and the Halstead Complexity Metrics) were studied and implemented so as to extend the existing schemes. The system was built following the waterfall model: requirements gathering, system design, coding, testing, and maintenance. It was developed in the Visual Studio Integrated Development Environment (2019) using the C# programming language, the .NET framework, and MySQL Server for database design. The performance of the system was tested using black-box testing to examine its functionality and quality. The results of the evaluation showed that the system produced functionality of 100, 100, 75, 75, and 100%, and quality of 100, 100, 75, 75, and 100%, for source code written in the C++, C, Python, C#, and JavaScript programming languages respectively. Hence, the tool helps software developers view the quality of their code in terms of code metrics. All data concerning the measured source code is documented and stored for maintenance and for possible future development.
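The Halstead measures mentioned in this entry are defined by simple closed-form formulas over operator and operand counts. The sketch below computes the classic volume, difficulty, and effort values; the counts in the example are invented for illustration, not taken from the paper.

```python
import math

def halstead_measures(n1: int, n2: int, N1: int, N2: int):
    """Compute classic Halstead measures.

    n1/n2: number of distinct operators/operands;
    N1/N2: total occurrences of operators/operands.
    """
    vocabulary = n1 + n2          # eta = eta1 + eta2
    length = N1 + N2              # N = N1 + N2
    volume = length * math.log2(vocabulary)   # V = N * log2(eta)
    difficulty = (n1 / 2) * (N2 / n2)         # D = (eta1/2) * (N2/eta2)
    effort = difficulty * volume              # E = D * V
    return volume, difficulty, effort

# A hypothetical snippet with 4 distinct operators, 3 distinct operands,
# 7 operator occurrences, and 6 operand occurrences:
v, d, e = halstead_measures(4, 3, 7, 6)
print(round(v, 1), round(d, 1))  # 36.5 4.0
```

Tools like the one described in this entry obtain the four counts by tokenizing the source and classifying each token as an operator or operand before applying these formulas.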
5

Хомяков, И. А. "Novel Approach for software metrics Sharing." Южно-Сибирский научный вестник, no. 5(39) (October 31, 2021): 133–37. http://dx.doi.org/10.25699/sssb.2021.39.5.007.

Abstract:
Software metrics collection is a fundamental activity in almost every empirical software engineering study. Although many tools exist to collect this data, it still takes a considerable amount of time. In addition, almost all researchers collect essentially the same data (e.g., CK metrics, McCabe Cyclomatic Complexity, etc.) from essentially the same sources (e.g., well-known open-source projects). So much duplicated work within the community reduces the time that researchers can spend on the most valuable part of their research: developing new theories and models and evaluating them empirically. In this paper, we propose a novel approach for collecting and sharing software metrics data that allows researchers to collaborate and reduces the amount of wasted effort in the software engineering community. To this end, the paper proposes the Software Metrics Exchange Format (SMEF) and a REST API for collecting, storing, and sharing software metrics data.
6

Singh, Gurdev, Satinderjit Singh, and Monika Monga. "Code Comprehending Measure (CCM)." INTERNATIONAL JOURNAL OF COMPUTERS & TECHNOLOGY 2, no. 1 (2012): 9–14. http://dx.doi.org/10.24297/ijct.v2i1.6733.

Abstract:
Software complexity plays a vital role in the life cycle of software. Many metrics have been proposed in the past, such as LOC, McCabe's cyclomatic measure, Halstead's measures, and cognitive measures. This paper proposes a new method to measure software complexity that takes into account not only the internal structure of the algorithm, in terms of the total cognitive weights of the basic control structures, but also the flow of data between the basic control structures and a data volume factor (variables and operators) used within each basic control structure. Preliminary tests show that this metric is independent of the existing measures. A comparison with some existing measures has been made to demonstrate the robustness of the new metric.
7

LYTVYNOV, Oleksandr, Dmytro HRUZIN, and Maksym FROLOV. "ON THE MIGRATION OF DOMAIN DRIVEN DESIGN TO CQRS WITH EVENT SOURCING SOFTWARE ARCHITECTURE." Information Technology: Computer Science, Software Engineering and Cyber Security, no. 1 (June 12, 2024): 50–60. http://dx.doi.org/10.32782/it/2024-1-7.

Abstract:
The article addresses the issue of migrating applications, particularly those following the Domain-Driven Design (DDD) architecture, to the Command Query Responsibility Segregation (CQRS) paradigm with Event Sourcing. Long-standing systems often struggle with inflexible, outdated architectures and dependencies, leading to increased maintenance costs. The paper examines the advantages of DDD and proposes CQRS as a viable alternative, focusing on improving productivity and scalability. The main objective of the work is to assess a secure path for migrating a project from the DDD architecture to the CQRS and Event Sourcing architecture and to determine a migration roadmap. The article describes an experiment in which a test project is migrated, evaluating the time, effort, and results of the migration. The research methodology includes evaluating complexity using McCabe's Cyclomatic Complexity metric and assessing performance through the execution time of system methods. The experiment is conducted on a typical project, a task-tracking system. The results of implementing CQRS show a fourfold increase in the number of classes and a 50% increase in the number of lines of code. However, this increase is justified, as it improves modularity, transparency, and manageability during development, ultimately facilitating system maintenance and significantly enhancing overall system productivity. Notably, the overall cyclomatic complexity of the system remains almost unchanged. In summary, the article examines the assessment of migrating a project from the DDD architecture to CQRS and Event Sourcing, combining theoretical findings with practical experimentation. It provides valuable insights into the advantages, disadvantages, and challenges of implementing the CQRS architecture in complex information systems.
8

Alakmeh, Tarek, David Reich, Lena Jäger, and Thomas Fritz. "Predicting Code Comprehension: A Novel Approach to Align Human Gaze with Code using Deep Neural Networks." Proceedings of the ACM on Software Engineering 1, FSE (2024): 1982–2004. http://dx.doi.org/10.1145/3660795.

Abstract:
The better the code quality and the less complex the code, the easier it is for software developers to comprehend and evolve it. Yet, how do we best detect quality concerns in the code? Existing measures to assess code quality, such as McCabe’s cyclomatic complexity, are decades old and neglect the human aspect. Research has shown that considering how a developer reads and experiences the code can be an indicator of its quality. In our research, we built on these insights and designed, trained, and evaluated the first deep neural network that aligns a developer’s eye gaze with the code tokens the developer looks at to predict code comprehension and perceived difficulty. To train and analyze our approach, we performed an experiment in which 27 participants worked on a range of 16 short code comprehension tasks while we collected fine-grained gaze data using an eye tracker. The results of our evaluation show that our deep neural sequence model that integrates both the human gaze and the stimulus code, can predict (a) code comprehension and (b) the perceived code difficulty significantly better than current state-of-the-art reference methods. We also show that aligning human gaze with code leads to better performance than models that rely solely on either code or human gaze. We discuss potential applications and propose future work to build better human-inclusive code evaluation systems.
9

Hnatkowska, Bogumiła, and Bartosz Krych. "Comparing the understandability of iteration mechanisms over Collections in Java." Foundations of Computing and Decision Sciences 48, no. 1 (2023): 19–37. http://dx.doi.org/10.2478/fcds-2023-0002.

Abstract:
Source code understandability is a desirable quality factor affecting long-term code maintenance. Understandability of source code can be assessed in a variety of ways, including subjective evaluation of code fragments (perceived understandability), correctness, and response time to tasks performed. It can also be assessed using various source code metrics, such as cyclomatic complexity or cognitive complexity. Programming languages are evolving, giving programmers new ways to do the same things, e.g., iterating over collections. Functional solutions (lambda expressions and streams) are added to typical imperative constructs like iterators or for-each statements. This research aims to check if there is a correlation between perceived understandability, understandability measured by task correctness, and understandability predicted by source code metrics for typical tasks that require iteration over collections implemented in Java. The answer is based on the results of an experiment. The experiment involved 99 participants of varying ages, declared Java knowledge, and seniority measured in years. Functional code was perceived as the most understandable, but only in one case was the subjective assessment confirmed by the correctness of answers. In two examples with the highest perceived understandability, streams received the worst correctness scores. Cognitive complexity and McCabe's complexity had the lowest values in all tasks for the functional approach, but, unfortunately, they did not correlate with answer correctness. The main finding is that the functional approach to collection manipulation is the best choice for the filter-map-reduce idiom and its alternatives (e.g., filter-only). It should not be used in more complex tasks, especially those with higher complexity metrics.
10

Yakovyna, Vitaliy S., and Ivan I. Symets. "Towards a software defect proneness model: feature selection." Applied Aspects of Information Technology 4, no. 4 (2021): 354–65. http://dx.doi.org/10.15276/aait.04.2021.5.

Abstract:
This article focuses on improving static models of software reliability by using machine learning methods to select the software code metrics that most strongly affect reliability. The study used a merged dataset from the PROMISE Software Engineering repository, which contained data on the testing of software modules of five programs and twenty-one code metrics. For the prepared sample, the most important features affecting software code quality were selected using the following feature selection methods: Boruta, Stepwise Selection, Exhaustive Feature Selection, Random Forest Importance, LightGBM Importance, Genetic Algorithms, Principal Component Analysis, and the Xverse Python package. Based on voting over the results of these feature selection methods, a static (deterministic) model of software reliability has been built, which establishes the relationship between the probability of a defect in a software module and the metrics of its code. This model includes code metrics such as the branch count of a program, McCabe's lines of code and cyclomatic complexity, and Halstead's total number of operators and operands, intelligence, volume, and effort. A comparison of the effectiveness of the different feature selection methods was carried out, in particular a study of the effect of the feature selection method on classification accuracy using the following classifiers: Random Forest, Support Vector Machine, k-Nearest Neighbors, Decision Tree, AdaBoost, and Gradient Boosting. Using any of the feature selection methods increases classification accuracy by at least ten percent compared to the original dataset, which confirms the importance of this procedure for predicting software defects from metric datasets that contain a significant number of highly correlated software code metrics. The best forecast accuracy for most classifiers was reached using the set of features obtained from the proposed static model of software reliability. In addition, separate methods such as Autoencoder, Exhaustive Feature Selection, and Principal Component Analysis can also be used with an insignificant loss of classification and prediction accuracy.
11

Ching, Kin Keong, Tieng Wei Koh, Azim Abd. Ghani Abdul, and Yatim Sharif Khaironi. "Towards the Use of Software Product Metrics as an Indicator for Measuring Mobile Applications Power Consumption." October 1, 2015. https://doi.org/10.5281/zenodo.1109802.

Abstract:
Maintaining the factory-default battery endurance rate over time while supporting a huge number of running applications on energy-restricted mobile devices has created a new challenge for mobile application developers. While trying to meet customers' unlimited expectations, developers are barely aware of how efficiently the application itself uses energy. Thus, developers need a set of valid energy consumption indicators to assist them in developing energy-saving applications. In this paper, we present several software product metrics that can be used as indicators to measure the energy consumption of Android-based mobile applications in the early design stage. In particular, Trepn Profiler (a power profiling tool for Qualcomm processors) was used to collect data on mobile application power consumption, which was then analyzed against the 23 software metrics in this preliminary study. The results show that McCabe cyclomatic complexity, number of parameters, nested block depth, number of methods, weighted methods per class, number of classes, total lines of code, and method lines have a direct relationship with the power consumption of a mobile application.
12

Kafura, Dennis. "Reflections on McCabe’s Cyclomatic Complexity." IEEE Transactions on Software Engineering, 2025, 1–6. https://doi.org/10.1109/tse.2025.3534580.

13

Cheng, Po‐Hsun, Li‐Wei Chen, Pin‐Rong Chen, Shih‐Yueh Chao, and Tzu‐Yin Liu. "Customizing Avatars to Enable No‐Code Telerehabilitation Movement Therapy." Computer Animation and Virtual Worlds 36, no. 1 (2025). https://doi.org/10.1002/cav.70005.

Abstract:
While numerous motion capture tools have emerged, solutions tailored for code-free environments are scarcer. An intuitive no-code interface promises greater accessibility for diverse user groups. Guided by this premise, the myKLA2 toolkit incorporates integrated decoder, editor, and player components tailored for code-free motion development workflows. Leveraging cyber-physical system principles, the software enables users to customize digital avatars by selecting from a large repository of sharable 3D VRM models. Additionally, McCabe's cyclomatic complexity analysis facilitated iterative refactoring to optimize system capabilities and toolset performance. Advanced joint tracking and motion capture algorithms enable precise avatar animation mirroring user movements in real time. A human-readable JSON format facilitates configurable behavior encoding and exchange. This research demonstrates the system's potential as a versatile no-code toolkit for rapid development and design across healthcare, education, and other domains. By automating core tasks like motion tracking configuration, sensor calibration, and basic quantitative assessments, this tool promises to streamline therapist workflows in telerehabilitation.
14

Doneva, Rositsa, Silvia Gaftandzhieva, Zhelyana Doneva, and Nevena Staevsky. "Software Quality Assessment Tool Based on Meta-Models." May 30, 2015. https://doi.org/10.5281/zenodo.399304.

Abstract:
In the software industry it is indisputably essential to control the quality of produced software systems, in terms of capabilities for easy maintenance, reuse, portability, and others, in order to ensure reliability in software development. It is also clear, however, that such control is very difficult to achieve through 'manual' quality management. There are a number of approaches for software quality assurance, based typically on software quality models (e.g., ISO 9126 and McCall's, Boehm's, and Dromey's models) and software quality metrics (e.g., LOC, McCabe's cyclomatic complexity, Halstead's metric, and object-oriented metrics) for the assessment of various quality characteristics. Since the emergence of software quality assurance as a field in software engineering, researchers have been looking for ways to automatically assess and manage the quality of software systems. This paper presents the conceptual design of a comprehensive solution for automating the software quality assessment process. The designed software tool allows the definition of software quality models based on standards and enables the matching of criteria of a software quality model to appropriate software quality metrics. The automatic definition and application of software quality models and metrics is based on relevant meta-models, proposed in the paper and supported by the software tool.
15

Lavazza, Luigi, Sandro Morasca, and Marco Gatto. "An empirical study on software understandability and its dependence on code characteristics." Empirical Software Engineering 28, no. 6 (2023). http://dx.doi.org/10.1007/s10664-023-10396-7.

Abstract:
Context: Insufficient code understandability makes software difficult to inspect and maintain and is a primary cause of software development cost. Several source code measures may be used to identify difficult-to-understand code, including well-known ones such as Lines of Code and McCabe's Cyclomatic Complexity, and novel ones, such as Cognitive Complexity. Objective: We investigate whether and to what extent source code measures, individually or together, are correlated with code understandability. Method: We carried out an empirical study with students who were asked to carry out realistic maintenance tasks on methods from real-life Open Source Software projects. We collected several data items, including the time needed to correctly complete the maintenance tasks, which we used to quantify method understandability. We investigated the presence of correlations between the collected code measures and code understandability by using several Machine Learning techniques. Results: We obtained models of code understandability using one or two code measures. However, the obtained models are not very accurate, the average prediction error being around 30%. Conclusions: Based on our empirical study, it does not appear possible to build an understandability model based on structural code measures alone. Specifically, even the newly introduced Cognitive Complexity measure does not seem able to fulfill the promise of providing substantial improvements over existing measures, at least as far as code understandability prediction is concerned. It seems that, to obtain models of code understandability of acceptable accuracy, process measures should be used, possibly together with new source code measures that are better related to code understandability.