Journal articles on the topic 'Software classification'

Consult the top 50 journal articles for your research on the topic 'Software classification.'

1

Konet, I. M., and T. P. Pylypiuk. "Pedagogical Software for Physics: Classification, Analysis, Creation Tools." Collection of scientific papers of Kamianets-Podilskyi National Ivan Ohiienko University. Pedagogical series, no. 24 (November 29, 2018): 63–66. http://dx.doi.org/10.32626/2307-4507.2018-24.63-66.

2

Warintarawej, P., M. Huchard, M. Lafourcade, A. Laurent, and P. Pompidor. "Software understanding: Automatic classification of software identifiers." Intelligent Data Analysis 19, no. 4 (2015): 761–78. http://dx.doi.org/10.3233/ida-150744.

3

Feoktistov, Aleksandr G., Aleksandr S. Korsukov, and Olga Yu Basharina. "Classification of Scalable Software Complexes." Proceedings of Irkutsk State Technical University 21, no. 11 (2017): 92–103. http://dx.doi.org/10.21285/1814-3520-2017-11-92-103.

4

Singh, Mradul. "Software Bug Classification and Assignment." IOSR Journal of Engineering 3, no. 7 (2013): 01–03. http://dx.doi.org/10.9790/3021-03730103.

5

Shelekhov, V. I. "Program Classification in Software Engineering." PROGRAMMNAYA INGENERIA 7, no. 12 (2016): 531–38. http://dx.doi.org/10.17587/prin.7.531-538.

6

Ji, Shujuan, and Xiaohong Bao. "Research on Software Hazard Classification." Procedia Engineering 80 (2014): 407–14. http://dx.doi.org/10.1016/j.proeng.2014.09.098.

7

Negishi, Hirokazu. "Tentative classification of global software." Behaviour & Information Technology 4, no. 2 (1985): 163–70. http://dx.doi.org/10.1080/01449298508901796.

8

Tichý, Lubomír. "JUICE, software for vegetation classification." Journal of Vegetation Science 13, no. 3 (2002): 451–53. http://dx.doi.org/10.1111/j.1654-1103.2002.tb02069.x.

9

Yusof, Yuhanis, and Qusai Hussein Ra. "Automation of Software Artifacts Classification." International Journal of Soft Computing 5, no. 3 (2010): 109–15. http://dx.doi.org/10.3923/ijscomp.2010.109.115.

10

Lavrischeva, E. M. "Classification of software engineering disciplines." Cybernetics and Systems Analysis 44, no. 6 (2008): 791–96. http://dx.doi.org/10.1007/s10559-008-9053-5.

11

Walters, Suvarsha, and T. B. Rajashekar. "Mapping of Two Schemes of Classification for Software Classification." Cataloging & Classification Quarterly 41, no. 1 (2005): 163–82. http://dx.doi.org/10.1300/j104v41n01_08.

12

Lloyd, Ian. "UK: Classification of Software as Goods?" Computer Law Review International 20, no. 3 (2019): 84–87. http://dx.doi.org/10.9785/cri-2019-200306.

13

Khusidman, Vitaly, and David M. Bridgeland. "A Classification Framework for Software Reuse." Journal of Object Technology 5, no. 6 (2006): 43. http://dx.doi.org/10.5381/jot.2006.5.6.a1.

14

Ebert, Christof. "Fuzzy classification for software criticality analysis." Expert Systems with Applications 11, no. 3 (1996): 323–42. http://dx.doi.org/10.1016/s0957-4174(96)00048-6.

15

Khoshgoftaar, Taghi M., Naeem Seliya, and Angela Herzberg. "Resource-oriented software quality classification models." Journal of Systems and Software 76, no. 2 (2005): 111–26. http://dx.doi.org/10.1016/j.jss.2004.04.027.

16

Shock, Robert C., and Thomas C. Hartrum. "A classification scheme for software modules." Journal of Systems and Software 42, no. 1 (1998): 29–44. http://dx.doi.org/10.1016/s0164-1212(98)00005-3.

17

Laitinen, Kari. "Document classification for software quality systems." ACM SIGSOFT Software Engineering Notes 17, no. 4 (1992): 32–39. http://dx.doi.org/10.1145/141874.141882.

18

Prieto-Díaz, Rubén. "Implementing faceted classification for software reuse." Communications of the ACM 34, no. 5 (1991): 88–97. http://dx.doi.org/10.1145/103167.103176.

19

Kozhevnikova, G. P., and A. A. Stognii. "Facet classification of software quality measures." Cybernetics 25, no. 4 (1990): 546–63. http://dx.doi.org/10.1007/bf01070378.

20

Hong, Euy-Seok. "Software Quality Classification using Bayesian Classifier." Journal of the Korea society of IT services 11, no. 1 (2012): 211–21. http://dx.doi.org/10.9716/kits.2012.11.1.211.

21

Afzal, Wasif, Richard Torkar, and Robert Feldt. "Resampling Methods in Software Quality Classification." International Journal of Software Engineering and Knowledge Engineering 22, no. 02 (2012): 203–23. http://dx.doi.org/10.1142/s0218194012400037.

Abstract:
In the presence of a number of algorithms for classification and prediction in software engineering, there is a need for a systematic way of assessing their performance. The performance assessment is typically done by some form of partitioning or resampling of the original data to alleviate biased estimation. For predictive and classification studies in software engineering, there is a lack of definitive advice on the most appropriate resampling method to use. This is seen as one of the contributing factors for not being able to draw general conclusions on which modeling technique or set of predictor variables is the most appropriate. Furthermore, the use of a variety of resampling methods makes it impossible to perform any formal meta-analysis of the primary study results. Therefore, it is desirable to examine the influence of various resampling methods and to quantify possible differences. Objective and method: This study empirically compares five common resampling methods (hold-out validation, repeated random sub-sampling, 10-fold cross-validation, leave-one-out cross-validation and non-parametric bootstrapping) using 8 publicly available data sets with genetic programming (GP) and multiple linear regression (MLR) as software quality classification approaches. Location of (PF, PD) pairs in the ROC (receiver operating characteristics) space and area under an ROC curve (AUC) are used as accuracy indicators. Results: The results show that in terms of the location of (PF, PD) pairs in the ROC space, bootstrapping results are in the preferred region for 3 of the 8 data sets for GP and for 4 of the 8 data sets for MLR. Based on the AUC measure, there are no significant differences between the different resampling methods using GP and MLR. Conclusion: Certain data set properties may be responsible for the insignificant differences between the resampling methods based on AUC. These include imbalanced data sets, insignificant predictor variables and high-dimensional data sets. With the current selection of data sets and classification techniques, bootstrapping is a preferred method based on the location of (PF, PD) pair data in the ROC space. Hold-out validation is not a good choice for comparatively smaller data sets, where leave-one-out cross-validation (LOOCV) performs better. For comparatively larger data sets, 10-fold cross-validation performs better than LOOCV.
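
For readers who want to reproduce this kind of comparison, the following is a minimal sketch (not the study's code) of evaluating several resampling methods by AUC with scikit-learn. The synthetic data and the logistic-regression stand-in are assumptions; the study itself uses genetic programming and multiple linear regression on eight public data sets.

```python
# Hypothetical sketch: comparing resampling strategies by AUC, in the spirit of the
# study above (which uses GP and MLR; a logistic-regression stand-in is used here).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import (
    KFold, LeaveOneOut, ShuffleSplit, cross_val_score, train_test_split)
from sklearn.utils import resample

X, y = make_classification(n_samples=300, n_features=10, random_state=0)  # placeholder data
clf = LogisticRegression(max_iter=1000)

# Hold-out validation
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
holdout_auc = roc_auc_score(y_te, clf.fit(X_tr, y_tr).predict_proba(X_te)[:, 1])

# Repeated random sub-sampling and 10-fold cross-validation
subsample_auc = cross_val_score(clf, X, y, scoring="roc_auc",
                                cv=ShuffleSplit(n_splits=10, test_size=0.3, random_state=0)).mean()
cv10_auc = cross_val_score(clf, X, y, scoring="roc_auc",
                           cv=KFold(n_splits=10, shuffle=True, random_state=0)).mean()

# Leave-one-out: pool the single-sample predictions, then compute one AUC
loo_scores = np.array([clf.fit(X[tr], y[tr]).predict_proba(X[te])[:, 1][0]
                       for tr, te in LeaveOneOut().split(X)])
loo_auc = roc_auc_score(y, loo_scores)

# Non-parametric bootstrap: train on a bootstrap sample, test on the out-of-bag samples
boot_aucs = []
for seed in range(20):
    idx = resample(np.arange(len(y)), random_state=seed)
    oob = np.setdiff1d(np.arange(len(y)), idx)
    boot_aucs.append(roc_auc_score(y[oob], clf.fit(X[idx], y[idx]).predict_proba(X[oob])[:, 1]))

print(holdout_auc, subsample_auc, cv10_auc, loo_auc, np.mean(boot_aucs))
```
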
22

J, Shankar Murthy. "Network Software Vulnerability Identifier using J48 decision tree algorithm." International Journal for Research in Applied Science and Engineering Technology 9, no. 8 (2021): 1889–92. http://dx.doi.org/10.22214/ijraset.2021.37685.

Abstract:
Software vulnerabilities are the primary causes of many security issues in the modern era. When a vulnerability is exploited by malicious assaults, it substantially jeopardizes the system's security and may result in catastrophic losses. As a result, automatic classification methods are useful for successfully managing software vulnerabilities, improving system security performance, and lowering the chance of the system being attacked and destroyed. In the software industry and in the field of cyber security, the ever-increasing number of publicly reported security flaws has become a major source of concern. Because software security flaws play such a significant part in cyber security attacks, security experts are conducting an increasing number of vulnerability classification studies. This project predicts software vulnerability, that is, whether the software on a device is authorized and which users scan the system multiple times; the J48 decision tree algorithm is used to identify vulnerabilities. Keywords: malicious assaults, catastrophic losses, security flaws, cyber security, vulnerability classification.
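
As an illustration of the classification step, here is a hedged sketch using scikit-learn's DecisionTreeClassifier with an entropy criterion as a rough analogue of J48 (Weka's C4.5 implementation, which the paper actually uses). The input file and feature names are hypothetical placeholders, not artifacts from the paper.

```python
# Illustrative sketch only: J48 is Weka's C4.5; scikit-learn's DecisionTreeClassifier
# with criterion="entropy" is a rough, CART-based analogue.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import classification_report

df = pd.read_csv("scan_records.csv")                # hypothetical dataset of scan/software records
X = df[["scan_count", "is_signed", "port_count"]]   # hypothetical features
y = df["vulnerable"]                                # hypothetical label: 1 = vulnerable/unauthorized

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)
tree = DecisionTreeClassifier(criterion="entropy", max_depth=5, random_state=42)
tree.fit(X_tr, y_tr)
print(classification_report(y_te, tree.predict(X_te)))
```
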
23

Gupta, Sangita, and Suma V. "Application and Assessment of Classification Techniques on Programmers’ Performance in Software Industry." Journal of Software 10, no. 9 (2015): 1096–103. http://dx.doi.org/10.17706//jsw.10.9.1096-1103.

24

Audigé, Laurent, Carl-Peter Cornelius, Christoph Kunz, Carlos H. Buitrago-Téllez, and Joachim Prein. "The Comprehensive AOCMF Classification System: Classification and Documentation within AOCOIAC Software." Craniomaxillofacial Trauma & Reconstruction 7, no. 1_suppl (2014): 114–22. http://dx.doi.org/10.1055/s-0034-1389564.

Abstract:
The AOCMF Classification Group developed a hierarchical three-level craniomaxillofacial (CMF) fracture classification system. The fundamental level 1 distinguishes four major anatomical units: the mandible (code 91), midface (code 92), skull base (code 93) and cranial vault (code 94); level 2 relates to the location of the fractures within defined topographical regions of each unit; level 3 relates to fracture morphology in these regions regarding fragmentation, displacement, and bone defects, as well as the involvement of specific anatomical structures. The resulting CMF classification system has been implemented in the AO comprehensive injury automatic classifier (AOCOIAC) software, allowing for fracture classification as well as clinical documentation of individual cases including a selected sample of diagnostic images. This tutorial highlights the main features of the software. In addition, a series of illustrative case examples is made available electronically for viewing and editing.
25

Simons, Anthony J. H. "The Theory of Classification, Part 8: Classification and Inheritance." Journal of Object Technology 2, no. 4 (2003): 55. http://dx.doi.org/10.5381/jot.2003.2.4.c4.

26

Dias Canedo, Edna, and Bruno Cordeiro Mendes. "Software Requirements Classification Using Machine Learning Algorithms." Entropy 22, no. 9 (2020): 1057. http://dx.doi.org/10.3390/e22091057.

Abstract:
The correct classification of requirements has become an essential task within software engineering. This study compares text feature extraction techniques and machine learning algorithms for the problem of requirements classification, answering two major questions: "Which works best (Bag of Words (BoW) vs. Term Frequency–Inverse Document Frequency (TF-IDF) vs. Chi Squared (CHI2)) for classifying Software Requirements into Functional Requirements (FR) and Non-Functional Requirements (NF), and the sub-classes of Non-Functional Requirements?" and "Which machine learning algorithm provides the best performance for the requirements classification task?". The data used to perform the research was PROMISE_exp, a recently created dataset that expands the already known PROMISE repository, a repository that contains labeled software requirements. All the documents from the database were cleaned with a set of normalization steps, and the feature extraction and feature selection techniques used were BoW, TF-IDF and CHI2, respectively. The algorithms used for classification were Logistic Regression (LR), Support Vector Machine (SVM), Multinomial Naive Bayes (MNB) and k-Nearest Neighbors (kNN). The novelty of our work is the data used to perform the experiment, the details of the steps used to reproduce the classification, and the comparison between BoW, TF-IDF and CHI2 for this repository, which has not been covered by other studies. This work will serve as a reference for the software engineering community and will help other researchers to understand the requirements classification process. We noticed that the use of TF-IDF followed by LR had the best classification result for differentiating requirements, with an F-measure of 0.91 in binary classification (tying with SVM in that case), 0.74 in NF classification and 0.78 in general classification. As future work we intend to compare more algorithms and new ways to improve the precision of our models.
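
A minimal sketch of the best-performing combination the abstract reports (TF-IDF features with Logistic Regression). The dataset file and column names are assumptions; the paper works with the PROMISE_exp repository, whose exact export format is not described here.

```python
# Hedged sketch: TF-IDF + Logistic Regression pipeline for requirements classification.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

df = pd.read_csv("promise_exp.csv")                  # hypothetical export of the PROMISE_exp dataset
texts, labels = df["RequirementText"], df["Class"]   # assumed column names, e.g. "F" vs. "NF"

pipeline = make_pipeline(
    TfidfVectorizer(lowercase=True, stop_words="english"),  # normalization + TF-IDF features
    LogisticRegression(max_iter=1000),                      # LR classifier
)
# Macro F1 over cross-validation folds, analogous to the F-measure reported in the abstract
scores = cross_val_score(pipeline, texts, labels, cv=5, scoring="f1_macro")
print(scores.mean())
```
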
27

Shaout, Adnan, and Juan C. Garcia. "Fuzzy Rule Base System for Software Classification." International Journal of Computer Science and Information Technology 5, no. 3 (2013): 1–21. http://dx.doi.org/10.5121/ijcsit.2013.5301.

28

Farhady, Hamid, and Akihiro Nakao. "Tag-Based Classification for Software-Defined Networking." International Journal of Grid and High Performance Computing 7, no. 1 (2015): 1–14. http://dx.doi.org/10.4018/ijghpc.2015010101.

Abstract:
Software-Defined Networking (SDN) increasingly attracts attention from both researchers and industry. Most current SDN packet-processing approaches classify packets by matching a set of fields in the packet against a flow table and then applying an action to the packet. The authors argue that it is possible to simplify this mechanism using single-field classification and reduce the overhead. They propose a tag-based packet classification architecture to reduce filtering and flow management overhead, and then show how to use this extra capacity to perform application-layer classification for different purposes. Evaluation results are presented to indicate the effectiveness of the proposal. Furthermore, the authors implemented a customized user-defined SDN action that addresses some security challenges of one of their previous works and report its performance evaluation results.
29

Khoshgoftaar, Taghi M., Lofton A. Bullard, and Kehan Gao. "A Rule-Based Software Quality Classification Model." International Journal of Reliability, Quality and Safety Engineering 15, no. 03 (2008): 247–59. http://dx.doi.org/10.1142/s0218539308003064.

Abstract:
A rule-based classification model is presented to identify high-risk software modules. It utilizes the power of rough set theory to reduce the number of attributes, and the equal frequency binning algorithm to partition the values of the attributes. As a result, a set of conjuncted Boolean predicates are formed. The model is inherently influenced by the practical needs of the system being modeled, thus allowing the analyst to determine which rules are to be used for classifying the fault-prone and not fault-prone modules. The proposed model also enables the analyst to control the number of rules that constitute the model. Empirical validation of the model is accomplished through a case study of a large legacy telecommunications system. The ease of rule interpretation and the transparency of the functional aspects of the model are clearly demonstrated. It is concluded that the new model is effective in achieving the software quality classification.
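
To illustrate the equal-frequency binning step, which turns numeric module metrics into Boolean predicates that such rules can conjoin, here is a small sketch under assumed metric values; the rough-set attribute reduction and the rule-selection mechanics of the model are not shown.

```python
# Sketch of equal-frequency binning producing Boolean predicates for conjunctive rules.
# Metric names, values, and bin count are illustrative assumptions.
import numpy as np

def equal_frequency_bins(values, n_bins=3):
    """Return bin edges so that each bin holds roughly the same number of modules."""
    quantiles = np.linspace(0, 1, n_bins + 1)
    return np.quantile(values, quantiles)

loc = np.array([120, 45, 300, 80, 510, 60, 220, 150])   # hypothetical lines-of-code metric
edges = equal_frequency_bins(loc, n_bins=3)

# A Boolean predicate such as "loc falls in the highest bin", usable inside a conjunctive
# rule like: IF loc_high AND churn_high THEN fault-prone.
loc_high = loc >= edges[-2]
print(edges, loc_high)
```
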
30

Graham, Dorothy R. "Software testing tools: A new classification scheme." Software Testing, Verification and Reliability 1, no. 3 (1991): 17–34. http://dx.doi.org/10.1002/stvr.4370010304.

31

Crnkovic, Ivica, Severine Sentilles, A. Vulgarakis, and Michel R. V. Chaudron. "A Classification Framework for Software Component Models." IEEE Transactions on Software Engineering 37, no. 5 (2011): 593–615. http://dx.doi.org/10.1109/tse.2010.83.

32

Cox, M. G. "A classification of mathematical software for metrology." ISA Transactions 33, no. 4 (1994): 383–89. http://dx.doi.org/10.1016/0019-0578(94)90021-3.

33

Ebert, Christof. "Classification techniques for metric-based software development." Software Quality Journal 5, no. 4 (1996): 255–72. http://dx.doi.org/10.1007/bf00209184.

34

Nazir, Shah, Sara Shahzad, and Lala Septem Riza. "Birthmark-Based Software Classification Using Rough Sets." Arabian Journal for Science and Engineering 42, no. 2 (2016): 859–71. http://dx.doi.org/10.1007/s13369-016-2371-4.

35

Iwata, Kazunori, Toyoshiro Nakashima, Yoshiyuki Anan, and Naohiro Ishii. "Machine Learning Classification to Effort Estimation for Embedded Software Development Projects." International Journal of Software Innovation 5, no. 4 (2017): 19–32. http://dx.doi.org/10.4018/ijsi.2017100102.

Abstract:
This paper discusses the effect of classification in estimating the amount of effort (in man-days) associated with code development. Estimating the effort requirements for new software projects is especially important. As outliers are harmful to the estimation, they are excluded from many estimation models. However, such outliers can be identified in practice once the projects are completed, and so they should not be excluded during the creation of models and when estimating the required effort. This paper presents classifications for embedded software development projects using an artificial neural network (ANN) and a support vector machine. After defining the classifications, effort estimation models are created for each class using linear regression, an ANN, and a form of support vector regression. Evaluation experiments are carried out to compare the estimation accuracy of the model both with and without the classifications using 10-fold cross-validation. In addition, the Games-Howell test with one-way analysis of variance is performed to consider statistically significant evidence.
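
The two-stage idea (classify a project first, then estimate effort with a model fitted per class) can be sketched as follows. The synthetic data, the MLP classifier, and the per-class linear regression are illustrative assumptions rather than the paper's exact configuration.

```python
# Hedged sketch: classification followed by per-class effort-estimation regressors.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                      # placeholder project features
effort = np.abs(X @ rng.normal(size=5)) * 30 + 10  # placeholder effort in man-days
cls = (effort > np.median(effort)).astype(int)     # placeholder project class labels

X_tr, X_te, y_tr, y_te, e_tr, e_te = train_test_split(X, cls, effort, random_state=0)

classifier = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X_tr, y_tr)
regressors = {c: LinearRegression().fit(X_tr[y_tr == c], e_tr[y_tr == c]) for c in np.unique(y_tr)}

pred_cls = classifier.predict(X_te)
pred_effort = np.array([regressors[c].predict(x.reshape(1, -1))[0] for c, x in zip(pred_cls, X_te)])
print(np.mean(np.abs(pred_effort - e_te)))  # mean absolute error in man-days
```
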
36

Hadi, Wa'el, Qasem A. Al-Radaideh, and Samer Alhawari. "Integrating associative rule-based classification with Naïve Bayes for text classification." Applied Soft Computing 69 (August 2018): 344–56. http://dx.doi.org/10.1016/j.asoc.2018.04.056.

37

Mao, Chengsheng, Lijuan Lu, and Bin Hu. "Local probabilistic model for Bayesian classification: A generalized local classification model." Applied Soft Computing 93 (August 2020): 106379. http://dx.doi.org/10.1016/j.asoc.2020.106379.

38

Kim, Yesol, Seong-je Cho, Sangchul Han, and Ilsun You. "A software classification scheme using binary-level characteristics for efficient software filtering." Soft Computing 22, no. 2 (2016): 595–606. http://dx.doi.org/10.1007/s00500-016-2357-x.

39

Mughal, Muhammad Hussain, and Zubair Ahmed Shaikh. "Software Atom: An approach towards software components structuring to improve reusability." Sukkur IBA Journal of Computing and Mathematical Sciences 1, no. 2 (2017): 66. http://dx.doi.org/10.30537/sjcms.v1i2.31.

Abstract:
The diversity of application domains calls for a sustainable classification scheme for a rapidly growing software repository. Atomic reusable software components are articulated to improve component reusability in a volatile industry. Numerous approaches to software classification have been proposed over the past decades, each with limitations related to coupling and cohesion. In this paper, we propose a novel approach that organizes software around radical (atomic) functionalities to improve reusability. We analyze the semantics of elements in the Periodic Table used in chemistry to design our classification approach, present it as a tree-based classification that reduces the search-space complexity of the software repository, and refine it further with semantic search techniques. We developed a Globally Unique Identifier (GUID) for indexing functions and related components, and exploited the correlation between chemical elements and software elements to simulate a one-to-one mapping between them. Inspired by the chemical periodic table, we propose a software periodic table (SPT) representing atomic software components extracted from real application software. Parsing and extraction over the SPT-classified repository tree enable users to program their software by customizing the ingredients of their requirements. The classified repository of software ingredients helps users convey their requirements to software engineers and enables requirements engineers to develop rapid large-scale prototypes. Furthermore, the usability of the categorized repository is predicted based on user feedback. The proposed repository will be continuously fine-tuned based on utilization, and the SPT will be gradually optimized using ant colony optimization techniques. In short, this would help automate the software development process.
40

Al-Hamed, Shareef. "Comparison Between the Classical Classification and Digital Classification for Selected Samples of Igneous and Carbonate Rocks." Iraqi Geological Journal 54, no. 1C (2021): 16–29. http://dx.doi.org/10.46717/igj.54.1c.2ms-2021-03-22.

Abstract:
Because igneous rocks have widely varying chemical and mineralogical compositions, there are many ways to classify them. These classical, approved methods give a reliable classification and nomenclature of rocks, and some igneous rocks may also be classified by digital image processing to assist the classical methods. Five igneous samples were cut, prepared as thin sections, and polished for classification by classical methods and by digital image processing with ENVI software; part of each sample was also crushed for analysis of major oxides. The samples correspond to basic, mesocratic rocks according to the classical methods, in agreement with the ENVI results. Both the classical and ENVI classifications identify the samples as leucogabbros, except for sample G5, which ENVI classifies as gabbro. There is a clear similarity between the classical and ENVI classifications, so ENVI classification is a reliable aid to the classical methods in the nomenclature of igneous rocks, especially plutonic rocks, and it can also be applied to thin sections of volcanic rocks. ENVI classification was further applied to fifty thin sections of limestones to identify microfacies that had previously been classified by classical (optical) methods. According to the optical classification, the microfacies were classified as mudstone, wackestone, packstone, and grainstone; when digital classification was applied, no grainstone texture was found in these digital thin sections, and the true name of these microfacies is packstone. This highlights the strengths of digital image processing with ENVI software in contrast to the optical classification, which contained some mistakes in the nomenclature of these microfacies.
41

Yanto, Iwan Tri Riyadi. "Minimum Error Classification Clustering." International Journal of Software Engineering and Its Applications 7, no. 5 (2013): 221–32. http://dx.doi.org/10.14257/ijseia.2013.7.5.20.

42

Munkby, Gustav, and Sibylle Schupp. "Automating exception-safety classification." Science of Computer Programming 76, no. 4 (2011): 278–89. http://dx.doi.org/10.1016/j.scico.2008.06.004.

43

Yan, Yongquan, and Ping Guo. "Predicting Software Abnormal State by using Classification Algorithm." Journal of Database Management 27, no. 2 (2016): 49–65. http://dx.doi.org/10.4018/jdm.2016040103.

Abstract:
Software aging, also called smooth degradation or chronics, has been observed in long-running software applications, accompanied by performance degradation, hang/crash failures, or both. The key to the software aging problem is how to detect the occurrence of software aging quickly and accurately, which is difficult due to the long delay before aging appears. In this paper, two problems in software aging prediction are addressed: how to accurately find proper running-system variables to represent the system state, and how to predict the software aging state of a running software system with a low error rate. Firstly, the authors use the proposed stepwise forward selection algorithm and stepwise backward selection algorithm to find a proper subset of the variable set. Secondly, a classification algorithm is used to model the software aging process. Lastly, a t-test with k-fold cross-validation is used to compare the performance of two classification algorithms. In the experiments, the authors find that their proposed method is an efficient way to forecast software aging problems in advance.
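
A hedged sketch of a stepwise forward selection wrapper of the kind described above, using cross-validated accuracy as the selection criterion on synthetic data; the paper's exact algorithm and stopping rule may differ.

```python
# Sketch: greedy forward feature selection wrapped around a classifier.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=400, n_features=12, n_informative=4, random_state=1)

def forward_select(X, y, estimator, max_features=5):
    selected, remaining = [], list(range(X.shape[1]))
    best_score = 0.0
    while remaining and len(selected) < max_features:
        # Score each remaining candidate feature added to the current subset
        scores = {f: cross_val_score(estimator, X[:, selected + [f]], y, cv=5).mean()
                  for f in remaining}
        f_best = max(scores, key=scores.get)
        if scores[f_best] <= best_score:      # stop when no candidate improves accuracy
            break
        best_score = scores[f_best]
        selected.append(f_best)
        remaining.remove(f_best)
    return selected, best_score

print(forward_select(X, y, DecisionTreeClassifier(random_state=1)))
```
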
44

Pan, Wei-Feng, Bing Li, Bo Shao, and Peng He. "Service Classification and Recommendation Based on Software Networks." Chinese Journal of Computers 34, no. 12 (2012): 2355–69. http://dx.doi.org/10.3724/sp.j.1016.2011.02355.

45

Härer, S., M. Bernhardt, J. G. Corripio, and K. Schulz. "PRACTISE – Photo Rectification And ClassificaTIon SoftwarE (V.1.0)." Geoscientific Model Development 6, no. 3 (2013): 837–48. http://dx.doi.org/10.5194/gmd-6-837-2013.

Abstract:
Abstract. Terrestrial photography is a cost-effective and easy-to-use method for measuring and monitoring spatially distributed land surface variables. It can be used to continuously investigate remote and often inaccessible terrain. We focus on the observation of snow cover patterns in high mountainous areas. The high temporal and spatial resolution of the photographs have various applications, for example validating spatially distributed snow hydrological models. However, the analysis of a photograph requires a preceding georectification of the digital camera image. To accelerate and simplify the analysis, we have developed the "Photo Rectification And ClassificaTIon SoftwarE" (PRACTISE) that is available as a Matlab code. The routine requires a digital camera image, the camera location and its orientation, as well as a digital elevation model (DEM) as input. If the viewing orientation and position of the camera are not precisely known, an optional optimisation routine using ground control points (GCPs) helps to identify the missing parameters. PRACTISE also calculates a viewshed using the DEM and the camera position. The visible DEM pixels are utilised to georeference the photograph which is subsequently classified. The resulting georeferenced and classified image can be directly compared to other georeferenced data and can be used within any geoinformation system. The Matlab routine was tested using observations of the north-eastern slope of the Schneefernerkopf, Zugspitze, Germany. The results obtained show that PRACTISE is a fast and user-friendly tool, able to derive the microscale variability of snow cover extent in high alpine terrain, but can also easily be adapted to other land surface applications.
46

Härer, S., M. Bernhardt, and K. Schulz. "PRACTISE – Photo Rectification And ClassificaTIon SoftwarE (V.2.1)." Geoscientific Model Development 9, no. 1 (2016): 307–21. http://dx.doi.org/10.5194/gmd-9-307-2016.

Abstract:
Abstract. Terrestrial photography combined with the recently presented Photo Rectification And ClassificaTIon SoftwarE (PRACTISE V.1.0) has proven to be a valuable source to derive snow cover maps in a high temporal and spatial resolution. The areal coverage of the used digital photographs is however strongly limited. Satellite images on the other hand can cover larger areas but do show uncertainties with respect to the accurate detection of the snow covered area. This is especially the fact if user defined thresholds are needed, e.g. in case of the frequently used normalized-difference snow index (NDSI). The definition of this value is often not adequately defined by either a general value from literature or over the impression of the user, but not by reproducible independent information. PRACTISE V.2.1 addresses this important aspect and shows additional improvements. The Matlab-based software is now able to automatically process and detect snow cover in satellite images. A simultaneously captured camera-derived snow cover map is in this case utilized as in situ information for calibrating the NDSI threshold value. Moreover, an additional automatic snow cover classification, specifically developed to classify shadow-affected photographs, was included. The improved software was tested for photographs and Landsat 7 Enhanced Thematic Mapper (ETM+) as well as Landsat 8 Operational Land Imager (OLI) scenes in the Zugspitze massif (Germany). The results show that using terrestrial photography in combination with satellite imagery can lead to an objective, reproducible, and user-independent derivation of the NDSI threshold and the resulting snow cover map. The presented method is not limited to the sensor system or the threshold used in here but offers manifold application options for other scientific branches.
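
The NDSI-threshold calibration idea can be sketched as follows (this is not PRACTISE's Matlab code): compute the NDSI from green and shortwave-infrared bands and pick the threshold whose snow map agrees best with a co-registered, camera-derived snow mask. The band arrays and the mask below are random placeholders.

```python
# Illustrative sketch: calibrating an NDSI threshold against a camera-derived snow mask.
import numpy as np

def ndsi(green, swir):
    """Normalised-Difference Snow Index from green and shortwave-infrared reflectance."""
    return (green - swir) / (green + swir + 1e-12)

def calibrate_threshold(green, swir, camera_snow_mask, candidates=np.arange(0.0, 0.9, 0.01)):
    index = ndsi(green, swir)
    agreements = [np.mean((index > t) == camera_snow_mask) for t in candidates]
    return candidates[int(np.argmax(agreements))]

# Hypothetical usage with random placeholder rasters:
rng = np.random.default_rng(0)
green, swir = rng.uniform(0, 1, (100, 100)), rng.uniform(0, 1, (100, 100))
camera_snow_mask = ndsi(green, swir) > 0.4          # stand-in for the photo-derived mask
print(calibrate_threshold(green, swir, camera_snow_mask))
```
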
47

Härer, S., M. Bernhardt, J. G. Corripio, and K. Schulz. "PRACTISE – Photo Rectification And ClassificaTIon SoftwarE (V.1.0)." Geoscientific Model Development Discussions 6, no. 1 (2013): 171–202. http://dx.doi.org/10.5194/gmdd-6-171-2013.

Abstract:
Abstract. Terrestrial photography is a cost-effective and easy-to-use method to derive the status of spatially distributed land surface parameters. It can be used to continuously investigate remote and often inaccessible terrain. We focus on the observation of snow cover patterns in high mountainous areas. The high temporal and spatial resolution of the photographs have various applications, e.g. validating spatially distributed snow hydrological models. However, a one to one analysis of projected model results to photographs requires a preceding georectification of the digital camera images. To accelerate and simplify the analysis, we have developed the "Photo Rectification And ClassificaTIon SoftwarE" (PRACTISE) that is available as a Matlab code. The routine requires a digital camera image, the camera location and its orientation, as well as a digital elevation model (DEM) as input. In case of an unknown viewing orientation an optional optimisation routine using ground control points (GCPs) helps to identify the missing parameters. PRACTISE also calculates a viewshed using the DEM and the camera position and it projects the visible DEM pixels to the image plane where they are subsequently classified. The resulting projected and classified image can be directly compared to other projected data and can be used within any geoinformation system. The Matlab routine was tested using observations of the north western slope of the Schneefernerkopf, Zugspitze, Germany. The obtained results have shown that PRACTISE is a fast and user-friendly tool, able to derive the microscale variability of snow cover extent in high alpine terrain, but can also easily be adapted to other land surface applications.
48

Härer, S., M. Bernhardt, and K. Schulz. "PRACTISE – Photo Rectification And ClassificaTIon SoftwarE (V.2.0)." Geoscientific Model Development Discussions 8, no. 10 (2015): 8481–518. http://dx.doi.org/10.5194/gmdd-8-8481-2015.

Abstract:
Abstract. Terrestrial photography combined with the recently presented Photo Rectification And ClassificaTIon SoftwarE (PRACTISE V.1.0) has proven to be a valuable source to derive snow cover maps in a high temporal and spatial resolution. The areal coverage of the used digital photographs is however strongly limited. Satellite images on the other hand can cover larger areas but do show uncertainties with respect to the accurate detection of the snow covered area. This is especially the fact if user defined thresholds are needed e.g. in case of the frequently used Normalised-Difference Snow Index (NDSI). The definition of this value is often not adequately defined by either a general value from literature or over the impression of the user but not by reproducible independent information. PRACTISE V.2.0 addresses this important aspect and does show additional improvements. The Matlab based software is now able to automatically process and detect snow cover in satellite images. A simultaneously captured camera-derived snow cover map is in this case utilised as in-situ information for calibrating the NDSI threshold value. Moreover, an additional automatic snow cover classification, specifically developed to classify shadow-affected photographs was included. The improved software was tested for photographs and Landsat 7 Enhanced Thematic Mapper (ETM+) as well as Landsat 8 Operational Land Imager (OLI) scenes in the Zugspitze massif (Germany). The results have shown that using terrestrial photography in combination with satellite imagery can lead to an objective, reproducible and user-independent derivation of the NDSI threshold and the resulting snow cover map. The presented method is not limited to the sensor system or the threshold used in here but offers manifold application options for other scientific branches.
49

Karen, P., M. Števanec, V. Smerdu, E. Cvetko, L. Kubínová, and I. Eržen. "Software for muscle fibre type classification and analysis." European Journal of Histochemistry 53, no. 2 (2009): 11. http://dx.doi.org/10.4081/ejh.2009.e11.

50

Saleem, Nada, and Rasha Saeed. "Classification Software Engineering Documents Based on Hybrid Model." AL-Rafidain Journal of Computer Sciences and Mathematics 12, no. 2 (2018): 61–87. http://dx.doi.org/10.33899/csmj.2018.163582.
