Academic literature on the topic 'Wheat – Classification – Computer programs'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Wheat – Classification – Computer programs.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Wheat – Classification – Computer programs"

1

Parsadanova, Tatyana. "MODERN APPROACHES TO CLASSIFICATION OF TELEVISION CONTENT." Scientific and analytical journal Burganov House. The space of culture 16, no. 2 (June 10, 2020): 19–32. http://dx.doi.org/10.36340/2071-6818-2020-16-2-19-32.

Full text
Abstract:
Program classification is an important tool for records and marketing. Accessible, reliable, and internationally comparable data are needed in all aspects of work. This is not only a matter of program organisation but also a fundamental part of audience research, in terms of studying the relationship between the public and the programs. A typology can be created according to the motives and habits underlying the behavior of the viewer as a buyer with regard to their television preferences. One can first consider whether the viewer is watching TV carefully or in parallel with other things, constantly changing channels, or watching a selected channel continuously. The demographic approach is formed by derived indicators and estimates, which make it possible to comprehensively characterize the structure and movement of the population and its social and demographic processes. In studying television audiences, it is more common to deal with indicators such as the number of people, gender, age, marital status, level of education, profession, social status, and income. A TV set, a computer, a tablet, a smartphone - all this is now television. At home we watch television programs arranged on air according to the broadcasting grid; this is called linear viewing. However, we can also request the content we are interested in on any screen at any time, anywhere - this is non-linear viewing. Recently, in connection with the Covid-19 pandemic, even journalists have been broadcasting from home. It used to be just television, but nowadays the term "big television" has come into use. Television is primarily what it shows - television content that has certain characteristics. The usual division is based on the basic functions of television - informative, entertaining, and educational. However, a lot depends on the idea, thematic focus, genre structure, origin, format, and content. The division is necessary for a greater understanding of what exactly we intend to produce, according to the formula: there is an idea of what the producer wants to convey to the audience; then it is necessary to understand how it can be made and with what content it should be filled. This article deals with approaches to the classification of television programs.
APA, Harvard, Vancouver, ISO, and other styles
2

Uzhinskiy, A. V., G. A. Ososkov, P. V. Goncharov, A. V. Nechaevskiy, and A. A. Smetanin. "One-shot learning with triplet loss for vegetation classification tasks." Computer Optics 45, no. 4 (July 2021): 608–14. http://dx.doi.org/10.18287/2412-6179-co-856.

Full text
Abstract:
The triplet loss function is one of the options that can significantly improve the accuracy of one-shot learning tasks. Since 2015, many projects have used Siamese networks and this kind of loss for face recognition and object classification. In our research, we focused on two tasks related to vegetation. The first one is plant disease detection on 25 classes of five crops (grape, cotton, wheat, cucumbers, and corn). This task is motivated by the fact that harvest losses due to disease are a serious problem for both large farming operations and rural families. The second task is the identification of moss species (5 classes). Mosses are natural bioaccumulators of pollutants; therefore, they are used in environmental monitoring programs. The identification of moss species is an important step in sample preprocessing. In both tasks, we used self-collected image databases. We tried several deep learning architectures and approaches. Our Siamese network architecture with a triplet loss function and MobileNetV2 as a base network showed the most impressive results in both above-mentioned tasks. The average accuracy amounted to over 97.8% for plant disease detection and 97.6% for moss species classification.
APA, Harvard, Vancouver, ISO, and other styles
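The Siamese/triplet-loss setup described in the abstract above can be illustrated with a short sketch. This is not the authors' code: the embedding size, margin, optimizer settings, and data pipeline are assumptions; only the overall pattern (MobileNetV2 backbone, L2-normalised embeddings, triplet margin loss) follows the entry.

```python
import torch
import torch.nn as nn
from torchvision import models  # torchvision >= 0.13 assumed for the weights= argument

class EmbeddingNet(nn.Module):
    """MobileNetV2 feature extractor followed by a small projection head (sizes are assumptions)."""
    def __init__(self, embedding_dim=128):
        super().__init__()
        base = models.mobilenet_v2(weights=None)
        self.features = base.features                 # convolutional backbone
        self.pool = nn.AdaptiveAvgPool2d(1)            # global average pooling
        self.head = nn.Linear(base.last_channel, embedding_dim)

    def forward(self, x):
        x = self.pool(self.features(x)).flatten(1)
        return nn.functional.normalize(self.head(x), dim=1)  # L2-normalised embeddings

model = EmbeddingNet()
triplet_loss = nn.TripletMarginLoss(margin=0.2)        # margin value is an assumption
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(anchor, positive, negative):
    """One training step on an (anchor, positive, negative) batch of image tensors."""
    optimizer.zero_grad()
    loss = triplet_loss(model(anchor), model(positive), model(negative))
    loss.backward()
    optimizer.step()
    return loss.item()
```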
3

Eagles, H. A., G. J. Hollamby, N. N. Gororo, and R. F. Eastwood. "Estimation and utilisation of glutenin gene effects from the analysis of unbalanced data from wheat breeding programs." Australian Journal of Agricultural Research 53, no. 4 (2002): 367. http://dx.doi.org/10.1071/ar01074.

Full text
Abstract:
Glutenins are a major determinant of dough characteristics in wheat. These proteins are determined by genes at 6 loci (Glu genes), with multiple alleles present in most breeding programs. This study was conducted to determine whether estimates of allele effects for the important dough rheological characters, maximum dough resistance (Rmax) and dough extensibility, could be determined from aggregated data from southern Australian wheat breeding programs using statistical techniques appropriate for unbalanced data. From a 2-stage analysis of 3226 samples of 1926 cultivars and breeding lines, estimates of Rmax and extensibility effects were obtained, first for the lines, and then for 31 glutenin alleles. Glutenin genes did not determine flour protein concentration, and this character was used as a covariate. Rankings of the estimates of Rmax for the alleles were similar to the relative scores for dough strength reported from previous studies, providing strong evidence that the analysis of a large, unbalanced data set from applied wheat breeding programs can provide reliable estimates. All 2-way interactions between loci were present for 18 of the alleles. Analyses including interactions showed that epistasis was important for both Rmax and extensibility, especially between the Glu-B1 locus coding for high molecular weight glutenins and the Glu-A3 and Glu-B3 loci coding for low molecular weight glutenins. Because of the complexity of these interactions, similar values of Rmax and extensibility were predicted for diverse combinations of alleles. This implied that the practical application of glutenin genes in applied wheat breeding would be greatly enhanced by computer software which can predict dough rheology characteristics from glutenin allele classifications.
APA, Harvard, Vancouver, ISO, and other styles
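The closing sentence of the abstract above envisages software that predicts dough rheology from glutenin allele classifications. A purely hypothetical additive sketch of such a predictor is given below; every allele label, effect value, baseline, and protein slope is invented for illustration, and the epistatic interactions the paper reports would need extra interaction terms on top of this.

```python
# Hypothetical additive prediction of Rmax from a line's glutenin allele
# classification plus a flour-protein covariate. All numbers are invented.
ALLELE_EFFECTS_RMAX = {          # deviations from a baseline, illustrative only
    "Glu-A1a": 20.0, "Glu-A1b": 35.0,
    "Glu-B1a": 10.0, "Glu-B1i": 55.0,
    "Glu-D1a": 15.0, "Glu-D1d": 70.0,
    "Glu-A3b": 5.0,  "Glu-B3b": 12.0,
}
BASELINE_RMAX = 150.0            # hypothetical intercept
PROTEIN_SLOPE = 18.0             # hypothetical effect per % flour protein above 10%

def predict_rmax(alleles, flour_protein_pct):
    """Sum main allele effects; Glu-B1 x Glu-A3/Glu-B3 epistasis is ignored here."""
    main = sum(ALLELE_EFFECTS_RMAX.get(a, 0.0) for a in alleles)
    return BASELINE_RMAX + main + PROTEIN_SLOPE * (flour_protein_pct - 10.0)

print(predict_rmax(["Glu-B1i", "Glu-D1d", "Glu-A3b"], 11.5))
```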
4

Van Zandt, CDR Robert A. "THE OSRO CLASSIFICATION PROGRAM: WHAT IT IS AND WHAT IT IS NOT." International Oil Spill Conference Proceedings 1997, no. 1 (April 1, 1997): 453–57. http://dx.doi.org/10.7901/2169-3358-1997-1-453.

Full text
Abstract:
The Coast Guard's oil spill removal organization (OSRO) classification process underwent significant revision in late 1995. The revision was necessary to strengthen the program into a more credible and useful tool for facilitating preparation and review of vessel and facility response plans. The revised process is more closely linked to the response planning criteria that vessel and facility owners and operators are required to meet under the Oil Pollution Act of 1990. As a result, the process provides a better indication of an OSRO's capacity and potential to respond to and recover oil spills of various sizes. Data provided by each OSRO are being included in the computer-based national Response Resource Inventory (RRI). The paper describes the important features and limitations of the revised classification process and gives an interpretation of what the new classifications mean to response plan holders and reviewers. It also describes the method by which plan holders can use the computer-based RRI as a tool to conduct their own analysis of an OSRO's capacity to meet their specific planning requirements.
APA, Harvard, Vancouver, ISO, and other styles
5

Uhl, Joan E., William E. Wilkinson, and Connie S. Wilkinson. "Aching Backs? A Glimpse into the Hazards of Nursing." AAOHN Journal 35, no. 1 (January 1987): 13–17. http://dx.doi.org/10.1177/216507998703500103.

Full text
Abstract:
This study evaluates a combination of extant and collected data to determine the frequency of and exposure to back injuries reported by nursing employees at a Northwest Medical Center system. A major problem of interest, and the focus of this study, is whether there is objective evidence to support the commonly held belief that lifting patients is the main cause of back injuries experienced by nurses; whether job classification and worksite unit might be confounded with the back injuries reported; and which demographic characteristics of the nursing personnel, e.g., sex, age, job classification, and worksite unit, might be confounded with occupations at high risk for back injuries. Personnel records and injury report forms provided objective data for 659 registered nurses, licensed practical nurses, and nurse aides. Injury report forms providing data for 123 nursing personnel, filed during the most recent consecutive twelve-month period between January 1, 1982 and April 30, 1985, were abstracted, summarized, and analyzed for the number of back injuries reported using the DataBase III and SPSSx computer programs on an IBM-AT system. In addition, on-site observations of patient lifts were made for ten eight-hour shifts on 15 different occasions and at different worksites by a nurse-research analyst. These observation data were compared with self-report questionnaire responses representing over 54% of the total population of nurses within this medical center system. An inverse relationship in the reported numbers of patient lifts per shift was found between the observation and self-report data. The ratio of females to males reporting back injuries was 2.5 to 1, and the average age was 43 years, with greater numbers of those injured working on surgical and medical units and smaller numbers injured on psychiatry and long-term care units, in decreasing order. The chi-square test was used to compute associations between reported high and low numbers of lifts and the incidence of back injuries; these associations were found not to be significant. The t-test compared the observed and self-reported numbers of patient lifts and showed a significant (p < .001) difference in favor of self-report for the number of lifts per eight-hour shift. Results of this study will contribute to establishing the validity of patient lifting as a cause of back injuries and to further study of feasible and effective methods for evaluating back injuries and preventive interventions for nursing personnel who are at high risk of developing or sustaining back injury from any cause while on the job.
APA, Harvard, Vancouver, ISO, and other styles
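The study above relies on a chi-square test of association and a t-test on observed versus self-reported lift counts (originally run in DataBase III and SPSSx). A present-day equivalent of those two tests, on invented counts, might look like this:

```python
import numpy as np
from scipy import stats

# Illustrative 2x2 contingency table (counts are invented): rows = high/low
# reported lifts per shift, columns = back injury reported / not reported.
table = np.array([[18, 45],
                  [12, 48]])
chi2, p_chi, dof, expected = stats.chi2_contingency(table)

# Observed versus self-reported lifts per eight-hour shift (invented data).
observed = [14, 16, 12, 15, 13, 17, 14, 15, 16, 12]
self_reported = [22, 25, 20, 24, 23, 26, 21, 24, 25, 22]
t_stat, p_t = stats.ttest_ind(self_reported, observed)

print(f"chi-square p = {p_chi:.3f}, t-test p = {p_t:.3g}")
```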
6

Dawood, Abdul Majeed S., and Saad Salman Awad AL Maeeni. "The role of electronic auditing in verifying the principles and approaches of accounting measurement for financial instruments when adopting international financial reporting standards IFRS." Muthanna Journal of Administrative and Economic Sciences 11, no. 1 (May 5, 2021): 229–47. http://dx.doi.org/10.52113/6/2021-11/229-247.

Full text
Abstract:
Iraqi companies own a diversity of financial assets, which are measured and presented in different ways according to the classification of these assets under international financial reporting standards; re-measuring these assets (shares) affects the income statement and the financial position of companies according to the change in the fair value of the shares. The auditor uses multiple auditing methods for the purpose of verifying the measurement and presentation of these assets, including the use of electronic means in auditing (computer auditing). The aim of the research is to clarify what electronic auditing is and to explain and analyse the measurement requirements in accordance with the International Financial Reporting Standard (IFRS 9), in addition to preparing an electronic audit program that helps the auditor to verify the re-measurement and presentation of the companies' financial assets. Two mixed joint stock companies (the Iraqi Company for Manufacturing and Marketing Dates and the National Company for Tourism Investments and Real Estate Projects) are adopted as a field of application by analysing their financial data for the year 2018 and conducting a simulation of the outcome of activity and the financial position of each company using an electronic audit program. This is to show the difference between the actual results and the results expected to be shown in light of the measurement principles adopted under international financial reporting standards. The researchers have concluded that the use of electronic means helps the auditor to conduct the audit process for the various financial assets due to their multiplicity and diversity, in addition to the diversity of their market values; it also enables the auditor to identify errors and indicate their impact on the income statement and budget and thus reach a final opinion on the financial statements. They further point to the need for professional organizations and relevant authorities to promote the use of electronic means in auditing operations for the purpose of speed and accuracy in completing them, to prepare electronic programs for various auditing purposes in line with the activity of the bodies subject to auditing, and to train auditors in the use of such programs.
APA, Harvard, Vancouver, ISO, and other styles
7

Ion, I., R. Arhire, and M. Macesanu. "Programs complexity: comparative analysis hierarchy, classification." ACM SIGPLAN Notices 22, no. 4 (April 1987): 94–102. http://dx.doi.org/10.1145/24714.24726.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Shi, Yong, YingJie Tian, XiaoJun Chen, and Peng Zhang. "Regularized multiple criteria linear programs for classification." Science in China Series F: Information Sciences 52, no. 10 (October 2009): 1812–20. http://dx.doi.org/10.1007/s11432-009-0126-5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Zhang, Zhengzhi, Sarath Kodagoda, David Ruiz, Jayantha Katupitiya, and Gamini Dissanayake. "Classification of Bidens in wheat farms." International Journal of Computer Applications in Technology 39, no. 1/2/3 (2010): 123. http://dx.doi.org/10.1504/ijcat.2010.034740.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Verma, Surendra P., and M. Abdelaly Rivera-Gómez. "Computer programs for the classification and nomenclature of igneous rocks." Episodes 36, no. 2 (June 1, 2013): 115–24. http://dx.doi.org/10.18814/epiiugs/2013/v36i2/005.

Full text
APA, Harvard, Vancouver, ISO, and other styles
More sources

Dissertations / Theses on the topic "Wheat – Classification – Computer programs"

1

Abdalla, Lena (Lena A.). "Classification of computer programs in the Scratch online community." Thesis, Massachusetts Institute of Technology, 2020. https://hdl.handle.net/1721.1/129862.

Full text
Abstract:
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, February, 2020
Cataloged from student-submitted PDF of thesis.
Includes bibliographical references (pages 133-136).
Scratch is a graphical programming platform that empowers children to create computer programs and realize their ideas. Although the Scratch online community is filled with a variety of diverse projects, many of these projects also share similarities. For example, they tend to fall into certain categories, including games, animations, stories, and more. Throughout this thesis, I describe the application of Natural Language Processing (NLP) techniques to vectorize and classify Scratch projects by type. This effort included constructing a labeled dataset of 873 Scratch projects and their corresponding types, to be used for training a supervised classifier model. This dataset was constructed through a collective process of consensus-based annotation by experts. To realize the goal of classifying Scratch projects by type, I first train an unsupervised model of meaningful vector representations for Scratch blocks based on the composition of 500,000 projects. Using the unsupervised model as a basis for representing Scratch blocks, I then train a supervised classifier model that categorizes Scratch projects by type into one of: "animation", "game", and "other". After an extensive hyperparameter tuning process, I am able to train a classifier model with an F1 Score of 0.737. I include in this paper an in-depth analysis of the unsupervised and supervised models, and explore the different elements that were learned during training. Overall, I demonstrate that NLP techniques can be used in the classification of computer programs to a reasonable level of accuracy.
by Lena Abdalla.
M. Eng.
M.Eng. Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science
APA, Harvard, Vancouver, ISO, and other styles
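A rough sketch of the two-stage approach described above (unsupervised block embeddings followed by a supervised type classifier) is shown below. It is not the thesis's pipeline: the block opcodes, embedding dimension, mean-pooling step, and logistic-regression classifier are all assumptions.

```python
import numpy as np
from gensim.models import Word2Vec
from sklearn.linear_model import LogisticRegression

# Each project is represented as the sequence of its block opcodes; the opcodes
# and labels below are placeholders, not the thesis's actual dataset.
projects = [["event_whenflagclicked", "motion_movesteps", "control_forever"],
            ["event_whenflagclicked", "looks_say", "control_wait"]]
labels = ["game", "animation"]

# Unsupervised step: learn block embeddings from the (unlabeled) project corpus.
w2v = Word2Vec(sentences=projects, vector_size=64, window=5, min_count=1, epochs=20)

def project_vector(blocks):
    """Average the embeddings of a project's blocks (one simple pooling choice)."""
    vecs = [w2v.wv[b] for b in blocks if b in w2v.wv]
    return np.mean(vecs, axis=0) if vecs else np.zeros(w2v.wv.vector_size)

# Supervised step: classify projects by type from the pooled vectors.
X = np.vstack([project_vector(p) for p in projects])
clf = LogisticRegression(max_iter=1000).fit(X, labels)
print(clf.predict(X))
```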
2

Shahzad, Raja Muhammad Khurram. "Classification of Potentially Unwanted Programs Using Supervised Learning." Licentiate thesis, Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-00548.

Full text
Abstract:
Malicious software authors have shifted their focus from illegal and clearly malicious software to potentially unwanted programs (PUPs) to earn revenue. PUPs blur the border between legitimate and illegitimate programs and thus fall into a grey zone. Existing anti-virus and anti-spyware software are in many instances unable to detect previously unseen or zero-day attacks and to separate PUPs from legitimate software. Many tools also require frequent updates to be effective. By predicting the class of a particular piece of software, users can get support before taking the decision to install it. This Licentiate thesis introduces approaches to distinguish PUPs from legitimate software based on the supervised learning of file features represented as n-grams. The overall research method applied in this thesis is experimentation. For these experiments, malicious software applications were obtained from anti-malware industrial partners. The legitimate software applications were collected from various online repositories. The general steps of supervised learning, from data preparation (n-gram generation) to evaluation, were followed. Different data representations, such as byte codes and operation codes, with different configurations, such as fixed-size, variable-length, and overlap, were investigated to generate different n-gram sizes. The experimental variables were controlled to measure the correlation between n-gram size, the number of features required for optimal training, and classifier performance. The thesis results suggest that, despite the subtle difference between legitimate software and PUPs, this type of software can be classified accurately with low false positive and false negative rates. The thesis results further suggest an optimal size of operation-code-based n-grams for data representation. Finally, the results indicate that classification accuracy can be increased by using a customized ensemble learner that makes use of multiple representations of the data set. The investigated approaches can be implemented as a software tool that requires less frequent updates than existing commercial tools.
APA, Harvard, Vancouver, ISO, and other styles
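The n-gram representation plus supervised learner described above can be approximated with standard tooling. The sketch below is only indicative: the hex-string samples are invented, and the n-gram size and random-forest classifier are assumptions rather than the thesis's chosen configuration.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline

# Toy byte sequences rendered as hex strings; a real experiment would read the
# binaries of PUP and legitimate applications (these samples are invented).
samples = ["4d5a900003000000", "4d5a500002000000",
           "7f454c4601010100", "7f454c4602010100"]
labels = ["pup", "pup", "legitimate", "legitimate"]

# Fixed-size, overlapping character n-grams stand in for the byte n-gram features
# described above; the n-gram size and the classifier choice are assumptions.
model = make_pipeline(
    CountVectorizer(analyzer="char", ngram_range=(4, 4)),
    RandomForestClassifier(n_estimators=100, random_state=0),
)
model.fit(samples, labels)
print(model.predict(["4d5a900001000000"]))
```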
3

Torri, Stephen A. Hamilton John A. "Generic reverse engineering architecture with compiler and compression classification components." Auburn, Ala, 2009. http://hdl.handle.net/10415/1583.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Janidlo, Peter S. "Rule-based expert systems and tonal chord classification." Virtual Press, 1999. http://liblink.bsu.edu/uhtbin/catkey/1137841.

Full text
Abstract:
The purpose of the proposed thesis is to: 1. Define expert systems and discuss various implementation techniques for the components of expert systems. This includes discussion on knowledge representation, inference methods, methods for dealing with uncertainty, and methods of explanation. Specifically, the focus will be on the implementation of rule-based expert systems; 2. Apply selected expert system techniques to a case study. The case study will be a rule-based expert system in Prolog to recognize and identify musical chords from tonal harmony. The system will have a general knowledge base containing fundamental rules about chord construction. It will also contain some knowledge that will allow it to deduce non-trivial chords. Furthermore, it will contain procedures to deal with uncertainty and explanation; 3. Explain general concepts about music theory and tonal chord classification to put the case study in context; and 4. Discuss the limitations of expert systems based on the results of the case study and the current literature.
Department of Computer Science
APA, Harvard, Vancouver, ISO, and other styles
5

Pester, Matthias. "Visualization Tools for 2D and 3D Finite Element Programs - User's Manual." Universitätsbibliothek Chemnitz, 2006. http://nbn-resolving.de/urn:nbn:de:swb:ch1-200600436.

Full text
Abstract:
This paper deals with the visualization of numerical results as a very convenient method to understand and evaluate a solution which has been calculated as a set of millions of numerical values. One of the central research fields of the Chemnitz SFB 393 is the analysis of parallel numerical algorithms for large systems of linear equations arising from differential equations (e.g. in solid and fluid mechanics). Solving large problems on massively parallel computers makes it less and less feasible to store numerical data from the distributed memory of the parallel computer to disk for later postprocessing. However, the developer of algorithms is interested in an on-line response of his algorithms. Both the visual and the numerical response of the running program may be evaluated by the user when deciding how to interactively switch or adjust certain parameters that may influence the solution process. The paper gives a survey of current programmer and user interfaces that are used in our various 2D and 3D parallel finite element programs for the visualization of the solution.
APA, Harvard, Vancouver, ISO, and other styles
6

Kim, Kye Hyun 1956. "Classification of environmental hydrologic behaviors in Northeastern United States." Thesis, The University of Arizona, 1989. http://hdl.handle.net/10150/277083.

Full text
Abstract:
Environmental response to acidic deposition occurs through the vehicle of water movement in the ecosystem. As part of the environmental studies of acidic deposition in the ecosystem, an output-based hydrologic classification was derived from basin hydrologies based on the distribution of baseflow, snowmelt, and direct runoff sources. Because of the differences in flow paths and exposure duration, these components were assumed to represent distinct geochemical responses. As a first step, user-friendly software was developed to calculate the baseflow based on the separation of annual hydrographs. It also generates the hydrograph for visual analysis using a trial separation slope. After the software was completed, about 1200 stream flow gauging stations in the Northeastern U.S. were accessed for flow separation and other hydrologic characteristics. At the final stage, based on the output from the streamflow analysis, cluster analysis was performed to classify the streamflow behaviors in terms of acidic inflow. The output from the cluster analysis shows that, based on regional baseflow properties, the resulting subregion boundaries are more efficient than the current regional boundaries used by the U.S. Environmental Protection Agency (U.S.E.P.A.) for environmental management with respect to acidic deposition.
APA, Harvard, Vancouver, ISO, and other styles
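The thesis above separates baseflow from annual hydrographs and then clusters stations by their flow behaviour. The sketch below uses a one-parameter digital filter and k-means as common stand-ins for the thesis's interactive separation software and its cluster analysis; the flow records and the filter constant are invented.

```python
import numpy as np
from sklearn.cluster import KMeans

def baseflow_fraction(flow, alpha=0.925):
    """Single-pass digital-filter baseflow separation (Lyne-Hollick style);
    the filter constant and the single pass are simplifying assumptions."""
    quick = np.zeros_like(flow, dtype=float)
    for t in range(1, len(flow)):
        quick[t] = alpha * quick[t - 1] + 0.5 * (1 + alpha) * (flow[t] - flow[t - 1])
        quick[t] = min(max(quick[t], 0.0), flow[t])   # keep quickflow within [0, flow]
    baseflow = flow - quick
    return baseflow.sum() / flow.sum()

# Invented daily-flow records for a few gauging stations.
stations = {
    "A": np.abs(np.random.default_rng(0).normal(10, 3, 365)),
    "B": np.abs(np.random.default_rng(1).normal(10, 8, 365)),
    "C": np.abs(np.random.default_rng(2).normal(10, 1, 365)),
}
features = np.array([[baseflow_fraction(q)] for q in stations.values()])
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
print(dict(zip(stations, clusters)))
```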
7

Sun, Hongliang, and University of Lethbridge Faculty of Arts and Science. "Implementation of a classification algorithm for institutional analysis." Thesis, Lethbridge, Alta. : University of Lethbridge, Faculty of Arts and Science, 2008, 2008. http://hdl.handle.net/10133/738.

Full text
Abstract:
The report presents an implementation of a classification algorithm for the Institutional Analysis Project. The algorithm used in this project is the decision tree classification algorithm, which uses a gain ratio attribute selection method. The algorithm discovers the hidden rules from the student records, which are used to predict whether or not other students are at risk of dropping out. It is shown that special rules exist in different data sets, each with its own natural hidden knowledge. In other words, the rules that are obtained depend on the data that is used for classification. In our preliminary experiments, we show that between 55 and 78 percent of data with unknown class labels can be correctly classified, using the rules obtained from data whose class labels are known. We feel this is acceptable, given the large number of records, attributes, and attribute values that are used in the experiments. The project results are useful for large data set analysis.
viii, 38 leaves ; 29 cm. --
APA, Harvard, Vancouver, ISO, and other styles
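The gain ratio attribute selection mentioned above (information gain divided by split information, as in C4.5-style decision trees) can be computed directly; the toy student records below are invented.

```python
import math
from collections import Counter

def entropy(labels):
    counts = Counter(labels)
    total = len(labels)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def gain_ratio(rows, labels, attribute):
    """Information gain of `attribute` divided by its split information."""
    total = len(rows)
    partitions = {}
    for row, label in zip(rows, labels):
        partitions.setdefault(row[attribute], []).append(label)
    remainder = sum(len(part) / total * entropy(part) for part in partitions.values())
    split_info = entropy([row[attribute] for row in rows])
    gain = entropy(labels) - remainder
    return gain / split_info if split_info > 0 else 0.0

# Toy student records (invented attributes and labels).
rows = [{"gpa": "low", "load": "full"}, {"gpa": "high", "load": "full"},
        {"gpa": "low", "load": "part"}, {"gpa": "high", "load": "part"}]
labels = ["at_risk", "retained", "at_risk", "retained"]
print(gain_ratio(rows, labels, "gpa"), gain_ratio(rows, labels, "load"))
```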
8

Klinka, Karel, Pal Varga, and Christine Chourmouzis. "Select CD : computer support system for making tree species and reproduction cutting decisions in the coastal forest of BC." Forest Sciences Department, University of British Columbia, 1999. http://hdl.handle.net/2429/672.

Full text
Abstract:
"SELECT CD is a site-specific, decision-support tool for selecting ecologically viable tree species, reproduction cuttings, and regeneration methods in the coastal forest (CDF, CWH, and MH zones). SELECT CD integrates information from several existing guides with new information from literature and recent research into a single, user-friendly resource. SELECT CD also includes a rich library of visuals and an illustrated glossary of technical terms."
APA, Harvard, Vancouver, ISO, and other styles
9

Riss, Joëlle. "Principes de stéréologie des formes en pétrographie quantitative." Orléans, 1988. http://www.theses.fr/1988ORLE2015.

Full text
Abstract:
The different grain populations of deformed and crystallized monomineral polycrystalline aggregates are classified by shape according to the Blaschke diagram. Trivalent polyhedra with 13 and 14 faces are studied using software that computes the coordinates of their vertices. The characteristics of a polyhedron can also be deduced from its coordinates, as can numerical simulations of linear and planar intercepts of the isolated polyhedron and of the aggregate it generates if it can be stacked.
APA, Harvard, Vancouver, ISO, and other styles
10

Schmalzried, Terry Eugene. "Classification of wheat kernels by machine-vision measurement." 1985. http://hdl.handle.net/2097/27530.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Wheat – Classification – Computer programs"

1

Elias, Peter. Computer assisted standard occupational coding. London: H.M.S.O., 1993.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Orlóci, László. Conapack: Program for canonical analysis of classification tables. The Hague, Netherlands: SPB Academic, 1992.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Dewey for Windows guide: Records, searching, and number building. Albany, N.Y: Forest Press, 1998.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Speaker Classification. Springer, 2007.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Christian, Müller, ed. Speaker classification. Berlin: Springer, 2007.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Automatic classification of spectra from the Infrared Astronomical Satellite (IRAS). [Washington, D.C.]: NASA Scientific and Technical Information Division, 1989.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Ending Spam: Bayesian Content Filtering and the Art of Statistical Language Classification. No Starch Press, 2005.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
8

Intelligent Text Categorization And Clustering. Springer, 2008.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
9

Food and Agriculture Organization of the United Nations., ed. The European soil information system: Proceedings of a Technical Consultation, Rome, Italy, 2-3 September 1999. Rome: Food and Agriculture Organization of the United Nations, 2000.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
10

Office, General Accounting. Information security: Update of data on employees affected by federal security programs : fact sheet for congressional requesters. Washington, D.C: The Office, 1989.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Wheat – Classification – Computer programs"

1

Jirapanthong, Waraporn, Winyu Niranatlamphong, and Karuna Yampray. "Applying a Classification Model for Selecting Postgraduate Programs." In Lecture Notes in Computer Science, 330–37. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-61833-3_35.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Siricharoen, Punnarai, Bryan Scotney, Philip Morrow, and Gerard Parr. "Automated Wheat Disease Classification Under Controlled and Uncontrolled Image Acquisition." In Lecture Notes in Computer Science, 456–64. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-20801-5_50.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Goodall, D. W., P. Ganis, and E. Feoli. "Probabilistic Methods in Classification: A Manual for Seven Computer Programs." In Computer assisted vegetation analysis, 453–67. Dordrecht: Springer Netherlands, 1991. http://dx.doi.org/10.1007/978-94-011-3418-7_40.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Karwande, Atharva, Pranesh Kulkarni, Pradyumna Marathe, Tejas Kolhe, Medha Wyawahare, and Pooja Kulkarni. "Computer Vision-Based Wheat Grading and Breed Classification System: A Design Approach." In Machine Learning and Information Processing, 403–13. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-33-4859-2_40.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Koza, John R. "Human-Competitive Machine Intelligence by Means of Genetic Algorithms." In Perspectives on Adaptation in Natural and Artificial Systems. Oxford University Press, 2005. http://dx.doi.org/10.1093/oso/9780195162929.003.0007.

Full text
Abstract:
The subtitle of John Holland's pioneering 1975 book Adaptation in Natural and Artificial Systems correctly anticipated that the genetic algorithm described in that book would have "applications to ... artificial intelligence." When the entities in the evolving population are computer programs, Holland's genetic algorithm can be used to perform the task of searching the space of computer programs for a program that solves, or approximately solves, a problem. This variation of the genetic algorithm (called genetic programming) enables the genetic algorithm to address the long-standing challenge of getting a computer to solve a problem without explicitly programming it. Specifically, this challenge calls for an automatic system whose input is a high-level statement of a problem's requirements and whose output is a satisfactory solution to the given problem. Paraphrasing Arthur Samuel [33], this challenge concerns "How can computers be made to do what needs to be done, without being told exactly how to do it?" This challenge is the common goal of such fields of research as artificial intelligence and machine learning. Arthur Samuel [32] offered one measure for success in this pursuit, namely "The aim [is] ... to get machines to exhibit behavior, which if done by humans, would be assumed to involve the use of intelligence." Since a problem can generally be recast as a search for a computer program, genetic programming can potentially solve a wide range of problems, including problems of control, classification, system identification, and design. Section 2 describes genetic programming. Section 3 states what we mean when we say that an automatically created solution to a problem is competitive with the product of human creativity. Section 4 discusses the illustrative problem of automatically synthesizing both the topology and sizing for an analog electrical circuit. Section 5 discusses the problem of automatically determining the placement and routing (while simultaneously synthesizing the topology and sizing) of an electrical circuit. Section 6 discusses the problem of automatically synthesizing both the topology and tuning for a controller. Section 7 discusses the importance of illogic in achieving creativity and inventiveness.
APA, Harvard, Vancouver, ISO, and other styles
6

Jansen, Nils, and Reinhard Zimmermann. "Sale of Goods." In Commentaries on European Contract Laws. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780198790693.003.0019.

Full text
Abstract:
It seems clear that sale is a contract providing for the exchange of a thing for money. Yet, in historical and comparative perspectives, the contours become blurred. What is a thing? What is money? Wherever a comprehensive statement of the rules governing sales was attempted, these questions had to be answered. As the following survey will show, some issues (like the correct classification of contracts for goods to be manufactured) seem perennial and ubiquitous, others (like the Romans’ difficulties regarding generic sales) have long become obsolete, and some (like the question of whether computer programs or Bitcoins can be sold like corporeal things) have only recently emerged.
APA, Harvard, Vancouver, ISO, and other styles
7

Sackin, M. J. "11 Computer Programs for Classification and Identification." In Methods in Microbiology, 459–94. Elsevier, 1988. http://dx.doi.org/10.1016/s0580-9517(08)70417-3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Olmo, Juan Luis, José Raúl Romero, and Sebastián Ventura. "Ant Programming Algorithms for Classification." In Advances in Data Mining and Database Management, 107–28. IGI Global, 2014. http://dx.doi.org/10.4018/978-1-4666-6078-6.ch005.

Full text
Abstract:
Ant programming is a kind of automatic programming that generates computer programs by using the ant colony metaheuristic as the search technique. It has demonstrated good generalization ability for the extraction of comprehensible classifiers. To date, three ant programming algorithms for classification rule mining have been proposed in the literature: two of them are devoted to regular classification, differing mainly in the optimization approach, single-objective or multi-objective, while the third one is focused on imbalanced domains. This chapter collects these algorithms, presenting different experimental studies that confirm the aptitude of this metaheuristic to address this data-mining task.
APA, Harvard, Vancouver, ISO, and other styles
9

Panda, Mrutyunjaya, and Ahmad Taher Azar. "Hybrid Multi-Objective Grey Wolf Search Optimizer and Machine Learning Approach for Software Bug Prediction." In Advances in Systems Analysis, Software Engineering, and High Performance Computing, 314–37. IGI Global, 2021. http://dx.doi.org/10.4018/978-1-7998-5788-4.ch013.

Full text
Abstract:
Software bugs (or malfunctions) pose a serious threat to software developers with many known and unknown bugs that may be vulnerable to computer systems, demanding new methods, analysis, and techniques for efficient bug detection and repair of new unseen programs at a later stage. This chapter uses evolutionary grey wolf (GW) search optimization as a feature selection technique to improve classifier efficiency. It is also envisaged that software error detection would consider the nature of the error when repairing it for remedial action instead of simply finding it either faulty or non-defective. To address this problem, the authors use bug severity multi-class classification to build an efficient and robust prediction model using multilayer perceptron (MLP), logistic regression (LR), and random forest (RF) for bug severity classification. Both tests are performed on two software error datasets, namely Ant 1.7 and Tomcat.
APA, Harvard, Vancouver, ISO, and other styles
10

Aldwairi, Monther, Musaab Hasan, and Zayed Balbahaith. "Detection of Drive-by Download Attacks Using Machine Learning Approach." In Cognitive Analytics, 1598–611. IGI Global, 2020. http://dx.doi.org/10.4018/978-1-7998-2460-2.ch082.

Full text
Abstract:
Drive-by download refers to attacks that automatically download malware to a user's computer without his knowledge or consent. This type of attack is accomplished by exploiting web browser and plugin vulnerabilities. The damage may include data leakage leading to financial loss. Traditional antivirus and intrusion detection systems are not efficient against such attacks. Researchers have proposed plenty of detection approaches, mostly passive blacklisting. However, a few proposed dynamic classification techniques, which suffer from clear shortcomings. In this paper, we propose a novel approach to detect drive-by download infected web pages based on features extracted from their source code. We test 23 different machine learning classifiers using a data set of 5435 webpages and, based on the detection accuracy, we selected the top five to build our detection model. The approach is expected to serve as a base for implementing and developing anti-drive-by-download programs. We develop a graphical user interface program to allow the end user to examine the URL before visiting the website. The Bagged Trees classifier exhibited the highest accuracy of 90.1% and reported a 96.24% true positive rate and a 26.07% false positive rate.
APA, Harvard, Vancouver, ISO, and other styles
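The chapter above extracts features from a page's source code and trains bagged trees on them. A minimal, hypothetical version of that idea is sketched below; the specific features, the toy pages, and the classifier settings are assumptions, not the chapter's actual feature set or data.

```python
import re
from sklearn.ensemble import BaggingClassifier

def page_features(html):
    """A few of the kinds of source-code features such detectors use;
    the chapter's exact feature set is not reproduced here."""
    return [
        len(html),
        html.lower().count("<script"),
        html.lower().count("<iframe"),
        html.count("eval("),
        html.count("unescape("),
        len(re.findall(r"%[0-9a-fA-F]{2}", html)),   # percent-encoded characters
    ]

# Invented miniature training set: 1 = drive-by download page, 0 = benign.
pages = ["<html><script>eval(unescape('%61%6c'))</script></html>",
         "<html><body>Hello world</body></html>",
         "<html><iframe src='http://example.org/x'></iframe><script></script></html>",
         "<html><p>Recipe for bread</p></html>"]
y = [1, 0, 1, 0]
X = [page_features(p) for p in pages]

# BaggingClassifier's default base estimator is a decision tree, i.e. bagged trees.
clf = BaggingClassifier(n_estimators=25, random_state=0).fit(X, y)
print(clf.predict([page_features("<script>eval('x')</script>")]))
```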

Conference papers on the topic "Wheat – Classification – Computer programs"

1

Wentao, Song. "Classification Model of Wheat Grain based on Autoencoder." In 2020 IEEE International Conference on Artificial Intelligence and Computer Applications (ICAICA). IEEE, 2020. http://dx.doi.org/10.1109/icaica50127.2020.9181940.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

"CONCEPT CLASSIFICATION FOR STUDY PROGRAMS QUALITY EVALUATION." In 2nd International Conference on Computer Supported Education. SciTePress - Science and and Technology Publications, 2010. http://dx.doi.org/10.5220/0002775404410445.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Ma, Gang, and Liping Sun. "The Design and Implementation of FPSO Mooring Management System." In ASME 2008 27th International Conference on Offshore Mechanics and Arctic Engineering. ASMEDC, 2008. http://dx.doi.org/10.1115/omae2008-57895.

Full text
Abstract:
FPSOs (Floating Production Storage and Offloading units) are now widely used in the development of offshore petroleum/gas reservoirs and are becoming one of the most important production facilities. An FPSO, which is a floating structure, has the virtues of seakeeping, permanent mooring, large oil storage, offshore unloading, recursive use, and so on. FPSOs can be used in many kinds of oil fields. The paper mainly introduces the FPSO Mooring Management System software, which is designed for the CNOOC Oil Base Group. There are 17 FPSOs serving in the China Sea area, so FPSO mooring system issues are becoming more and more serious. The main failures of mooring systems are as follows: breakage of the main bearing, fracture of the YOKE transverse beam, cracking of the anchor chain, leakage of the swivel head, local damage under typhoon, etc. The FPSO Mooring Management System is designed to track and control the FPSO mooring situation and to keep detailed information on FPSO mooring management, operation, maintenance, etc. The system brings together key data on 136 FPSOs around the world that are in service, under construction, or under maintenance. The system also provides technical support for company decision making. The FPSO Mooring Management System is built with JSP, using ORACLE as the database, so it can run on any operating system. The system allows folders and EXCEL files to be uploaded to the database to save time and decrease errors in managing large amounts of data. There are many kinds of files, which should be classified by subject. To satisfy the need to retrieve unified information beyond any single classification, the system is designed with an open structure: it allows users to store files under any classification and gives them the power to change it. The system offers different search methods by which users can find exactly what they want, and then launches the corresponding programs on the computer to open these files. This system meets the requirement of users to get what they want efficiently.
APA, Harvard, Vancouver, ISO, and other styles
4

Ennadifi, Elias, Sohaib Laraba, Damien Vincke, Benoit Mercatoris, and Bernard Gosselin. "Wheat Diseases Classification and Localization Using Convolutional Neural Networks and GradCAM Visualization." In 2020 International Conference on Intelligent Systems and Computer Vision (ISCV). IEEE, 2020. http://dx.doi.org/10.1109/iscv49265.2020.9204258.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Zheng, Hui, Laijun Sun, Guangyan Hui, Xiaodong Mao, and Shang Gao. "The modeling research of wheat classification based on NIR and RBF neural network." In 2013 3rd International Conference on Computer Science and Network Technology (ICCSNT). IEEE, 2013. http://dx.doi.org/10.1109/iccsnt.2013.6967300.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Chan, W. K., Jeffrey C. F. Ho, and T. H. Tse. "Piping Classification to Metamorphic Testing: An Empirical Study towards Better Effectiveness for the Identification of Failures in Mesh Simplification Programs." In 31st Annual International Computer Software and Applications Conference - Vol. 1- (COMPSAC 2007). IEEE, 2007. http://dx.doi.org/10.1109/compsac.2007.167.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Lucca Junior, Horacio Emidio de, and Arnaldo Rodrigues Santos Jr. "Classification of Mammographic Images by Openvino: A Proposal of use to Enhance More Effectivity in Cancer Diagnosis." In 2nd International Conference on Machine Learning, IOT and Blockchain (MLIOB 2021). Academy and Industry Research Collaboration Center (AIRCC), 2021. http://dx.doi.org/10.5121/csit.2021.111206.

Full text
Abstract:
Diseases that are characterized by the disordered growth of cells that, in many cases, have the property of invading tissues and organs are commonly called cancer. Such cells divide quickly and the invasion can be very aggressive and uncontrolled, resulting in the formation of malignant tumors. Mammographic images from libraries of the American digital database DDSM were used in this research for digital enhancement and characteristic analysis using the OpenVino computer program. This work has as its main objective to analyze mammography images of breast nodules and to propose a method of classification by shape and texture using computer programs that can maximize the accuracy of the correct diagnosis regarding the malignancy or not of a tumor. It is a tool that can be useful as a contribution to the interpretation of results by mastologists who identify such nodules through the analyzed radiological images.
APA, Harvard, Vancouver, ISO, and other styles
8

Jo, D. Y., and E. J. Haug. "Workspace Analysis of Multibody Mechanical Systems Using Continuation Methods." In ASME 1988 Design Technology Conferences. American Society of Mechanical Engineers, 1988. http://dx.doi.org/10.1115/detc1988-0057.

Full text
Abstract:
A new approach to the numerical analysis of workspaces of multibody mechanical systems is developed. Numerical techniques that are based on manifold theory and utilize continuation methods are presented and applied to a variety of mechanical systems, including closed-loop mechanisms. Generalized coordinates that define the kinematics of a system are classified and interpreted from an input-output point of view. Boundaries of workspaces, which depend on the classification of generalized coordinates, are defined as sets of singular points of Jacobians of the kinematic equations. Numerical methods for tracing one-dimensional trajectories on a workspace boundary are outlined and examples are analyzed using one-dimensional manifold mapping computer programs, such as PITCON and AUTO.
APA, Harvard, Vancouver, ISO, and other styles
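The paper above defines workspace boundaries as sets of singular points of the kinematic Jacobian and traces them with continuation programs such as PITCON and AUTO. For a planar two-link arm the singular set is available in closed form, which allows a compact illustration (not taken from the paper; the link lengths are arbitrary):

```python
import numpy as np

# For a planar two-link arm with joint angles (q1, q2), the end-effector position is
#   x = l1*cos(q1) + l2*cos(q1 + q2),  y = l1*sin(q1) + l2*sin(q1 + q2),
# and det(J) = l1*l2*sin(q2), so the singular configurations q2 = 0 and q2 = pi
# map to the outer and inner workspace boundaries.
l1, l2 = 1.0, 0.6   # arbitrary link lengths

def forward(q1, q2):
    return (l1 * np.cos(q1) + l2 * np.cos(q1 + q2),
            l1 * np.sin(q1) + l2 * np.sin(q1 + q2))

def jacobian_det(q2):
    return l1 * l2 * np.sin(q2)

# Trace the outer boundary by sweeping q1 along the singular set q2 = 0.
q1 = np.linspace(0.0, 2.0 * np.pi, 200)
outer = np.array([forward(a, 0.0) for a in q1])
print("det(J) on the outer boundary:", jacobian_det(0.0))
print("outer boundary radius ~", np.linalg.norm(outer[0]))  # equals l1 + l2
```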
9

Ren, Huilong, Kaihong Zhang, Hui Li, and Di Wang. "Large Containerships’ Fatigue Analysis due to Springing and Whipping." In ASME 2016 35th International Conference on Ocean, Offshore and Arctic Engineering. American Society of Mechanical Engineers, 2016. http://dx.doi.org/10.1115/omae2016-54525.

Full text
Abstract:
As sea transport demand increases constantly, marine corporations around the world are pursuing large-scale, low-cost solutions, which makes the construction of ultra large containerships a natural consequence. Ultra large containerships are relatively more flexible, and the 2-node natural frequency can easily fall into the encounter spectrum frequency range of normal sea states. Meanwhile, as containerships have high speed and large bow flare, when sailing at high speed the bow structures may suffer severe slamming forces, which can increase the design wave load level and the fatigue damage. The importance of hydroelastic analysis of today's large and flexible containerships for structural design has been pointed out. The rules of many classification societies have introduced changes to design wave load values and fatigue influence factor modifications. The paper first introduces 3-D linear hydroelasticity theory to calculate the Response Amplitude Operator (RAO) in the frequency domain, and then describes 3-D nonlinear hydroelasticity theory to obtain the nonlinear wave load time history in irregular waves in the time domain, considering large-amplitude motion and slamming forces due to severe relative motion between the ship hull and the waves. Based on these theories, computer programs are developed to conduct the calculations under specified load cases, and some calculation and statistical results are compared with experimental results to verify the accuracy and stability of the programs. The paper focuses on the influence of springing and whipping on the fatigue damage of 8500TEU and 10000TEU containerships in different loading cases, using the spectral analysis method and the time domain statistical analysis method. The spectral analysis method can calculate fatigue damage due to low-frequency wave loads and high-frequency springing separately, while the time domain statistical analysis can additionally calculate fatigue damage due to high-frequency damped whipping, based on 3-D time domain nonlinear hydroelastic wave load time series simulation in irregular waves and the rain flow counting method. Finally, the influence factors of springing and whipping in different loading cases are discussed. For the two example containerships, the fatigue damage due to whipping can be the same as, and sometimes even larger than, the fatigue damage due to springing. According to the wave load influence factor, the fatigue assessment of different positions on the midship section is done on the basis of nominal stress. In addition, some suggestions on the selection of calculation load cases are made to minimize the amount of work in the frequency and time domains. Thus, tools for fatigue influence factor modification are provided to meet the requirements of IACS UR [1].
APA, Harvard, Vancouver, ISO, and other styles
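The fatigue comparisons described above ultimately reduce to summing damage contributions from wave-frequency, springing, and whipping stress cycles. The sketch below applies the Palmgren-Miner rule with a one-slope S-N curve; the S-N constants and the cycle counts are invented and are not the paper's results.

```python
# Palmgren-Miner damage summation with a one-slope S-N curve N(S) = K * S**(-m).
# The constants K and m and the stress-range histograms are invented for illustration.
K, m = 1.0e12, 3.0

def miner_damage(stress_ranges_mpa, cycles):
    """Sum n_i / N_i over the stress-range bins of one load component."""
    return sum(n * s ** m / K for s, n in zip(stress_ranges_mpa, cycles))

components = {
    # stress ranges (MPa) and cycle counts over the design life - hypothetical
    "wave-frequency": ([40, 60, 80], [2.0e6, 4.0e5, 5.0e4]),
    "springing":      ([15, 25, 35], [1.5e7, 3.0e6, 5.0e5]),
    "whipping":       ([50, 80, 110], [2.0e5, 4.0e4, 5.0e3]),
}

total = 0.0
for name, (ranges, counts) in components.items():
    d = miner_damage(ranges, counts)
    total += d
    print(f"{name:>15s}: D = {d:.3f}")
print(f"{'total':>15s}: D = {total:.3f}  (D = 1 corresponds to predicted failure)")
```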

To the bibliography