Academic literature on the topic 'ACM Computing Classification System'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'ACM Computing Classification System.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "ACM Computing Classification System"

1

Maruta, Tatsuya. "Construction of Optimal Linear Codes by Geometric Puncturing." Serdica Journal of Computing 7, no. 1 (2013): 73–80. http://dx.doi.org/10.55630/sjc.2013.7.73-80.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Ghorbani, Modjtaba. "Remarks on the Balaban Index." Serdica Journal of Computing 7, no. 1 (2013): 25–34. http://dx.doi.org/10.55630/sjc.2013.7.25-34.

Full text
Abstract:
In this paper we compute some bounds on the Balaban index and then, by means of group actions, compute the Balaban index of vertex-transitive graphs. ACM Computing Classification System (1998): G.2.2, F.2.2.
APA, Harvard, Vancouver, ISO, and other styles
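As an editorial aside on the index discussed in the abstract above: the Balaban index of a graph G with m edges, n vertices and vertex distance sums σ(v) is J(G) = m/(μ+1) · Σ over edges uv of (σ(u)σ(v))^(-1/2), with μ = m − n + 1. The following Python sketch (an illustration using networkx, not the author's code) computes it directly from this definition.

```python
# A hedged illustration (not the author's code): the Balaban index computed
# from its standard definition with networkx.
import math
import networkx as nx

def balaban_index(G: nx.Graph) -> float:
    n, m = G.number_of_nodes(), G.number_of_edges()
    mu = m - n + 1                                   # cyclomatic number
    dist = dict(nx.all_pairs_shortest_path_length(G))
    sigma = {v: sum(dist[v].values()) for v in G}    # distance sum of each vertex
    return m / (mu + 1) * sum(1 / math.sqrt(sigma[u] * sigma[v])
                              for u, v in G.edges())

# The cycle C6, a vertex-transitive graph: J(C6) = 2
print(round(balaban_index(nx.cycle_graph(6)), 4))
```

For the vertex-transitive cycle C6 the value is 2, which matches a hand computation from the formula.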
3

Von Collani, Elart. "Five Turning Points in the Historical Progress of Statistics - My Personal Vision." Serdica Journal of Computing 8, no. 3 (2015): 199–226. http://dx.doi.org/10.55630/sjc.2014.8.199-226.

Full text
Abstract:
Statistics has penetrated almost all branches of science and all areas of human endeavor. At the same time, statistics is not only misunderstood, misused and abused to a frightening extent, but it is also often much disliked by students in colleges and universities. This lecture discusses the historical development of statistics, aiming at identifying the most important turning points that led to the present state of statistics and at answering the questions “What went wrong with statistics?” and “What to do next?”. ACM Computing Classification System (1998): A.0, A.m, G.3, K.3.2.
APA, Harvard, Vancouver, ISO, and other styles
4

Bouyukliev, Iliya, and Mariya Dzhumalieva-Stoeva. "Representing Equivalence Problems for Combinatorial Objects." Serdica Journal of Computing 8, no. 4 (2015): 327–54. http://dx.doi.org/10.55630/sjc.2014.8.327-354.

Full text
Abstract:
Methods for representing equivalence problems of various combinatorial objects as graphs or binary matrices are considered. Such representations can be used for isomorphism testing in classification or generation algorithms. Often it is easier to consider a graph or a binary matrix isomorphism problem than to implement heavy algorithms depending especially on particular combinatorial objects. Moreover, there already exist well-tested algorithms for the graph isomorphism problem (nauty) and the binary matrix isomorphism problem as well (Q-Extension). ACM Computing Classification System (1998): F.2.1, G.4.
APA, Harvard, Vancouver, ISO, and other styles
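To illustrate the kind of reduction described in the abstract above (a sketch of the general idea, not the nauty or Q-Extension code): a 0/1 matrix can be turned into a bipartite graph with row and column vertices, so that two matrices are equivalent under row and column permutations exactly when the corresponding part-coloured graphs are isomorphic. The VF2 matcher in networkx stands in for nauty here.

```python
# A hedged illustration (not nauty or Q-Extension): equivalence of two 0/1
# matrices under row and column permutations, reduced to isomorphism of
# part-coloured bipartite graphs and decided with networkx's VF2 matcher.
import networkx as nx
from networkx.algorithms import isomorphism as iso

def matrix_to_bipartite(M):
    G = nx.Graph()
    rows, cols = len(M), len(M[0])
    G.add_nodes_from((('r', i) for i in range(rows)), part='row')
    G.add_nodes_from((('c', j) for j in range(cols)), part='col')
    G.add_edges_from((('r', i), ('c', j))
                     for i in range(rows) for j in range(cols) if M[i][j])
    return G

def equivalent(M1, M2) -> bool:
    gm = iso.GraphMatcher(matrix_to_bipartite(M1), matrix_to_bipartite(M2),
                          node_match=iso.categorical_node_match('part', None))
    return gm.is_isomorphic()

A = [[1, 0], [0, 1]]
B = [[0, 1], [1, 0]]        # A with its columns swapped
print(equivalent(A, B))     # True
```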
5

López-Ibáñez, Manuel, Juergen Branke, and Luís Paquete. "Reproducibility in Evolutionary Computation." ACM Transactions on Evolutionary Learning and Optimization 1, no. 4 (2021): 1–21. http://dx.doi.org/10.1145/3466624.

Full text
Abstract:
Experimental studies are prevalent in Evolutionary Computation (EC), and concerns about the reproducibility and replicability of such studies have increased in recent times, reflecting similar concerns in other scientific fields. In this article, we discuss, within the context of EC, the different types of reproducibility and suggest a classification that refines the badge system of the Association for Computing Machinery (ACM) adopted by ACM Transactions on Evolutionary Learning and Optimization (TELO). We identify cultural and technical obstacles to reproducibility in the EC field. Finally, we provide guidelines and suggest tools that may help to overcome some of these reproducibility obstacles.
APA, Harvard, Vancouver, ISO, and other styles
6

Topalova, Svetlana, and Stela Zhelezova. "Orthogonal Resolutions and Latin Squares." Serdica Journal of Computing 7, no. 1 (2013): 13–24. http://dx.doi.org/10.55630/sjc.2013.7.13-24.

Full text
Abstract:
Resolutions which are orthogonal to at least one other resolution (RORs) and sets of m mutually orthogonal resolutions (m-MORs) of 2-(v, k, λ) designs are considered. A dependence of the number of nonisomorphic RORs and m-MORs of multiple designs on the number of inequivalent sets of v/k − 1 mutually orthogonal Latin squares (MOLS) of size m is obtained. ACM Computing Classification System (1998): G.2.1.
APA, Harvard, Vancouver, ISO, and other styles
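As a small illustration of the mutual orthogonality used in the abstract above (a textbook check, not the authors' code): two Latin squares of order n are orthogonal when superimposing them produces every ordered pair of symbols exactly once.

```python
# A hedged illustration (not the authors' code): two Latin squares of order n
# are orthogonal if superimposing them yields every ordered pair exactly once.
def are_orthogonal(L1, L2) -> bool:
    n = len(L1)
    pairs = {(L1[i][j], L2[i][j]) for i in range(n) for j in range(n)}
    return len(pairs) == n * n

# A pair of mutually orthogonal Latin squares (MOLS) of order 3
L1 = [[0, 1, 2], [1, 2, 0], [2, 0, 1]]
L2 = [[0, 1, 2], [2, 0, 1], [1, 2, 0]]
print(are_orthogonal(L1, L2))   # True
```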
7

Dobrinkova, Nina. "An Overview of Modelling Bulgarian Wildland Fire Behaviour by Application of a Mathematical Game Method and WRF-Fire Models." Serdica Journal of Computing 6, no. 4 (2013): 451–66. http://dx.doi.org/10.55630/sjc.2012.6.451-466.

Full text
Abstract:
This paper presents the main achievements of the author’s PhD dissertation. The work is dedicated to mathematical and semi-empirical approaches applied to the case of Bulgarian wildland fires. After the introductory explanations, short information from every chapter is extracted to cover the main parts of the obtained results. The methods used are described in brief and main outcomes are listed. ACM Computing Classification System (1998): D.1.3, D.2.0, K.5.1.
APA, Harvard, Vancouver, ISO, and other styles
8

Sajid, Naseer Ahmed, Munir Ahmad, Muhammad Tanvir Afzal, and Atta-ur-Rahman. "Exploiting Papers’ Reference’s Section for Multi-Label Computer Science Research Papers’ Classification." Journal of Information & Knowledge Management 20, no. 01 (2021): 2150004. http://dx.doi.org/10.1142/s0219649221500040.

Full text
Abstract:
The profusion of document production at an exponential rate over the web has made it difficult for the scientific community to retrieve the most relevant information for a query. The research community is busy proposing innovative mechanisms to ensure flexible document retrieval. Document classification is a core concept of information retrieval that assigns documents to predefined categories. In the scientific domain, classifying documents into predefined categories is an important research problem and supports a number of tasks such as information retrieval, finding experts, recommender systems, etc. In Computer Science, the Association for Computing Machinery (ACM) categorization system is commonly used for organizing research papers in the topical hierarchy defined by the ACM. Accurately assigning a research paper to a predefined category (ACM topic) is a difficult task, especially when the paper belongs to multiple topics. In this paper, we exploit the reference section of a research paper to predict the topics of the paper. We propose a framework called Category-Based Category Identification (CBCI) for multi-label research paper classification. The approach extracts references from the training dataset and groups them into Topic-Reference (TR) pairs of the form TR {Topic, Reference}. The references of the focused paper are parsed and compared against these TR {Topic, Reference} pairs, and the approach collects the corresponding list of topics matched with those references. We evaluated the technique on two datasets, the Journal of Universal Computer Science (JUCS) and ACM. The proposed approach is able to predict the first node of the ACM topic (topics A to K) with 74% accuracy on both the JUCS and ACM datasets for multi-label classification.
APA, Harvard, Vancouver, ISO, and other styles
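A hedged sketch of the Topic-Reference matching idea the abstract describes; the function names and toy data below are illustrative assumptions, not the authors' CBCI implementation.

```python
# A hedged sketch of the Topic-Reference matching idea; names and toy data are
# illustrative assumptions, not the authors' CBCI implementation.
from collections import Counter, defaultdict

def build_topic_reference_pairs(training_papers):
    """training_papers: iterable of (topics, references) tuples."""
    ref_to_topics = defaultdict(set)
    for topics, references in training_papers:
        for ref in references:
            ref_to_topics[ref].update(topics)
    return ref_to_topics

def predict_topics(references, ref_to_topics, k=2):
    # A new paper inherits the topics attached to the references it cites.
    votes = Counter(t for ref in references for t in ref_to_topics.get(ref, ()))
    return [topic for topic, _ in votes.most_common(k)]

training = [({'H.3'}, ['ref1', 'ref2']), ({'I.2'}, ['ref2', 'ref3'])]
tr_pairs = build_topic_reference_pairs(training)
print(predict_topics(['ref2', 'ref3'], tr_pairs))   # ['I.2', 'H.3']
```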
9

Kratica, Jozef. "A Mixed Integer Quadratic Programming Model for the Low Autocorrelation Binary Sequence Problem." Serdica Journal of Computing 6, no. 4 (2013): 385–400. http://dx.doi.org/10.55630/sjc.2012.6.385-400.

Full text
Abstract:
In this paper the low autocorrelation binary sequence problem (LABSP) is modeled as a mixed integer quadratic programming (MIQP) problem and proof of the model’s validity is given. Since the MIQP model is semidefinite, general optimization solvers can be used, and converge in a finite number of iterations. The experimental results show that IQP solvers, based on this MIQP formulation, are capable of optimally solving general/skew-symmetric LABSP instances of up to 30/51 elements in a moderate time. ACM Computing Classification System (1998): G.1.6, I.2.8.
APA, Harvard, Vancouver, ISO, and other styles
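For context on the objective mentioned in the abstract above (an illustration, not the paper's MIQP model): the low autocorrelation binary sequence problem minimises the energy E(s), the sum of the squared off-peak autocorrelations C_k(s) of a ±1 sequence s.

```python
# A hedged illustration (not the paper's MIQP model): the LABSP objective,
# i.e. the autocorrelation energy of a +/-1 sequence,
# E(s) = sum_k C_k(s)^2 with C_k(s) = sum_i s_i * s_{i+k}.
def autocorrelation_energy(s):
    n = len(s)
    return sum(sum(s[i] * s[i + k] for i in range(n - k)) ** 2
               for k in range(1, n))

# The length-5 Barker sequence has energy 2 (all sidelobes are 0 or +/-1)
print(autocorrelation_energy([1, 1, 1, -1, 1]))   # 2
```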
10

Hossein-Zadeh, S., A. Hamzeh, and A. R. Ashrafi. "The Wiener, Eccentric Connectivity and Zagreb Indices of the Hierarchical Product of Graphs." Serdica Journal of Computing 6, no. 4 (2013): 409–18. http://dx.doi.org/10.55630/sjc.2012.6.409-418.

Full text
Abstract:
Let G1 = (V1, E1) and G2 = (V2, E2) be two graphs having a distinguished or root vertex, labeled 0. The hierarchical product G2 ⊓ G1 of G2 and G1 is a graph with vertex set V2 × V1. Two vertices y2y1 and x2x1 are adjacent if and only if y1x1 ∈ E1 and y2 = x2; or y2x2 ∈ E2 and y1 = x1 = 0. In this paper, the Wiener, eccentric connectivity and Zagreb indices of this new operation of graphs are computed. As an application, these topological indices for a class of alkanes are computed. ACM Computing Classification System (1998): G.2.2, G.2.3.
APA, Harvard, Vancouver, ISO, and other styles
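The adjacency rule quoted in the abstract above is concrete enough to code directly. The sketch below (an illustration with networkx, not the authors' code) builds G2 ⊓ G1 from that rule and evaluates the Wiener index, i.e. the sum of distances over all vertex pairs.

```python
# A hedged illustration (not the authors' code): the hierarchical product
# G2 ⊓ G1 built from the adjacency rule quoted in the abstract (vertex 0 is
# the root of G1), plus the Wiener index as the sum of all pairwise distances.
import itertools
import networkx as nx

def hierarchical_product(G2: nx.Graph, G1: nx.Graph) -> nx.Graph:
    H = nx.Graph()
    H.add_nodes_from(itertools.product(G2.nodes, G1.nodes))
    nodes = list(H.nodes)
    for (y2, y1), (x2, x1) in itertools.combinations(nodes, 2):
        if (y2 == x2 and G1.has_edge(y1, x1)) or \
           (y1 == x1 == 0 and G2.has_edge(y2, x2)):
            H.add_edge((y2, y1), (x2, x1))
    return H

def wiener_index(G: nx.Graph) -> int:
    dist = dict(nx.all_pairs_shortest_path_length(G))
    return sum(dist[u][v] for u, v in itertools.combinations(G.nodes, 2))

H = hierarchical_product(nx.path_graph(2), nx.path_graph(3))
print(wiener_index(H))   # 35: the product of two rooted paths is a 6-vertex path
```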
More sources

Dissertations / Theses on the topic "ACM Computing Classification System"

1

Chen, Yinlin. "A High-quality Digital Library Supporting Computing Education: The Ensemble Approach." Diss., Virginia Tech, 2017. http://hdl.handle.net/10919/78750.

Full text
Abstract:
Educational Digital Libraries (DLs) are complex information systems which are designed to support individuals' information needs and information seeking behavior. To have a broad impact on the communities in education and to serve for a long period, DLs need to structure and organize the resources in a way that facilitates the dissemination and the reuse of resources. Such a digital library should meet defined quality dimensions in the 5S (Societies, Scenarios, Spaces, Structures, Streams) framework - including completeness, consistency, efficiency, extensibility, and reliability - to ensure that a good quality DL is built. In this research, we addressed both external and internal quality aspects of DLs. For internal qualities, we focused on completeness and consistency of the collection, catalog, and repository. We developed an application pipeline to acquire user-generated computing-related resources from YouTube and SlideShare for an educational DL. We applied machine learning techniques to transfer what we learned from the ACM Digital Library dataset. We built classifiers to catalog resources according to the ACM Computing Classification System from the two new domains that were evaluated using Amazon Mechanical Turk. For external qualities, we focused on efficiency, scalability, and reliability in DL services. We proposed cloud-based designs and applications to ensure and improve these qualities in DL services using cloud computing. The experimental results show that our proposed methods are promising for enhancing and enriching an educational digital library. This work received support from ACM, as well as the National Science Foundation under Grant Numbers DUE-0836940, DUE-0937863, and DUE-0840719, and IMLS LG-71-16-0037-16.
APA, Harvard, Vancouver, ISO, and other styles
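A minimal sketch of the kind of supervised cataloguing the abstract above mentions, assuming a TF-IDF representation and a linear SVM from scikit-learn; the texts, labels and model choice are placeholders, not the Ensemble pipeline or the ACM Digital Library data.

```python
# A hedged sketch, assuming a TF-IDF representation and a linear SVM from
# scikit-learn; the texts and ACM CCS labels are made-up placeholders, not the
# Ensemble pipeline or the ACM Digital Library data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

train_texts = ["lecture on sorting algorithms and complexity analysis",
               "tutorial on relational databases and SQL queries"]
train_labels = ["F. Theory of Computation", "H. Information Systems"]

clf = make_pipeline(TfidfVectorizer(), LinearSVC())
clf.fit(train_texts, train_labels)
# Assign an ACM CCS top-level class to a new resource description
print(clf.predict(["slides about query optimization in databases"]))
```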
2

Stolpmann, Alexander. "An intelligent soft-computing texture classification system." Thesis, University of South Wales, 2005. https://pure.southwales.ac.uk/en/studentthesis/an-intelligent-softcomputing-texture-classification-system(a43eb831-a799-438b-9112-3ce1df432fe9).html.

Full text
Abstract:
The aim of this research work was to obtain a system that classifies texture. This so-called Texture Classification System is not a system for one special task or group of tasks. It is a general approach that shows a way towards real artificial vision. Finding ways to enable computerised systems to visually recognise their surroundings is of increasing importance for the industry and society at large. To reach this goal not only objects but less well describable texture has to be identified within an image. To achieve this aim a number of objectives had to be met. At first a review of how natural vision works was done to better understand the complexity of visual systems. This is followed by a more detailed definition of what texture is. Next a review of image processing techniques, of statistical methods and of soft-computing methods was made to identify those that can be used or improved for the Texture Classification System. A major objective was to create the structure of the Texture Classification System. The design presented in this work is the framework for a multitude of modules arranged in groups and layers with multiple feedback and optimisation possibilities. The main achievement is a system for texture classification for which natural vision was used as a "blue-print". A more detailed definition of what texture is was made and a new texture library was started. The close review of image processing techniques provided a variety of applicable methods, as did the review and enhancement of statistical methods. Some of those methods were improved or used in a new way. Neural networks and fuzzy clustering were applied for classification, while genetic algorithms provide a means for self optimisation. The concepts and methods have been used for a number of projects next to texture classification itself. This work presents applications for fault detection in glass container manufacturing, quality control of veneer, positioning control of steel blocks in a rotation oven, and measurement of hair gloss. With the Texture Classification System a new, holistic approach for complex image processing and artificial vision tasks is being contributed. It uses a modular combination of statistics, image processing and soft-computing methods, easily adaptable to new tasks, includes new ideas for high order statistics, and incorporates self optimisation to achieve lean sub-systems. The system allows multiple feedbacks and includes a border detection routine. The new texture library provides images for future work of researchers. Still a lot of work has to be done in the future to achieve an artificial vision system that is comparable to human visual capabilities. This is mainly due to a lack of computational resources. At least another decade of hardware development is needed to reach this goal. During this time more, better or even novel methods will be added to the Texture Classification System to improve its universal capabilities.
APA, Harvard, Vancouver, ISO, and other styles
3

Furrer, Frank J., and Georg Püschel. "From Algorithmic Computing to Autonomic Computing." Technische Universität Dresden, 2018. https://tud.qucosa.de/id/qucosa%3A30773.

Full text
Abstract:
In algorithmic computing, the program follows a predefined set of rules – the algorithm. The analyst/designer of the program analyzes the intended tasks of the program, defines the rules for its expected behaviour and programs the implementation. The creators of algorithmic software must therefore foresee, identify and implement all possible cases for its behaviour in the future application! However, what if the problem is not fully defined? Or the environment is uncertain? What if situations are too complex to be predicted? Or the environment is changing dynamically? In many such cases algorithmic computing fails. In such situations, the software needs an additional degree of freedom: Autonomy! Autonomy allows software to adapt to partially defined problems, to uncertain or dynamically changing environments and to situations that are too complex to be predicted. As more and more applications – such as autonomous cars and planes, adaptive power grid management, survivable networks, and many more – fall into this category, a gradual switch from algorithmic computing to autonomic computing takes place. Autonomic computing has become an important software engineering discipline with a rich literature, an active research community, and a growing number of applications. Contents: Introduction; 1. A Process Data Based Autonomic Optimization of Energy Efficiency in Manufacturing Processes (Daniel Höschele); 2. Eine autonome Optimierung der Stabilität von Produktionsprozessen auf Basis von Prozessdaten (Richard Horn); 3. Assuring Safety in Autonomous Systems (Christian Rose); 4. MAPE-K in der Praxis - Grundlage für eine mögliche automatische Ressourcenzuweisung in der Cloud (Michael Schneider).
APA, Harvard, Vancouver, ISO, and other styles
4

Marquez, Astrid. "Use of multispectral data to identify farm intensification levels by applying emergent computing techniques." Thesis, University of Edinburgh, 2012. http://hdl.handle.net/1842/6232.

Full text
Abstract:
Concern about feeding an ever-increasing population has long been one of humankind’s most pressing problems. This has been addressed throughout history by introducing into farming systems changes allowing them to produce more per unit of land area. However, these changes have also been linked to negative effects on the socio-economic and environmental spheres, which have created the need for an integral understanding of this phenomenon. This thesis describes the application of machine learning methods to induce a relationship between the spectral response of farms’ land cover and their intensification levels from a sample of farms in Urdaneta municipality, Aragua state of Venezuela. Data collection like this is a necessary first step toward implementing cost-effective methods that can help policymakers to conduct successful planning tasks, especially in countries such as Venezuela where, in spite of there being areas capable of agricultural production, nearly 50% of the internal food requirements of recent years have been met by imports. In this work, farm intensification levels are investigated through a sample of farms of Urdaneta Municipality, Aragua state of Venezuela. This area is characterised by a wide diversity of farming systems ranging from crop to crop-livestock systems and an increasing population density in regions capable of livestock and arable farming, making it a representative case of the main tropical rural zones. The methodology applied can be divided into two main phases. First, an unsupervised classification was performed by applying principal component analysis and agglomerative clustering methods to a set of land use and land management indicators, with the aim of segregating farms into homogeneous groups from the intensification point of view. This procedure resulted in three clusters, which were named extensive, semi-intensive and intensive. The land use indicators included the percentage area within each farm devoted to annual crops, orchard and pasture, while the land management indicators were the percentage of cultivated land under irrigation, stocking rate, machinery and equipment index, and permanent and temporary staff ratio, all of them built from data held in the 1996–1997 Venezuelan agricultural census. The clusters obtained were compared to the ones produced by the self-organizing map, which is also an unsupervised classification technique, as a way to confirm the groups’ existence. In the second stage, the kernel adatron algorithm was implemented seeking to identify the intensification level of Urdaneta farms from a Landsat image, which consisted of two sequential steps, namely training and validation. In the training step, a predetermined number of instances randomly selected from the data set were analysed looking for a pattern to establish a relationship between the label and the spectral response, in an iterative process which concluded when the machine found a linear function capable of separating the two classes with a maximum margin. The supervised classification finishes with the validation, in which the kernel adatron classifies the unseen samples by using a generalisation of the relationships learned during training. Results suggest that farm intensification levels can be effectively derived from multi-spectral data by adopting a machine learning approach like the one described.
APA, Harvard, Vancouver, ISO, and other styles
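A minimal sketch of the first, unsupervised phase described in the abstract above, assuming standardised indicators, PCA and agglomerative clustering from scikit-learn; the toy numbers are placeholders, not census data.

```python
# A hedged sketch of the unsupervised phase, assuming standardised indicators,
# PCA and agglomerative clustering from scikit-learn; the numbers below are
# placeholders, not the 1996-1997 census data.
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Rows: farms; columns: indicators (e.g. % annual crops, % irrigated land,
# stocking rate)
X = np.array([[80, 70, 2.5], [75, 60, 2.0], [20, 5, 0.4],
              [15, 0, 0.3], [50, 30, 1.2], [55, 35, 1.0]])

scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(X))
labels = AgglomerativeClustering(n_clusters=3).fit_predict(scores)
print(labels)   # three candidate groups: intensive, semi-intensive, extensive
```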
5

Ghiasvand, Siavash. "Toward Resilience in High Performance Computing: A Prototype to Analyze and Predict System Behavior." Technische Universität Dresden, 2020. https://tud.qucosa.de/id/qucosa%3A72457.

Full text
Abstract:
Following the growth of high performance computing systems (HPC) in size and complexity, and the advent of faster and more complex Exascale systems, failures became the norm rather than the exception. Hence, the protection mechanisms need to be improved. The most de facto mechanisms such as checkpoint/restart or redundancy may also fail to support the continuous operation of future HPC systems in the presence of failures. Failure prediction is a new protection approach that is beneficial for HPC systems with a short mean time between failure. The failure prediction mechanism extends the existing protection mechanisms via the dynamic adjustment of the protection level. This work provides a prototype to analyze and predict system behavior using statistical analysis to pave the path toward resilience in HPC systems. The proposed anomaly detection method is noise-tolerant by design and produces accurate results with as little as 30 minutes of historical data. Machine learning models complement the main approach and further improve the accuracy of failure predictions up to 85%. The fully automatic unsupervised behavior analysis approach, proposed in this work, is a novel solution to protect future extreme-scale systems against failures.:1 Introduction 1.1 Background and Statement of the Problem 1.2 Purpose and Significance of the Study 1.3 Jam–e Jam: A System Behavior Analyzer 2 Review of the Literature 2.1 Syslog Analysis 2.2 Users and Systems Privacy 2.3 Failure Detection and Prediction 2.3.1 Failure Correlation 2.3.2 Anomaly Detection 2.3.3 Prediction Methods 2.3.4 Prediction Accuracy and Lead Time 3 Data Collection and Preparation 3.1 Taurus HPC Cluster 3.2 Monitoring Data 3.2.1 Data Collection 3.2.2 Taurus System Log Dataset 3.3 Data Preparation 3.3.1 Users and Systems Privacy 3.3.2 Storage and Size Reduction 3.3.3 Automation and Improvements 3.3.4 Data Discretization and Noise Mitigation 3.3.5 Cleansed Taurus System Log Dataset 3.4 Marking Potential Failures 4 Failure Prediction 4.1 Null Hypothesis 4.2 Failure Correlation 4.2.1 Node Vicinities 4.2.2 Impact of Vicinities 4.3 Anomaly Detection 4.3.1 Statistical Analysis (frequency) 4.3.2 Pattern Detection (order) 4.3.3 Machine Learning 4.4 Adaptive resilience 5 Results 5.1 Taurus System Logs 5.2 System-wide Failure Patterns 5.3 Failure Correlations 5.4 Taurus Failures Statistics 5.5 Jam-e Jam Prototype 5.6 Summary and Discussion 6 Conclusion and Future Works Bibliography List of Figures List of Tables Appendix A Neural Network Models Appendix B External Tools Appendix C Structure of Failure Metadata Databse Appendix D Reproducibility Appendix E Publicly Available HPC Monitoring Datasets Appendix F Glossary Appendix G Acronyms
APA, Harvard, Vancouver, ISO, and other styles
6

Li, Zongze. "Event Sequence Identification and Deep Learning Classification for Anomaly Detection and Predication on High-Performance Computing Systems." Thesis, University of North Texas, 2019. https://digital.library.unt.edu/ark:/67531/metadc1609172/.

Full text
Abstract:
High-performance computing (HPC) systems continue growing in both scale and complexity. These large-scale, heterogeneous systems generate tens of millions of log messages every day. Effective log analysis for understanding system behaviors and identifying system anomalies and failures is highly challenging. Existing log analysis approaches use line-by-line message processing. They are not effective for discovering subtle behavior patterns and their transitions, and thus may overlook some critical anomalies. In this dissertation research, I propose a system log event block detection (SLEBD) method which can extract the log messages that belong to a component or system event into an event block (EB) accurately and automatically. At the event level, we can discover new event patterns, the evolution of system behavior, and the interaction among different system components. To find critical event sequences, existing sequence mining methods are mostly based on the a priori algorithm which is compute-intensive and runs for a long time. I develop a novel, topology-aware sequence mining (TSM) algorithm which is efficient to generate sequence patterns from the extracted event block lists. I also train a long short-term memory (LSTM) model to cluster sequences before specific events. With the generated sequence pattern and trained LSTM model, we can predict whether an event is going to occur normally or not. To accelerate such predictions, I propose a design flow by which we can convert recurrent neural network (RNN) designs into register-transfer level (RTL) implementations which are deployed on FPGAs. Due to its high parallelism and low power, FPGA achieves a greater speedup and better energy efficiency compared to CPU and GPU according to our experimental results.
APA, Harvard, Vancouver, ISO, and other styles
7

Spruth, Wilhelm G. "Enterprise Computing." Universitätsbibliothek Leipzig, 2013. http://nbn-resolving.de/urn:nbn:de:bsz:15-qucosa-126859.

Full text
Abstract:
This book grew out of a two-semester course, "Enterprise Computing", which we taught together for many years as part of the Bachelor's and Master's programmes at the University of Leipzig. The book introduces the world of the mainframe and is intended to give the reader an introductory overview. Volume 1 is devoted to an introduction to z/OS, while Volume 2 deals with Internet integration. In addition, Volume 3 presents practical exercises under z/OS.
APA, Harvard, Vancouver, ISO, and other styles
8

Dargie, Waltenegus. "A Distributed Architecture for Computing Context in Mobile Devices." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2006. http://nbn-resolving.de/urn:nbn:de:swb:14-1151308912028-83795.

Full text
Abstract:
Context-aware computing aims at making mobile devices sensitive to the social and physical settings in which they are used. A necessary requirement to achieve this goal is to enable those devices to establish a shared understanding of the desired settings. Establishing a shared understanding entails the need to manipulate sensed data in order to capture a real-world situation wholly, conceptually, and meaningfully. Quite often, however, the data acquired from sensors can be inexact, incomplete, and/or uncertain. Inexact sensing arises mostly due to the inherent limitation of sensors to capture a real-world phenomenon precisely. Incompleteness is caused by the absence of a mechanism to capture certain real-world aspects; and uncertainty stems from the lack of knowledge about the reliability of the sensing sources, such as their sensing range, accuracy, and resolution. The thesis identifies a set of criteria for a context-aware system to capture dynamic real-world situations. On the basis of these criteria, a distributed architecture is designed, implemented and tested. The architecture consists of Primitive Context Servers, which abstract the acquisition of primitive contexts from physical sensors; Aggregators, to minimise error caused by inconsistent sensing and to gather correlated primitive contexts pertaining to a particular entity or situation; a Knowledge Base and an Empirical Ambient Knowledge Component, to model dynamic properties of entities with facts and beliefs; and a Composer, to reason about dynamic real-world situations on the basis of sensed data. Two additional components, namely the Event Handler and the Rule Organiser, are responsible for dynamically generating context rules by associating decision events (signifying a user's activity) with the context in which those decision events are produced. Context rules are essential elements with which the behaviour of mobile devices can be controlled and useful services can be provided. Four estimation and recognition schemes, namely Fuzzy Logic, Hidden Markov Models, Dempster-Shafer Theory of Evidence, and Bayesian Networks, are investigated, and their suitability for the implementation of the components of the architecture of the thesis is studied. Subsequently, fuzzy sets are chosen to model dynamic properties of entities; Dempster-Shafer's combination theory is chosen for aggregating primitive contexts; and Bayesian Networks are chosen to reason about a higher-level context, which is an abstraction of a real-world situation. A Bayesian Composer is implemented to demonstrate the capability of the architecture in dealing with uncertainty, in revising the belief of the Empirical Ambient Knowledge Component, in dealing with the dynamics of primitive contexts and in dynamically defining contextual states. The Composer was able to reason about the whereabouts of a person in the absence of any localisation sensor. Thermal, relative humidity and light intensity properties of a place, as well as time information, were employed to model and reason about a place. Consequently, depending on the variety and reliability of the sensors employed, the Composer was able to discriminate between rooms, corridors, a building, or an outdoor place with different degrees of uncertainty.
The Context-Aware E-Pad (CAEP) application is designed and implemented to demonstrate how applications can employ a higher-level context without the need to directly deal with its composition, and how a context rule can be generated by associating the activities (decision events) of a mobile user with the context in which the decision events are produced.
APA, Harvard, Vancouver, ISO, and other styles
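One of the fusion schemes named in the abstract above, Dempster's rule of combination, is simple enough to show in a few lines. The sketch below is an illustration (not the thesis code); mass functions are represented as dicts keyed by frozensets of hypotheses.

```python
# A hedged illustration (not the thesis code): Dempster's rule of combination
# for two mass functions represented as dicts keyed by frozensets of hypotheses.
def combine(m1, m2):
    combined, conflict = {}, 0.0
    for a, wa in m1.items():
        for b, wb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + wa * wb
            else:
                conflict += wa * wb            # mass assigned to the empty set
    return {s: w / (1.0 - conflict) for s, w in combined.items()}

# Two sensors giving evidence about the whereabouts of a person
m1 = {frozenset({'office'}): 0.6, frozenset({'office', 'corridor'}): 0.4}
m2 = {frozenset({'office'}): 0.5, frozenset({'corridor'}): 0.3,
      frozenset({'office', 'corridor'}): 0.2}
print(combine(m1, m2))
```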
9

Lehner, Wolfgang. "Energy-Efficient In-Memory Database Computing." Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2013. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-115547.

Full text
Abstract:
The efficient and flexible management of large datasets is one of the core requirements of modern business applications. Having access to consistent and up-to-date information is the foundation for operational, tactical, and strategic decision making. Within the last few years, the database community sparked a large number of extremely innovative research projects to push the envelope in the context of modern database system architectures. In this paper, we outline requirements and influencing factors to identify some of the hot research topics in database management systems. We argue that—even after 30 years of active database research—the time is right to rethink some of the core architectural principles and come up with novel approaches to meet the requirements of the next decades in data management. The sheer number of diverse and novel (e.g., scientific) application areas, the existence of modern hardware capabilities, and the need of large data centers to become more energy-efficient will be the drivers for database research in the years to come.
APA, Harvard, Vancouver, ISO, and other styles
10

Dridi, Mourad. "Vers le support des systèmes à criticité mixte sur des architectures NoC Design and multi-abstraction-level evaluation of a NoC router for mixed-criticality real-time systems, in ACM Journal on Emerging Technologies in Computing Systems 15(1), February 2019 DTFM: a flexible model for schedulability analysis of real-time applications on NoC-based architectures, in ACM SIGBED Review 14(4), January 2018 NORTH : Non-intrusive observation and run time verification of cyber-physical systems, in Ada User Journal 39(4), December 2018." Thesis, Brest, 2019. http://www.theses.fr/2019BRES0051.

Full text
Abstract:
This thesis addresses existing challenges associated with the implementation of mixed-criticality systems over NoC architectures. In such a system, we must ensure the timing constraints of critical applications while limiting the bandwidth reserved for them. In order to run mixed-criticality systems on NoC architectures, we have proposed several contributions in the form of a NoC router, a task and flow model, and a communications model. First, we propose a NoC router called DAS (Double Arbiter and Switching), designed to efficiently run mixed-criticality applications on a Network on Chip. To enforce MCS requirements, DAS implements automatic mode changes, two levels of preemption, two flow control techniques and two stages of arbitration. We have implemented DAS in the cycle-accurate SystemC-TLM simulator SHoC and evaluated it with several abstraction-level methods. Second, we propose DTFM, a Dual Task and Flow Model, to overcome the limitations of existing task and flow models. From the task model, the NoC model and the task mapping, DTFM computes the corresponding flow model. Finally, we propose a new NoC communication model called the Exact Communication Time Model (ECTM) in order to analyze the scheduling of periodic tasks exchanging messages over a NoC. We have implemented our approach in a real-time scheduling simulator called Cheddar.
APA, Harvard, Vancouver, ISO, and other styles
More sources

Books on the topic "ACM Computing Classification System"

1

Rutkowski, Leszek. New Soft Computing Techniques for System Modeling, Pattern Classification and Image Processing. Springer Berlin Heidelberg, 2004.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Rutkowski, Leszek. New Soft Computing Techniques for System Modeling, Pattern Classification and Image Processing. Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/978-3-540-40046-2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

ACM/IFIP/USENIX International Middleware Conference (8th 2007 Newport Beach, Calif.). Middleware 2007: ACM/IFIP/USENIX 8th International Middleware Conference, Newport Beach, CA, USA, November 26-30, 2007 : proceedings. Springer, 2007.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Office, General Accounting. Year 2000 computing challenge: Readiness of FBI's National Instant Criminal Background Check System can be improved : report to the Honorable Craig Thomas, U.S. Senate. The Office, 1999.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Proceedings of the 3rd ACM Workshop on System-Level Virtualization for High Performance Computing. Association for Computing Machinery, 2009.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

New Soft Computing Techniques for System Modeling, Pattern Classification and Image Processing (Studies in Fuzziness and Soft Computing). Springer, 2004.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Buyya, Rajkumar, and Mark Baker, eds. Grid Computing - GRID 2000: First IEEE/ACM International Workshop, Bangalore, India, December 17, 2000, Proceedings (Lecture Notes in Computer Science). Springer, 2001.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "ACM Computing Classification System"

1

Espínola, Moisés, José A. Piedra, Rosa Ayala, Luís Iribarne, Saturnino Leguizamón, and Massimo Menenti. "ACA Multiagent System for Satellite Image Classification." In Advances in Intelligent and Soft Computing. Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-28795-4_11.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Kurgalin, Sergei, and Sergei Borzunov. "Classification of Computing System Architectures." In A Practical Approach to High-Performance Computing. Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-27558-7_1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Zeelan Basha, C. M. A. K., T. Maruthi Padmaja, and G. N. Balaji. "Automatic X-ray Image Classification System." In Smart Computing and Informatics. Springer Singapore, 2017. http://dx.doi.org/10.1007/978-981-10-5547-8_5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Khandare, Anand, Mugdha Sawant, and Srushti Sankhe. "Analysis of Healthcare System Using Classification Algorithms." In Intelligent Computing and Networking. Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-3177-4_1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Petkov, Nikolay. "Biologically motivated self-organising image classification system." In High-Performance Computing and Networking. Springer Berlin Heidelberg, 1995. http://dx.doi.org/10.1007/bfb0046746.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Kang, Hyun-Kyu, Yi-Gyu Hwang, and Pum-Mo Ryu. "An Effective Document Classification System Based on Concept Probability Vector." In Content Computing. Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/978-3-540-30483-8_56.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Bereta, Michal, and Tadeusz Burczynski. "MAICS: Multilevel Artificial Immune Classification System." In Artificial Intelligence and Soft Computing – ICAISC 2006. Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11785231_59.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Choubey, Dilip Kumar, Sanchita Paul, and Vinay Kumar Dhandhania. "GA_NN: An Intelligent Classification System for Diabetes." In Advances in Intelligent Systems and Computing. Springer Singapore, 2018. http://dx.doi.org/10.1007/978-981-13-1595-4_2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Gonçalves, Márcio Leandro. "A Neural System for Remote Sensing Multispectral Image Classification." In Perspectives in Neural Computing. Springer London, 1999. http://dx.doi.org/10.1007/978-1-4471-0811-5_21.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Abd El-aziz, Atrab A., Ashraf Darwish, Diego Oliva, and Aboul Ella Hassanien. "Machine Learning for Apple Fruit Diseases Classification System." In Advances in Intelligent Systems and Computing. Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-44289-7_2.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "ACM Computing Classification System"

1

Altayeb, Moez, and Marco Zennaro. "Detection and Classification of High Energy Cosmic Rays Using TinyML." In 2024 IEEE/ACM Symposium on Edge Computing (SEC). IEEE, 2024. https://doi.org/10.1109/sec62691.2024.00048.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Reddy, K. Ramakrishna, G. Indrani, N. Pavan Kumar, and K. Vamshi Krishna. "Image-based System for Blood Group Classification." In 2025 International Conference on Intelligent Computing and Control Systems (ICICCS). IEEE, 2025. https://doi.org/10.1109/iciccs65191.2025.10985254.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Tome, Paulo. "Implementation of Data Science Techniques in the ACM Computing Classification System." In 2022 International Conference on Electrical, Computer, Communications and Mechatronics Engineering (ICECCME). IEEE, 2022. http://dx.doi.org/10.1109/iceccme55909.2022.9988283.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Kembellec, Gerald, Imad Saleh, and Catherine Sauvaget. "A model of cross language retrieval for IT domain papers through a map of ACM Computing Classification System." In 2009 International Conference on Multimedia Computing and Systems (ICMCS). IEEE, 2009. http://dx.doi.org/10.1109/mmcs.2009.5256709.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Wang, Run, Shirley Bian, Xiaofan Yu, Quanling Zhao, Le Zhang, and Tajana Rosing. "Poster: Resource-Efficient Environmental Sound Classification Using Hyperdimensional Computing." In SenSys '24: 22nd ACM Conference on Embedded Networked Sensor Systems. ACM, 2024. http://dx.doi.org/10.1145/3666025.3699427.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

He, Suining, Jiajie Tan, and S. H. Gary Chan. "Towards area classification for large-scale fingerprint-based system." In UbiComp '16: The 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing. ACM, 2016. http://dx.doi.org/10.1145/2971648.2971689.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Xia, Chengshuo, Tsubasa Maruyama, Haruki Toda, Mitsunori Tada, Koji Fujita, and Yuta Sugiura. "Knee Osteoarthritis Classification System Examination on Wearable Daily-Use IMU Layout." In UbiComp/ISWC '22: The 2022 ACM International Joint Conference on Pervasive and Ubiquitous Computing. ACM, 2022. http://dx.doi.org/10.1145/3544794.3558459.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Akash, B. S., Jathin Badam, KVLN Raju, and Dipanjan Chakraborty. "A Poster on Learnings from an Attempt to Build an NLP-based Fake News Classification system for Hindi." In COMPASS '21: ACM SIGCAS Conference on Computing and Sustainable Societies. ACM, 2021. http://dx.doi.org/10.1145/3460112.3471974.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Zhang, Alwin Yaoxian, Sean Shao Wei Lam, Nan Liu, Yan Pang, Ling Ling Chan, and Phua Hwee Tang. "Development of a Radiology Decision Support System for the Classification of MRI Brain Scans." In 2018 IEEE/ACM 5th International Conference on Big Data Computing Applications and Technologies (BDCAT). IEEE, 2018. http://dx.doi.org/10.1109/bdcat.2018.00021.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Schmidt, Brian, Dionysios Kountanis, and Ala Al-Fuqaha. "A Biologically-Inspired Approach to Network Traffic Classification for Resource-Constrained Systems." In 2014 IEEE/ACM International Symposium on Big Data Computing (BDC). IEEE, 2014. http://dx.doi.org/10.1109/bdc.2014.13.

Full text
APA, Harvard, Vancouver, ISO, and other styles
