Academic literature on the topic "Cluster analysis. Pattern recognition systems. Machine learning"

Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles


Consult the topical lists of articles, books, theses, conference proceedings, and other scholarly sources on the topic "Cluster analysis. Pattern recognition systems. Machine learning".

Next to every source in the list of references there is an "Add to bibliography" button. Press this button, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication in PDF format and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Cluster analysis. Pattern recognition systems. Machine learning"

1

Zimovets, V. I., S. V. Shamatrin, D. E. Olada, and N. I. Kalashnykova. "Functional Diagnostic System for Multichannel Mine Lifting Machine Working in Factor Cluster Analysis Mode". Journal of Engineering Sciences 7, no. 1 (2020): E20–E27. http://dx.doi.org/10.21272/jes.2020.7(1).e4.

Full text
Abstract
The primary way to increase the reliability of automated control systems for complex electromechanical machines is to apply intelligent information technologies for the analysis of diagnostic information directly in the operating mode. Therefore, creating the basics of information synthesis for a functional diagnosis system (FDS) based on machine learning and pattern recognition is a topical task. In this case, the synthesized FDS must be adaptive to arbitrary initial conditions of the technological process and practically invariant to the multidimensionality of the space of diagnostic features and to the alphabet of recognition classes, which characterize the possible technical states of the units and devices of the machine. In addition, an essential feature of the FDS is the ability to retrain by increasing the power of the alphabet of recognition classes. In the article, the information synthesis of the FDS is performed within the framework of information-extreme intelligent data analysis technology, which is based on maximizing the information capacity of the system in the process of machine learning. The idea of factor cluster analysis is realized by forming an additional training matrix of unclassified feature vectors of a new recognition class obtained during the operation of the FDS directly in the operating mode. The proposed algorithm makes it possible to perform factor cluster analysis in the case of structured feature vectors of several recognition classes. In this case, the additional training matrices of the corresponding recognition classes are formed by the agglomerative method of cluster analysis using the k-means procedure. The proposed method of factor cluster analysis is implemented on the example of the information synthesis of the FDS of a multi-rope mine lifting machine. Keywords: information-extreme intelligent technology, a system of functional diagnostics, multichannel mine lifting machine, machine learning, factor cluster analysis.
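The formation of additional training matrices via k-means described in this abstract can be sketched roughly as follows; this is an illustrative reconstruction under assumed data shapes, not the authors' implementation, and the function name is invented.

```python
import numpy as np

def form_additional_training_matrices(unclassified, n_new_classes, n_iter=50, seed=0):
    """Partition unclassified feature vectors observed in the operating mode
    into candidate training matrices for new recognition classes via k-means."""
    rng = np.random.default_rng(seed)
    X = np.asarray(unclassified, dtype=float)
    # initialize centroids from distinct input vectors
    centroids = X[rng.choice(len(X), size=n_new_classes, replace=False)]
    for _ in range(n_iter):
        # assign each vector to its nearest centroid
        labels = np.argmin(((X[:, None, :] - centroids[None, :, :]) ** 2).sum(-1), axis=1)
        # recompute centroids; keep the old centroid if a cluster empties
        for k in range(n_new_classes):
            if np.any(labels == k):
                centroids[k] = X[labels == k].mean(axis=0)
    # one additional training matrix per candidate recognition class
    return [X[labels == k] for k in range(n_new_classes)]
```

Each returned matrix plays the role of the additional training matrix for one candidate new recognition class, on which the system would then be retrained.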
2

Vankayalapati, Revathi, Kalyani Balaso Ghutugade, Rekha Vannapuram, and Bejjanki Pooja Sree Prasanna. "K-Means Algorithm for Clustering of Learners Performance Levels Using Machine Learning Techniques". Revue d'Intelligence Artificielle 35, no. 1 (February 28, 2021): 99–104. http://dx.doi.org/10.18280/ria.350112.

Full text
Abstract
Data clustering is the process of grouping objects so that objects within the same group are more similar to one another than to objects in other groups. In this paper, k-means clustering is used to assess student performance. Machine learning is applied across many domains, including education, pattern recognition, sports, and industrial applications, and its significance in education grows with the students' futures at stake. Data collection in education is very useful, as data volumes in the education system grow each day. Data mining in higher education is relatively new, but its significance grows with the expanding databases. There are several ways to assess student success, and k-means is among the best and most effective. The hidden information in the database is extracted using data mining to improve student outcomes; the decision tree is another way to predict student success. In recent years, educational institutions have faced the challenge of growing data volumes and of using that data to improve efficiency, so that better decision-making can be achieved. Clustering is one of the most important methods for the analysis of data sets. This study uses cluster analysis to partition students into classes according to their features, and the unsupervised k-means algorithm is discussed. Educational data mining studies the knowledge available in the field of education in order to reveal hidden, significant, and useful information. The proposed model applies k-means clustering to analyse learner performance, so that student outcomes and prospects can be strengthened. The results show that the k-means algorithm is useful for grouping students with similar performance features.
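The k-means procedure this abstract applies to learner performance can be sketched as below; the scores, the number of performance levels, and the quantile-based seeding are assumptions for illustration, not taken from the paper.

```python
import numpy as np

def kmeans_performance_levels(scores, k=3, n_iter=100):
    """Group one-dimensional student scores into k performance levels.
    Centroids are seeded at evenly spaced quantiles so the run is deterministic."""
    x = np.sort(np.asarray(scores, dtype=float))
    centroids = np.quantile(x, np.linspace(0.1, 0.9, k))
    for _ in range(n_iter):
        # assignment step: each score joins its nearest centroid
        labels = np.argmin(np.abs(x[:, None] - centroids[None, :]), axis=1)
        # update step: move each centroid to the mean of its group
        centroids = np.array([x[labels == j].mean() if np.any(labels == j)
                              else centroids[j] for j in range(k)])
    return centroids, [x[labels == j].tolist() for j in range(k)]
```

With k=3 the groups correspond naturally to low, medium, and high performance levels.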
3

Rudas, Imre J. "Intelligent Engineering Systems". Journal of Advanced Computational Intelligence and Intelligent Informatics 2, no. 3 (June 20, 1998): 69–71. http://dx.doi.org/10.20965/jaciii.1998.p0069.

Full text
Abstract
Building intelligent systems has been one of the great challenges since the early days of human culture. From the second half of the 18th century, two revolutionary changes played the key role in technical development, hence in creating engineering and intelligent engineering systems. The industrial revolution was made possible through technical advances, and muscle power was replaced by machine power. The information revolution of our time, in turn, can be characterized as the replacement of brain power by machine intelligence. The technique used to build engineering systems and replace muscle power can be termed "Hard Automation"1) and deals with industrial processes that are fixed and repetitive in nature. In hard automation, the system configuration and the operations are fixed and cannot be changed without considerable down-time and cost. It can be used, however, particularly in applications calling for fast, accurate operation, when manufacturing large batches of the same product. The "intelligent" area of automation is "Soft Automation," which involves the flexible, intelligent operation of an automated process. In flexible automation, the task is programmable and a work cell must be reconfigured quickly to accommodate a product change. It is particularly suitable for plant environments in which a variety of products is manufactured in small batches. Processes in flexible automation may have unexpected or previously unknown conditions, and would require a certain degree of "machine" intelligence to handle them. The term machine intelligence has been changing with time and is machine-specific, so intelligence in this context still remains a more or less mysterious phenomenon. Following Prof. Lotfi A. Zadeh,2) we consider a system intelligent if it has a high machine intelligence quotient (MIQ). As Prof.
Zadeh stated, "MIQ is a measure of intelligence of man-made systems," and can be characterized by its well-defined dimensions, such as planning, decision making, problem solving, learning, reasoning, natural language understanding, speech recognition, handwriting recognition, pattern recognition, diagnostics, and execution of high-level instructions. Engineering practice often involves complex systems having multiple-variable and multiple-parameter models, sometimes with nonlinear coupling. The conventional approaches for understanding and predicting the behavior of such systems based on analytical techniques can prove to be inadequate, even at the initial stages of setting up an appropriate mathematical model. The computational environment used in such an analytical approach is sometimes too categorical and inflexible to cope with the intricacy and complexity of real-world industrial systems. It turns out that, in dealing with such systems, one must face a high degree of uncertainty and tolerate great imprecision. Trying to increase precision can be very costly. In the face of the difficulties above, Prof. Zadeh proposes a different approach for machine intelligence. He separates Artificial Intelligence, based on hard computing techniques, from Computational Intelligence, based on soft computing techniques. Hard computing is oriented toward the analysis and design of physical processes and systems, and is characterized by precision, formality, and categorization. It is based on binary logic, crisp systems, numerical analysis, probability theory, differential equations, functional analysis, mathematical programming, approximation theory, and crisp software. Soft computing is oriented toward the analysis and design of intelligent systems.
It is based on fuzzy logic, artificial neural networks, and probabilistic reasoning, including genetic algorithms, chaos theory, and parts of machine learning, and is characterized by approximation and dispositionality. In hard computing, imprecision and uncertainty are undesirable properties. In soft computing, the tolerance for imprecision and uncertainty is exploited to achieve an acceptable solution at low cost, tractability, and a high MIQ. Prof. Zadeh argues that soft rather than hard computing should be viewed as the foundation of real machine intelligence. A center, the Berkeley Initiative in Soft Computing (BISC), has been established at the University of California, Berkeley, under his direction, and devotes its activities to this concept.3) Soft computing, as he explains,2) is a consortium of methodologies providing a foundation for the conception and design of intelligent systems, and is aimed at formalizing the remarkable human ability to make rational decisions in an uncertain, imprecise environment. The guiding principle of soft computing, given by Prof. Zadeh,2) is: exploit the tolerance for imprecision, uncertainty, and partial truth to achieve tractability, robustness, low solution cost, and better rapport with reality. Fuzzy logic is mainly concerned with imprecision and approximate reasoning, neurocomputing mainly with learning and curve fitting, genetic computation mainly with searching and optimization, and probabilistic reasoning mainly with uncertainty and propagation of belief. The constituents of soft computing are complementary rather than competitive. Experience gained over the past decade indicates that it can be more effective to use them in combination rather than exclusively. Based on this approach, machine intelligence, including artificial intelligence and computational intelligence (soft computing techniques), is one pillar of intelligent engineering systems.
Hundreds of new results in this area are published in journals and international conference proceedings. One such conference, organized in Budapest, Hungary, on September 15-17, 1997, was titled 'IEEE International Conference on Intelligent Engineering Systems 1997' (INES'97), sponsored by the IEEE Industrial Electronics Society, the IEEE Hungary Section, Bánki Donát Polytechnic, Hungary, and the National Committee for Technological Development, Hungary, in technical cooperation with the IEEE Robotics & Automation Society. It had around 100 participants from 29 countries. This special issue features papers selected from those presented during the conference. It should be pointed out that these papers are revised and expanded versions of those presented. The first paper discusses an intelligent control system of an automated guided vehicle used in container terminals. Container terminals, as the center of cargo transportation, play a key role in everyday cargo handling. Learning control has been applied to maintaining the vehicle's course and enabling it to stop at a designated location. Speed control uses conventional control. System performance was evaluated by simulation, and performance tests are slated for a test vehicle. The second paper presents a real-time camera-based system designed for gaze tracking focused on human-computer communication. The objective was to equip computer systems with a tool that provides visual information about the user. The system detects the user's presence, then locates and tracks the face, nose, and both eyes. Detection is enabled by combining image processing techniques and pattern recognition. The third paper discusses the application of soft computing techniques to solve modeling and control problems in system engineering. After the design of classical PID and fuzzy PID controllers for nonlinear systems with an approximately known dynamic model, the neural control of a SCARA robot is considered.
Fuzzy control is discussed for a special class of MIMO nonlinear systems, and the method of Wang is generalized for such systems. The next paper describes fuzzy and neural network algorithms for word frequency prediction in document filtering. The two techniques presented are compared, and an alternative neural network algorithm is discussed. The fifth paper highlights the theory of common-sense knowledge in representation and reasoning. A connectionist model is proposed for common-sense knowledge representation and reasoning, and experimental results using this method are presented. The next paper introduces an expert consulting system that employs software agents to manage distributed knowledge sources. These individual software agents solve users' problems either by themselves or through mutual cooperation. The last paper presents a methodology for creating and applying a generic manufacturing process model for mechanical parts. Based on the product model and other up-to-date approaches, the proposed model involves all possible manufacturing process variants for a cluster of manufacturing tasks. The application involves a four-level model structure and a Petri net representation of manufacturing process entities. Creation and evaluation of model entities and representation of the knowledge built into the shape and manufacturing process models are emphasised. The proposed process model is applied in manufacturing process planning and production scheduling. References: 1) C. W. De Silva, "Automation Intelligence," Engineering Applications of Artificial Intelligence, 7-5, 471-477, (1994). 2) L. A. Zadeh, "Fuzzy Logic, Neural Networks and Soft Computing," NATO Advanced Studies Institute on Soft Computing and Its Application, Antalya, Turkey, (1996). 3) L. A. Zadeh, "Berkeley Initiative in Soft Computing," IEEE Industrial Electronics Society Newsletter, 41-3, 8-10, (1994).
4

KODRATOFF, Y., and S. MOSCATELLI. "MACHINE LEARNING FOR OBJECT RECOGNITION AND SCENE ANALYSIS". International Journal of Pattern Recognition and Artificial Intelligence 08, no. 01 (February 1994): 259–304. http://dx.doi.org/10.1142/s0218001494000139.

Full text
Abstract
Learning is a critical research field for autonomous computer vision systems. It can bring solutions to the knowledge acquisition bottleneck of image understanding systems. Recent developments of machine learning for computer vision are reported in this paper. We describe several different approaches for learning at different levels of the image understanding process, including learning 2-D shape models, learning strategic knowledge for optimizing model matching, learning for adaptive target recognition systems, knowledge acquisition of constraint rules for labelling and automatic parameter optimization for vision systems. Each approach will be commented on and its strong and weak points will be underlined. In conclusion we will suggest what could be the “ideal” learning system for vision.
Los estilos APA, Harvard, Vancouver, ISO, etc.
5

Anam, Khairul, and Adel Al-Jumaily. "Optimized Kernel Extreme Learning Machine for Myoelectric Pattern Recognition". International Journal of Electrical and Computer Engineering (IJECE) 8, no. 1 (February 1, 2018): 483. http://dx.doi.org/10.11591/ijece.v8i1.pp483-496.

Full text
Abstract
Myoelectric pattern recognition (MPR) is used to detect a user's intention to achieve a smooth interaction between human and machine. The performance of MPR is influenced by the features extracted and the classifier employed. Kernel extreme learning machines, especially the radial basis function extreme learning machine (RBF-ELM), have emerged as potential classifiers for MPR. However, RBF-ELM should be optimized to work efficiently. This paper proposes an optimization of the RBF-ELM parameters using a hybridization of particle swarm optimization (PSO) and a wavelet function. The proposed systems are employed to classify finger movements of amputees and able-bodied subjects using electromyography signals. The experimental results show that the accuracy of the optimized RBF-ELM is 95.71% and 94.27% for the healthy subjects and the amputees, respectively. Meanwhile, optimization using PSO alone attained average accuracies of 95.53% and 92.55% for the healthy subjects and the amputees, respectively. The experimental results also show that SW-RBF-ELM achieves better accuracy than other well-known classifiers such as the support vector machine (SVM), linear discriminant analysis (LDA), and k-nearest neighbor (kNN).
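The closed-form training of a radial basis function kernel ELM of the sort optimized in this paper can be sketched as follows. The solution beta = (I/C + K)^(-1) T is standard kernel-ELM theory; the gamma and C values and the toy data are placeholders, and the PSO/wavelet parameter search from the paper is omitted.

```python
import numpy as np

def rbf_kernel(A, B, gamma):
    # pairwise squared Euclidean distances -> Gaussian (RBF) kernel
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def train_kernel_elm(X, y, gamma=1.0, C=100.0):
    """Kernel ELM: beta = (I/C + K)^-1 T, with one-hot class targets T."""
    X = np.asarray(X, dtype=float)
    classes = np.unique(y)
    T = (np.asarray(y)[:, None] == classes[None, :]).astype(float)
    K = rbf_kernel(X, X, gamma)
    beta = np.linalg.solve(K + np.eye(len(X)) / C, T)
    return X, beta, gamma, classes

def predict_kernel_elm(model, Xnew):
    X, beta, gamma, classes = model
    scores = rbf_kernel(np.asarray(Xnew, dtype=float), X, gamma) @ beta
    return classes[np.argmax(scores, axis=1)]
```

In the paper, PSO (hybridized with a wavelet function) would search over gamma and C; here they are simply fixed.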
6

Nayyar, Anand, Pijush Kanti Dutta Pramankit, and Rajni Mohana. "Introduction to the Special Issue on Evolving IoT and Cyber-Physical Systems: Advancements, Applications, and Solutions". Scalable Computing: Practice and Experience 21, no. 3 (August 1, 2020): 347–48. http://dx.doi.org/10.12694/scpe.v21i3.1568.

Full text
Abstract
Internet of Things (IoT) is regarded as a next-generation wave of Information Technology (IT) after the widespread emergence of the Internet and mobile communication technologies. IoT supports information exchange and networked interaction of appliances, vehicles, and other objects, making sensing and actuation possible in a low-cost and smart manner. On the other hand, cyber-physical systems (CPS) are described as engineered systems built upon the tight integration of cyber entities (e.g., computation, communication, and control) and physical things (natural and man-made systems governed by the laws of physics). The IoT and CPS are not isolated technologies. Rather, it can be said that IoT is the base or enabling technology for CPS, and CPS is considered the grown-up development of IoT, completing the IoT notion and vision. Both are merged into a closed loop, providing mechanisms for conceptualizing and realizing all aspects of the networked composed systems that are monitored and controlled by computing algorithms and are tightly coupled among users and the Internet. That is, the hardware and the software entities are intertwined, and they typically function on different time- and location-based scales. In fact, the linking between the cyber and the physical world is enabled by IoT (through sensors and actuators). CPS, which includes traditional embedded and control systems, is expected to be transformed by the evolving and innovative methodologies and engineering of IoT. Application areas of IoT and CPS include smart buildings, smart transport, automated vehicles, smart cities, smart grids, smart manufacturing, smart agriculture, smart healthcare, smart supply chains and logistics, etc. Though CPS and IoT have significant overlaps, they differ in terms of engineering aspects.
Engineering IoT systems revolves around uniquely identifiable, internet-connected devices and embedded systems, whereas engineering CPS requires a strong emphasis on the relationship between the computational aspects (complex software) and the physical entities (hardware). Engineering CPS is challenging because there is no defined and fixed boundary and relationship between the cyber and physical worlds. In CPS, diverse constituent parts are composed and made to collaborate to create unified systems with global behaviour. These systems need to be ensured in terms of dependability, safety, security, efficiency, and adherence to real-time constraints. Hence, designing CPS requires knowledge of multidisciplinary areas such as sensing technologies, distributed systems, pervasive and ubiquitous computing, real-time computing, computer networking, control theory, signal processing, embedded systems, etc. CPS, along with the continuously evolving IoT, has posed several challenges. For example, the enormous amount of data collected from physical things makes Big Data management and analytics difficult, including data normalization, data aggregation, data mining, pattern extraction, and information visualization. Similarly, future IoT and CPS need standardized abstractions and architectures that will allow the modular design and engineering of IoT and CPS in global and synergetic applications. Another challenging concern of IoT and CPS is the security and reliability of the components and systems. Although IoT and CPS have attracted the attention of the research communities and several ideas and solutions have been proposed, there are still huge possibilities for innovative propositions to make the IoT and CPS vision successful.
The major challenges and research scopes include system design and implementation, computing and communication, system architecture and integration, application-based implementations, fault tolerance, designing efficient algorithms and protocols, availability and reliability, security and privacy, energy-efficiency and sustainability, etc. It is our great privilege to present Volume 21, Issue 3 of Scalable Computing: Practice and Experience. We received 30 research papers, out of which 14 papers were selected for publication. The objective of this special issue is to explore and report recent advances and disseminate state-of-the-art research related to IoT, CPS, and the enabling and associated technologies. The special issue presents new dimensions of research to researchers and industry professionals with regard to IoT and CPS. Vivek Kumar Prasad and Madhuri D Bhavsar, in the paper titled "Monitoring and Prediction of SLA for IoT based Cloud", described mechanisms for monitoring, using the concept of reinforcement learning, and for prediction of cloud resources, which form critical parts of cloud expertise in support of controlling and evolving IT resources, implemented using LSTM. The proper utilization of the resources will generate revenue for the provider and also increase the trust factor of the provider of cloud services. For experimental analysis, four parameters were used, i.e., CPU utilization, disk read/write throughput, and memory utilization. Kasture et al., in the paper titled "Comparative Study of Speaker Recognition Techniques in IoT Devices for Text Independent Negative Recognition", compared the performance of features used in state-of-the-art speaker recognition models and analysed variants of Mel frequency cepstrum coefficients (MFCC), predominantly used in feature extraction, which can be further incorporated and used in various smart devices.
Mahesh Kumar Singh and Om Prakash Rishi, in the paper titled "Event Driven Recommendation System for E-Commerce using Knowledge based Collaborative Filtering Technique", proposed a novel system that uses a knowledge base generated from a knowledge graph to identify the domain knowledge of users, items, and the relationships among these; a knowledge graph is a labelled multidimensional directed graph that represents the relationships among the users and the items. The proposed approach uses nearly 100 percent of users' participation in the form of activities during navigation of the web site. Thus, the system captures the users' interests in a way that is beneficial for both seller and buyer. The proposed system is compared with baseline methods in the area of recommendation systems using three parameters, precision, recall, and NDGA, through online and offline evaluation studies with user data, and it is observed that the proposed system is better than the other baseline systems. Benbrahim et al., in the paper titled "Deep Convolutional Neural Network with TensorFlow and Keras to Classify Skin Cancer", proposed a novel classification model to classify skin tumours in images using a deep learning methodology; the proposed system was tested on the HAM10000 dataset comprising 10,015 dermatoscopic images, and the results show an accuracy on the order of 94.06% on the validation set and 93.93% on the test set. Devi B et al., in the paper titled "Deadlock Free Resource Management Technique for IoT-Based Post Disaster Recovery Systems", proposed a new class of techniques that do not perform stringent testing before allocating the resources but still ensure that the system is deadlock-free and the overhead is minimal. The proposed technique suggests reserving a portion of the resources to ensure that no deadlock would occur. The correctness of the technique is proved in the form of theorems.
The average turnaround time is approximately 18% lower for the proposed technique than for Banker's algorithm, with an optimal overhead of O(m). Deep et al., in the paper titled "Access Management of User and Cyber-Physical Device in DBAAS According to Indian IT Laws Using Blockchain", proposed a novel blockchain solution to track the activities of employees managing the cloud. Employee authentication and authorization are managed through the blockchain server, and user-authentication data is stored in the blockchain. The proposed work assists cloud companies in having better control over their employees' activities, thus helping to prevent insider attacks on users and cyber-physical devices. Sumit Kumar and Jaspreet Singh, in the paper titled "Internet of Vehicles (IoV) over VANETS: Smart and Secure Communication using IoT", gave a detailed description of the Internet of Vehicles (IoV) with current applications, architectures, communication technologies, routing protocols, and different issues. The researchers also elaborated on research challenges and the trade-off between security and privacy in the area of IoV. Deore et al., in the paper titled "A New Approach for Navigation and Traffic Signs Indication Using Map Integrated Augmented Reality for Self-Driving Cars", proposed a new approach to supplement the perception technology used in self-driving cars. The proposed approach uses augmented reality to create and augment artificial objects for navigational signs and traffic signals based on the vehicle's location. This approach helps navigate the vehicle even if the road infrastructure does not have very good sign indications and markings. The approach was tested locally by creating a local navigational system and a smartphone-based augmented reality app; it performed better than the conventional method, as the objects were clearer in the frame, which made it easier for the object detector to detect them. Bhardwaj et al.
in the paper titled "A Framework to Systematically Analyse the Trustworthiness of Nodes for Securing IoV Interactions", surveyed the literature on IoV and trust and proposed a hybrid trust model that separates the malicious and trusted nodes to secure the interactions of vehicles in IoV. To test the model, simulations were conducted with varied threshold values. The results show that the PDR of a trusted node is 0.63, which is higher than the PDR of a malicious node at 0.15; on the basis of PDR, the number of available hops, and trust dynamics, the malicious nodes are identified and discarded. Saniya Zahoor and Roohie Naaz Mir, in the paper titled "A Parallelization Based Data Management Framework for Pervasive IoT Applications", highlighted recent studies and related information on data management for pervasive IoT applications with limited resources. The paper also proposes a parallelization-based data management framework for resource-constrained pervasive applications of IoT. The proposed framework is compared with the sequential approach through simulations and empirical data analysis. The results show an improvement in the energy, processing, and storage requirements for the processing of data on the IoT device in the proposed framework compared to the sequential approach. Patel et al., in the paper titled "Performance Analysis of Video ON-Demand and Live Video Streaming Using Cloud Based Services", presented a review of video analysis over the LVS & VoDS video applications. The researchers compared different messaging brokers, which help deliver each frame in a distributed pipeline, to analyse the impact of two message brokers on video analysis for achieving LVS & VoDS using AWS Elemental services. In addition, the researchers analysed the Kafka configuration parameters for reliability in full-service mode.
Saniya Zahoor and Roohie Naaz Mir, in the paper titled "Design and Modeling of Resource-Constrained IoT Based Body Area Networks", presented the design and modeling of a resource-constrained BAN system and discussed various scenarios of BAN in the context of resource constraints. The researchers also proposed an Advanced Edge Clustering (AEC) approach to manage resources such as the energy, storage, and processing of BAN devices while performing real-time data capture of critical health parameters and detection of abnormal patterns. The AEC approach is compared with the Stable Election Protocol (SEP) through simulations and empirical data analysis. The results show an improvement in the energy, processing time, and storage requirements for the processing of data on BAN devices in AEC compared to SEP. Neelam Saleem Khan and Mohammad Ahsan Chishti, in the paper titled "Security Challenges in Fog and IoT, Blockchain Technology and Cell Tree Solutions: A Review", outlined major authentication issues in IoT, mapped their existing solutions, and further tabulated Fog and IoT security loopholes. Furthermore, this paper presents blockchain, a decentralized distributed technology, as one of the solutions for authentication issues in IoT. In addition, the researchers discussed the strength of blockchain technology and the work done in this field, its adoption in the COVID-19 fight, and tabulated various challenges in blockchain technology. The researchers also proposed the Cell Tree architecture as another solution to address some of the security issues in IoT, outlined its advantages over blockchain technology, and tabulated some future courses to stir attempts in this area. Bhadwal et al., in the paper titled "A Machine Translation System from Hindi to Sanskrit Language Using Rule Based Approach", proposed a rule-based machine translation system to bridge the language barrier between Hindi and Sanskrit by converting any text in Hindi to Sanskrit.
The results are produced in the form of two confusion matrices, wherein a total of 50 random sentences and 100 tokens (Hindi words or phrases) were taken for system evaluation. The semantic evaluation of the 100 tokens produces an accuracy of 94%, while the pragmatic analysis of the 50 sentences produces an accuracy of around 86%. Hence, the proposed system can be used to understand the whole translation process and can further be employed as a tool for learning as well as teaching. Further, this application can be embedded in local communication-based assisting Internet of Things (IoT) devices like Alexa or Google Assistant. Anshu Kumar Dwivedi and A. K. Sharma, in the paper titled "NEEF: A Novel Energy Efficient Fuzzy Logic Based Clustering Protocol for Wireless Sensor Network", proposed a deterministic novel energy-efficient fuzzy-logic-based clustering protocol (NEEF) that considers primary and secondary factors in the fuzzy logic system while selecting cluster heads. After the selection of cluster heads, non-cluster-head nodes use fuzzy logic for the prudent selection of their cluster head for cluster formation. NEEF is simulated and compared with two recent state-of-the-art protocols, namely SCHFTL and DFCR, under two scenarios. Simulation results unveil better performance through load balancing and improvements in terms of stability period, packets forwarded to the base station, average energy, and extended lifetime.
7

Зимовець, Вікторія Ігорівна, Олександр Сергійович Приходченко and Микита Ігорович Мироненко. "ІНФОРМАЦІЙНО-ЕКСТРЕМАЛЬНИЙ КЛАСТЕР-АНАЛІЗ ВХІДНИХ ДАНИХ ПРИ ФУНКЦІОНАЛЬНОМУ ДІАГНОСТУВАННІ". RADIOELECTRONIC AND COMPUTER SYSTEMS, no. 4 (December 25, 2019): 105–15. http://dx.doi.org/10.32620/reks.2019.4.12.

Full text
Abstract
The study aims to increase the functional efficiency of machine learning for the functional diagnosis system of a multi-rope shaft hoist through cluster analysis of diagnostic features. To achieve this goal, it was necessary to solve the following tasks: formalize the formulation of the problem of information synthesis of a trainable functional diagnosis system operating in the cluster-analysis mode of diagnostic features; propose a categorical model and, on its basis, develop an algorithm for information-extreme cluster analysis of diagnostic features in the process of information-extreme machine learning of the functional diagnosis system; defuzzify the input fuzzy data by optimizing the geometric parameters of hyperspherical containers of the recognition classes that characterize the possible technical conditions of the diagnostic object; and develop an algorithm and implement it on the example of information synthesis of the functional diagnosis system of a multi-rope mine hoisting machine. The object of the study is the processes of information synthesis of a trainable functional diagnosis system integrated into the automated control system of a multi-rope mine hoisting machine. The subject of the study is the categorical models and the information-extreme machine learning algorithm of a functional diagnosis system that operates in the cluster-analysis mode of diagnostic features and constructs decision rules. The research methods are based on the ideas and methods of information-extreme intellectual data analysis technology, an information-theoretic approach to assessing the functional effectiveness of machine learning, and the geometric approach of pattern recognition theory.
As a result, the following results were obtained: a categorical model was proposed, and on its basis an algorithm for information-extreme machine learning of the functional diagnosis system of a multi-rope mine hoist was developed and implemented, which makes it possible to automatically generate an input classified fuzzy training matrix and thus significantly reduces the time and material costs of creating the input mathematical description. This result was achieved by cluster analysis of structured vectors of diagnostic features obtained from archival data for three recognition classes using the k-means procedure. As the criterion for optimizing the machine learning parameters, a modified Kullback measure was considered, in the form of a functional of the accuracy characteristics of diagnostic decisions and of distance criteria for the proximity of recognition classes. Based on the optimal geometric parameters of the recognition-class containers obtained during machine learning, decision rules were constructed that made it possible to classify the vectors of diagnostic features of the recognition classes with a rather high total probability of correct diagnostic decisions. Conclusions. The scientific novelty of the results obtained consists in the development of a new method for the information synthesis of the functional diagnosis system of a multi-rope mine hoisting machine operating in the cluster-analysis mode, which makes it possible to automatically form an input classified fuzzy training matrix with its subsequent defuzzification in the process of information-extreme machine learning of the system.
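The k-means step used here to split archival feature vectors into per-class training matrices can be sketched as follows; the toy data, the value of k and the deterministic initialisation are illustrative assumptions, not the paper's setup.

```python
def kmeans(vectors, k, iters=50):
    """Plain k-means; deterministic init from the first/last vectors."""
    centers = [vectors[0], vectors[-1]] if k == 2 else vectors[:k]
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        # Assignment: each vector goes to the nearest centre
        # by squared Euclidean distance.
        clusters = [[] for _ in range(k)]
        for v in vectors:
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(v, centers[c])))
            clusters[j].append(v)
        # Update: each centre becomes the mean of its cluster.
        for j, members in enumerate(clusters):
            if members:
                centers[j] = tuple(sum(x) / len(members) for x in zip(*members))
    return centers, clusters

# Two well-separated groups of 2-D "diagnostic feature" vectors.
data = [(0.1, 0.2), (0.2, 0.1), (0.15, 0.15), (5.0, 5.1), (5.2, 4.9), (5.1, 5.0)]
centers, training_matrices = kmeans(data, k=2)
print([len(m) for m in training_matrices])  # each recognition class gets its own matrix
```

Each resulting cluster would then serve as the classified training matrix of one recognition class.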
8

Wolff, J. Gerard. "The Potential of the SP System in Machine Learning and Data Analysis for Image Processing". Big Data and Cognitive Computing 5, no. 1 (February 23, 2021): 7. http://dx.doi.org/10.3390/bdcc5010007.

Full text
Abstract
This paper aims to describe how pattern recognition and scene analysis may with advantage be viewed from the perspective of the SP system (meaning the SP theory of intelligence and its realisation in the SP computer model (SPCM), both described in an appendix), and the strengths and potential of the system in those areas. In keeping with evidence for the importance of information compression (IC) in human learning, perception, and cognition, IC is central in the structure and workings of the SPCM. Most of that IC is achieved via the powerful concept of SP-multiple-alignment, which is largely responsible for the AI-related versatility of the system. With examples from the SPCM, the paper describes: how syntactic parsing and pattern recognition may be achieved, with corresponding potential for visual parsing and scene analysis; how those processes are robust in the face of errors in input data; how, in keeping with what people do, the SP system can "see" things in its data that are not objectively present; how the system can recognise things at multiple levels of abstraction and via part-whole hierarchies, and via an integration of the two; and how the system has potential for the creation of a 3D construct from pictures of a 3D object taken from different viewpoints, and for the recognition of 3D entities.
9

Samiappan, Dhanalakshmi, S. Latha, T. Rama Rao, Deepak Verma and CSA Sriharsha. "Enhancing Machine Learning Aptitude Using Significant Cluster Identification for Augmented Image Refining". International Journal of Pattern Recognition and Artificial Intelligence 34, no. 09 (December 12, 2019): 2051009. http://dx.doi.org/10.1142/s021800142051009x.

Full text
Abstract
Enhancing an image to remove noise while preserving its useful features and edges is one of the most important tasks in image analysis. In this paper, Significant Cluster Identification for Maximum Edge Preservation (SCI-MEP), which works in parallel with clustering algorithms and improves the efficiency of machine learning, is proposed. Affinity propagation (AP) is used as the base method to obtain clusters from a learnt dictionary, with an adaptive window selection; the clusters are then refined using SCI-MEP to preserve the semantic components of the image. Since only the significant clusters are worked upon, the computational time drastically reduces. The flexibility of SCI-MEP allows it to be integrated with any clustering algorithm to improve its efficiency. The method is tested and verified on the removal of Gaussian noise, rain noise and speckle noise from images. Our results show that SCI-MEP considerably improves the existing algorithms in terms of performance evaluation metrics.
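Affinity propagation, the base clustering method that SCI-MEP refines, exchanges "responsibility" and "availability" messages until exemplars emerge. A minimal pure-Python sketch on toy 1-D data follows; the preference choice, damping factor and iteration count are assumptions, not the paper's settings.

```python
def affinity_propagation(points, damping=0.5, iters=100):
    n = len(points)
    # Similarity: negative squared distance; shared preference on the diagonal.
    s = [[-((points[i] - points[j]) ** 2) for j in range(n)] for i in range(n)]
    pref = min(min(row) for row in s)  # low preference -> few exemplars
    for i in range(n):
        s[i][i] = pref
    r = [[0.0] * n for _ in range(n)]  # responsibilities
    a = [[0.0] * n for _ in range(n)]  # availabilities
    for _ in range(iters):
        for i in range(n):  # responsibility updates
            for k in range(n):
                best = max(a[i][kk] + s[i][kk] for kk in range(n) if kk != k)
                r[i][k] = damping * r[i][k] + (1 - damping) * (s[i][k] - best)
        for k in range(n):  # availability updates
            pos = [max(0.0, r[ii][k]) for ii in range(n)]
            for i in range(n):
                if i == k:
                    a[k][k] = damping * a[k][k] + (1 - damping) * (sum(pos) - pos[k])
                else:
                    val = r[k][k] + sum(pos) - pos[i] - pos[k]
                    a[i][k] = damping * a[i][k] + (1 - damping) * min(0.0, val)
    # Each point's exemplar is the k maximising a(i,k) + r(i,k).
    return [max(range(n), key=lambda k: a[i][k] + r[i][k]) for i in range(n)]

labels = affinity_propagation([0.0, 0.1, 0.2, 5.0, 5.1, 5.2])
print(labels)  # two groups, each assigned to its own exemplar
```

An SCI-MEP-like refinement would then operate only on the clusters deemed significant, which is where the reported speed-up comes from.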
10

Indra, Zul, Azhari Setiawan, Yessi Jusman and Arisman Adnan. "Machine learning deployment for arms dynamics pattern recognition in Southeast Asia region". Indonesian Journal of Electrical Engineering and Computer Science 23, no. 3 (September 1, 2021): 1654. http://dx.doi.org/10.11591/ijeecs.v23.i3.pp1654-1662.

Full text
Abstract
Finding the most significant determinant variable of arms dynamics is highly relevant to strategic policy formulation and power mapping for academics and policy makers. Machine learning is still new, or under-discussed, in the study of politics and international relations; the existing literature has focused on advanced quantitative methods, applying various types of regression analysis. This study analyzed the arms dynamics in Southeast Asian countries along with some of their strategic partners, such as the United States, China, Russia, South Korea, and Japan, using the decision tree machine learning algorithm. The analysis covered 55 variable items classified into 8 classes of variables, videlicet: defense budget, arms trade exports, arms trade imports, political posture, economic posture, security posture and defense priority, national capability, and direct contact. The results suggest three findings: (1) a state that perceives the maritime domain as a strategic driver and force will seek more power for its maritime defense posture, which translates into its defense budget; (2) large countries tend to be arms-exporting countries; and (3) a state's energy dependence often leads to a higher volume of arms transfers between countries.
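A decision tree surfaces the "most significant determinant variable" as the root split with the largest impurity reduction. A minimal Gini-based sketch with toy stand-in features (not the study's arms-dynamics data):

```python
def gini(labels):
    """Gini impurity of a binary label list."""
    n = len(labels)
    if n == 0:
        return 0.0
    p1 = labels.count(1) / n
    return 1.0 - p1 ** 2 - (1.0 - p1) ** 2

def best_root_split(rows, labels):
    """Return (feature, threshold, gain) of the best root split."""
    n, n_features = len(rows), len(rows[0])
    best = (None, None, -1.0)
    for f in range(n_features):
        for threshold in sorted({row[f] for row in rows}):
            left = [y for row, y in zip(rows, labels) if row[f] <= threshold]
            right = [y for row, y in zip(rows, labels) if row[f] > threshold]
            if not left or not right:
                continue
            # Impurity reduction achieved by this candidate split.
            gain = (gini(labels)
                    - (len(left) / n) * gini(left)
                    - (len(right) / n) * gini(right))
            if gain > best[2]:
                best = (f, threshold, gain)
    return best

# Feature 1 perfectly separates the classes; feature 0 is noise.
X = [(3.0, 0.1), (1.0, 0.2), (2.0, 0.9), (4.0, 1.1)]
y = [0, 0, 1, 1]
feature, threshold, gain = best_root_split(X, y)
print(feature, threshold, gain)  # 1 0.2 0.5
```

The feature chosen at the root is exactly the "most significant determinant variable" in this impurity-reduction sense.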
More sources

Theses on the topic "Cluster analysis Pattern recognition systems. Machine learning"

1

Li, Na. "MMD and Ward criterion in a RKHS : application to Kernel based hierarchical agglomerative clustering". Thesis, Troyes, 2015. http://www.theses.fr/2015TROY0033/document.

Full text
Abstract
Clustering, the task of grouping objects into homogeneous groups according to some measure of similarity, is a useful tool for exploring the hidden structure of unlabeled data sets. Kernel methods, originally introduced in the supervised setting, have shown great prominence: they provide competitive performance compared with conventional methods owing to their ability to transform a nonlinear problem into a linear one in a higher-dimensional feature space while limiting algorithmic complexity. In this work, we propose a Kernel-based Hierarchical Agglomerative Clustering (KHAC) algorithm using Ward's criterion. We first sought measures of similarity between probability distributions that are easily computable with kernels; among these, the Maximum Mean Discrepancy (MMD), a criterion originally proposed to measure the difference between distributions and easily embedded into an RKHS, retained our attention. To overcome the limits inherent in its use, we proposed a modification that leads to Ward's criterion, well known in hierarchical clustering, and proved the close relationship between MMD and Ward's criterion. In our KHAC method, selection of the kernel parameter and determination of the number of clusters have been studied, providing satisfactory performance. Finally, an iterative KHAC algorithm is proposed that aims at determining the optimal kernel parameter, giving a meaningful number of clusters and partitioning the data set automatically.
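The empirical MMD at the heart of this approach compares two samples through kernel averages alone. A minimal sketch of the biased estimator with a Gaussian kernel (the data and the bandwidth sigma are illustrative assumptions):

```python
import math

def rbf(x, y, sigma=1.0):
    """Gaussian (RBF) kernel inducing the RKHS."""
    return math.exp(-((x - y) ** 2) / (2 * sigma ** 2))

def mmd2(xs, ys, sigma=1.0):
    """Biased empirical MMD^2: E[k(x,x')] + E[k(y,y')] - 2 E[k(x,y)]."""
    kxx = sum(rbf(a, b, sigma) for a in xs for b in xs) / len(xs) ** 2
    kyy = sum(rbf(a, b, sigma) for a in ys for b in ys) / len(ys) ** 2
    kxy = sum(rbf(a, b, sigma) for a in xs for b in ys) / (len(xs) * len(ys))
    return kxx + kyy - 2 * kxy

near = mmd2([0.0, 0.1, 0.2], [0.05, 0.15, 0.25])
far = mmd2([0.0, 0.1, 0.2], [5.0, 5.1, 5.2])
print(near < far)  # similar samples give a much smaller discrepancy
```

An agglomerative scheme in this spirit would repeatedly merge the pair of clusters whose (modified) MMD-based distance is smallest.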
2

Wu, Zhili. "Kernel based learning methods for pattern and feature analysis". HKBU Institutional Repository, 2004. http://repository.hkbu.edu.hk/etd_ra/619.

Full text
3

Fredriksson, Tomas y Rickard Svensson. "Analysis of machine learning for human motion pattern recognition on embedded devices". Thesis, KTH, Mekatronik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-246087.

Full text
Abstract
With an increased number of connected devices and the recent surge of artificial intelligence, the two technologies need more attention to fully bloom as useful tools for creating new and exciting products. As machine learning is traditionally implemented on computers and online servers, this thesis explores the possibility of extending machine learning to an embedded environment. This evaluation of existing machine learning in embedded systems with limited processing capabilities has been carried out in the specific context of an application involving classification of basic human movements. Previous research and implementations indicate that it is possible with some limitations; this thesis aims to answer which hardware limitations affect classification and what classification accuracy the system can reach on an embedded device. The tests included human motion data from an existing dataset and four different machine learning algorithms on three devices. Support Vector Machines (SVM) were found to perform best compared to CART, Random Forest and AdaBoost, reaching a classification accuracy of 84.69% across six different motions with a classification time of 16.88 ms per classification on a Cortex-M4 processor. This is the same classification accuracy as the one obtained on the host computer with more computational capabilities. Other combinations of hardware and machine learning algorithms showed a slight decrease in classification accuracy and an increase in classification time. It can be concluded that memory on the embedded device affects which algorithms can be run and the complexity of the data that can be extracted in the form of features, while processing speed mostly affects classification time. Additionally, the performance of the machine learning system is tied to the type of data to be observed, which means that the performance of different setups differs depending on the use case.
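The SVM at the centre of the comparison can be illustrated, in a much-reduced form, by a linear SVM trained with Pegasos-style sub-gradient steps; the toy data and hyperparameters below are assumptions, not the thesis's kernel SVM or motion features.

```python
import random

def train_svm(samples, labels, lam=0.01, epochs=200, seed=0):
    """Pegasos-style sub-gradient training of a linear SVM (no bias term)."""
    rng = random.Random(seed)
    w = [0.0, 0.0]
    t = 0
    for _ in range(epochs):
        for i in rng.sample(range(len(samples)), len(samples)):
            t += 1
            eta = 1.0 / (lam * t)              # decaying step size
            x, y = samples[i], labels[i]       # y in {-1, +1}
            margin = y * (w[0] * x[0] + w[1] * x[1])
            # Sub-gradient step on hinge loss + L2 regulariser.
            w = [wj * (1 - eta * lam) for wj in w]
            if margin < 1:
                w = [wj + eta * y * xj for wj, xj in zip(w, x)]
    return w

def predict(w, x):
    return 1 if w[0] * x[0] + w[1] * x[1] >= 0 else -1

# Two linearly separable "motion feature" clusters (toy data).
X = [(-2.0, -2.1), (-1.9, -2.0), (-2.2, -1.9), (2.0, 2.1), (2.2, 1.9), (1.9, 2.0)]
y = [-1, -1, -1, 1, 1, 1]
w = train_svm(X, y)
accuracy = sum(predict(w, x) == t for x, t in zip(X, y)) / len(X)
print(accuracy)
```

The appeal for embedded targets is that prediction needs only a dot product over the stored weights, which keeps classification time and memory small.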
4

Janmohammadi, Siamak. "Classifying Pairwise Object Interactions: A Trajectory Analytics Approach". Thesis, University of North Texas, 2015. https://digital.library.unt.edu/ark:/67531/metadc801901/.

Full text
Abstract
A huge amount of video data is available from widely deployed surveillance cameras, together with steadily improving technology for recording the motion of a moving object in the form of trajectory data. With the proliferation of location-enabled devices, the ongoing growth in smartphone penetration and advances in image processing techniques, tracking moving objects is increasingly achievable. In this work, we explore some domain-independent qualitative and quantitative features in raw trajectory (spatio-temporal) data from videos captured by a fixed, single wide-angle-view camera sensor in outdoor areas. We study the efficacy of those features in classifying four basic high-level actions by employing two supervised learning algorithms, and show how each of the features affects the learning algorithms' overall accuracy as a single factor or confounded with others.
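The kind of domain-independent quantitative features one can extract from raw (x, y, t) trajectory samples can be sketched as follows; the thesis's exact feature set is not reproduced, so the features below are illustrative.

```python
import math

def trajectory_features(points):
    """points: list of (x, y, t) samples for one tracked object."""
    dists, speeds, headings = [], [], []
    for (x0, y0, t0), (x1, y1, t1) in zip(points, points[1:]):
        d = math.hypot(x1 - x0, y1 - y0)
        dists.append(d)
        speeds.append(d / (t1 - t0))
        headings.append(math.atan2(y1 - y0, x1 - x0))
    return {
        "path_length": sum(dists),
        "mean_speed": sum(speeds) / len(speeds),
        "max_speed": max(speeds),
        # Total absolute heading change: a rough curvature / turning proxy.
        "total_turning": sum(abs(b - a) for a, b in zip(headings, headings[1:])),
    }

# A straight, constant-speed track sampled once per second.
track = [(0.0, 0.0, 0.0), (1.0, 0.0, 1.0), (2.0, 0.0, 2.0), (3.0, 0.0, 3.0)]
f = trajectory_features(track)
print(f["path_length"], f["mean_speed"], f["total_turning"])  # 3.0 1.0 0.0
```

Feature vectors of this kind, computed for a pair of interacting trajectories, are what a supervised classifier would consume.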
5

Hernández-Vela, Antonio. "From pixels to gestures: learning visual representations for human analysis in color and depth data sequences". Doctoral thesis, Universitat de Barcelona, 2015. http://hdl.handle.net/10803/292488.

Full text
Abstract
The visual analysis of humans from images is an important topic of interest due to its relevance to many computer vision applications like pedestrian detection, monitoring and surveillance, human-computer interaction, e-health or content-based image retrieval, among others. This dissertation focuses on learning different visual representations of the human body that are helpful for the visual analysis of humans in images and video sequences. To that end, we analyze both RGB and depth image modalities and address the problem from three different research lines, at different levels of abstraction, from pixels to gestures: human segmentation, human pose estimation and gesture recognition. First, we show how binary segmentation (object vs. background) of the human body in image sequences is helpful to remove all the background clutter present in the scene. The presented method, based on graph cuts optimization, enforces spatio-temporal consistency of the produced segmentation masks among consecutive frames. Secondly, we present a framework for multi-label segmentation for obtaining much more detailed segmentation masks: instead of just obtaining a binary representation separating the human body from the background, finer segmentation masks can be obtained separating the different body parts. At a higher level of abstraction, we aim for a simpler yet descriptive representation of the human body. Human pose estimation methods usually rely on skeletal models of the human body, formed by segments (or rectangles) that represent the body limbs, appropriately connected following the kinematic constraints of the human body. In practice, such skeletal models must fulfill some constraints in order to allow for efficient inference, while actually limiting the expressiveness of the model. In order to cope with this, we introduce a top-down approach for predicting the position of the body parts in the model, using a mid-level part representation based on Poselets.
Finally, we propose a framework for gesture recognition based on the bag of visual words framework. We leverage the benefits of RGB and depth image modalities by combining modality-specific visual vocabularies in a late fusion fashion. A new rotation-variant depth descriptor is presented, yielding better results than other state-of-the-art descriptors. Moreover, spatio-temporal pyramids are used to encode rough spatial and temporal structure. In addition, we present a probabilistic reformulation of Dynamic Time Warping for gesture segmentation in video sequences: a Gaussian-based probabilistic model of a gesture is learnt, implicitly encoding possible deformations in both the spatial and time domains.
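The classic Dynamic Time Warping recurrence that the probabilistic reformulation builds on can be sketched as follows; the sequences are toy 1-D stand-ins for real gesture features.

```python
def dtw(a, b):
    """DTW distance between two 1-D sequences with |.| as local cost."""
    inf = float("inf")
    n, m = len(a), len(b)
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Best of the match, insertion and deletion alignments.
            d[i][j] = cost + min(d[i - 1][j - 1], d[i - 1][j], d[i][j - 1])
    return d[n][m]

gesture = [0.0, 1.0, 2.0, 1.0, 0.0]
slower = [0.0, 0.0, 1.0, 1.0, 2.0, 1.0, 0.0]  # same shape, stretched in time
other = [2.0, 2.0, 2.0, 2.0, 2.0]
print(dtw(gesture, slower), dtw(gesture, other))  # 0.0 6.0
```

The warping absorbs differences in execution speed; the thesis's Gaussian-based reformulation replaces the hard local cost with a learnt probabilistic one.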
6

Malmgren, Henrik. "Revision of an artificial neural network enabling industrial sorting". Thesis, Uppsala universitet, Institutionen för teknikvetenskaper, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-392690.

Full text
Abstract
Convolutional artificial neural networks can be applied to image-based object classification to inform automated actions, such as the handling of objects on a production line. The present thesis describes the theoretical background for creating a classifier and explores the effects of introducing a set of relatively recent techniques into an existing ensemble of classifiers in use for an industrial sorting system. The findings indicate that it is important to use spatial-variety dropout regularization for high-resolution image inputs, and to use an optimizer configuration with good convergence properties. The findings also demonstrate examples of ensemble classifiers being effectively consolidated into unified models using the distillation technique. An analogous arrangement with optimization against multiple output targets, incorporating additional information, showed accuracy gains comparable to ensembling. For use of the classifier on test data with statistics different from those of the training dataset, the results indicate that augmentation of the input data during classifier creation helps performance, but would, in the current case, likely need to be guided by information about the distribution shift to have a sufficiently positive impact to enable a practical application. For future development I suggest updated architectures, automated hyperparameter search and leveraging the bountiful unlabeled data potentially available from production lines.
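The distillation technique mentioned above trains a unified student against the ensemble's temperature-softened outputs. A minimal sketch of that objective (the logits and temperature are illustrative assumptions, not values from the thesis):

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax; higher T gives softer distributions."""
    exps = [math.exp(z / temperature) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=4.0):
    """Cross-entropy between softened teacher and student distributions."""
    p = softmax(teacher_logits, temperature)   # soft targets
    q = softmax(student_logits, temperature)
    return -sum(pi * math.log(qi) for pi, qi in zip(p, q))

teacher = [4.0, 1.0, 0.5]        # e.g. averaged ensemble logits
aligned = [4.1, 0.9, 0.6]        # student already close to the teacher
misaligned = [0.5, 4.0, 1.0]
print(distillation_loss(aligned, teacher) < distillation_loss(misaligned, teacher))
```

Minimising this loss (usually mixed with the ordinary hard-label loss) pulls the single student model toward the ensemble's behaviour.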
7

Lagarde, Matthieu, Philippe Gaussier y Pierre Andry. "Apprentissage de nouveaux comportements: vers le développement épigénétique d'un robot autonome". Phd thesis, Université de Cergy Pontoise, 2010. http://tel.archives-ouvertes.fr/tel-00749761.

Full text
Abstract
The problem of learning behaviors on an autonomous robot raises many questions related to motor control, behavior encoding, behavioral strategies and action selection. Using a developmental approach is of particular interest in autonomous robotics: the robot's behavior relies on low-level mechanisms whose interactions allow more complex behaviors to emerge. The robot has no a priori information about its physical characteristics or its environment; it must learn its own sensorimotor dynamics. I began my thesis with the study of a low-level imitation model. From a developmental point of view, imitation is present from birth and accompanies, in multiple forms, the development of the young child. It serves a learning function, proving to be an asset in terms of behavior-acquisition time, as well as a communication function, helping to initiate and maintain natural, non-verbal interactions. Moreover, even without a real intention to imitate, observing another agent allows enough information to be extracted to reproduce the task. My work therefore first consisted in applying and testing a developmental model that allows low-level imitation behaviors to emerge on an autonomous robot. This model is built as a homeostat that tends to balance, through action, its crude perceptual information (movement detection, color detection, information about the joint angles of a robot arm). Thus, when a human moves a hand in the robot's visual field, the ambiguity of the robot's perception makes it confuse the human hand with the end of its own arm.
From the resulting error, a behavior of immediate imitation of the human's gestures emerges through the action of the homeostat. Of course, such a model implies that the robot is first able to associate the visual positions of its effector with the proprioceptive information of its motors. Thanks to the imitation behavior, the robot performs movements that it can then learn in order to build more complex behaviors. How, then, to go from a simple movement to a more complex gesture that may involve an object or a place? I propose an architecture that allows a robot to learn a behavior in the form of complex temporal sequences (with repeated elements) of movements. Two different models for sequence learning were developed and tested. The first learns online the timing of simple temporal sequences. As this model cannot learn complex sequences, the second model tested relies on the properties of a reservoir of dynamics and learns complex sequences online. From this work, an architecture learning the timing of a complex sequence was proposed. Tests in simulation and on the robot showed the need to add a resynchronization mechanism to recover the right hidden states, so that a complex sequence can be started from an intermediate state. In a third phase, my work consisted in studying how two sensorimotor strategies can coexist within a navigation task. The first strategy encodes the behavior from spatial information, while the second uses temporal information. The two architectures were tested independently on the same task. The two strategies were then merged and executed in parallel. The fusion of the responses delivered by the two strategies was achieved using dynamic neural fields.
A "chunking" mechanism representing the robot's instantaneous state (the current place together with the current action) resynchronizes the dynamics of the temporal sequences. In parallel, a number of problems of programming and designing the neural networks appeared: our networks can contain several hundred thousand neurons, and it then becomes difficult to run them on a single computing unit. How can neural architectures be designed under constraints of distributed computation, network communication and real time? Another part of my work consisted in providing tools for the modeling, communication and real-time execution of distributed architectures. Finally, within the European project Feelix Growing, I also participated in integrating my work with that of the LASA laboratory at EPFL for learning complex behaviors combining navigation, gestures and objects. In conclusion, this thesis allowed me to develop new models for learning behaviors, in time and in space, new tools for managing very large neural networks and, through a discussion of the limitations of the current system, the elements important for an action-selection system.
8

Bilenko, Mikhail Yuryevich. "Learnable similarity functions and their application to record linkage and clustering". Thesis, 2006. http://hdl.handle.net/2152/2681.

Full text
9

Yu, Gary. "Identifying Patterns in Behavioral Public Health Data Using Mixture Modeling with an Informative Number of Repeated Measures". Thesis, 2014. https://doi.org/10.7916/D8F197VX.

Full text
Abstract
Finite mixture modeling is a useful statistical technique for clustering individuals based on patterns of responses. The fundamental idea of the mixture modeling approach is to assume there are latent clusters of individuals in the population which each generate their own distinct distribution of observations (multivariate or univariate) which are then mixed up together in the full population. Hence, the name mixture comes from the fact that what we observe is a mixture of distributions. The goal of this model-based clustering technique is to identify what the mixture of distributions is so that, given a particular response pattern, individuals can be clustered accordingly. Commonly, finite mixture models, as well as the special case of latent class analysis, are used on data that inherently involve repeated measures. The purpose of this dissertation is to extend the finite mixture model to allow for the number of repeated measures to be incorporated and contribute to the clustering of individuals rather than measures. The dimension of the repeated measures or simply the count of responses is assumed to follow a truncated Poisson distribution and this information can be incorporated into what we call a dimension informative finite mixture model (DIMM). The outline of this dissertation is as follows. Paper 1 is entitled, "Dimension Informative Mixture Modeling (DIMM) for questionnaire data with an informative number of repeated measures." This paper describes the type of data structures considered and introduces the dimension informative mixture model (DIMM). A simulation study is performed to examine how well the DIMM fits the known specified truth. In the first scenario, we specify a mixture of three univariate normal distributions with different means and similar variances with different and similar counts of repeated measurements. 
We found that the DIMM predicts the true underlying class membership better than the traditional finite mixture model using a predicted value metric score. In the second scenario, we specify a mixture of two univariate normal distributions with the same means and variances with different and similar counts of repeated measurements. We found that the count-informative finite mixture model predicts the truth much better than the non-informative finite mixture model. Paper 2 is entitled, "Patterns of Physical Activity in the Northern Manhattan Study (NOMAS) Using Multivariate Finite Mixture Modeling (MFMM)." This is a study that applies a multivariate finite mixture modeling approach to examining and elucidating underlying latent clusters of different physical activity profiles based on four dimensions: total frequency of activities, average duration per activity, total energy expenditure and the total count of the number of different activities conducted. We found a five cluster solution to describe the complex patterns of physical activity levels, as measured by fifteen different physical activity items, among a US based elderly cohort. Adding in a class of individuals who were not doing any physical activity, the labels of these six clusters are: no exercise, very inactive, somewhat inactive, slightly under guidelines, meet guidelines and above guidelines. This methodology improves upon previous work which utilized only the total metabolic equivalent (a proxy of energy expenditure) to classify individuals into inactive, active and highly active. Paper 3 is entitled, "Complex Drug Use Patterns and Associated HIV Transmission Risk Behaviors in an Internet Sample of US Men Who Have Sex With Men." This is a study that applies the count-informative information into a latent class analysis on nineteen binary drug items of drugs consumed within the past year before a sexual encounter.
In addition to the individual drugs used, the mixture model incorporated a count of the total number of drugs used. We found a six class solution: low drug use, some recreational drug use, nitrite inhalants (poppers) with prescription erectile dysfunction (ED) drug use, poppers with prescription/non-prescription ED drug use and high polydrug use. Compared to participants in the low drug use class, participants in the highest drug use class were 5.5 times more likely to report unprotected anal intercourse (UAI) in their last sexual encounter and approximately 4 times more likely to report a new sexually transmitted infection (STI) in the past year. Younger men were also less likely to report UAI than older men but more likely to report an STI.
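The dimension-informative idea above can be sketched in a few lines: class membership is scored not only by the responses themselves but also by how many responses were given. This is an illustrative toy, not the dissertation's estimation procedure; all parameters (weights, means, Poisson rates) are invented here, and a plain rather than truncated Poisson is used for brevity.

```python
import math

def gauss_pdf(x, mu, sd):
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def poisson_pmf(k, lam):
    return lam ** k * math.exp(-lam) / math.factorial(k)

def dimm_posterior(responses, weights, means, sds, rates):
    """P(class | responses, count) for a toy 2-class dimension-informative mixture."""
    k = len(responses)  # the count of repeated measures is itself informative
    scores = []
    for w, mu, sd, lam in zip(weights, means, sds, rates):
        lik = w * poisson_pmf(k, lam)     # likelihood of observing k responses
        for x in responses:
            lik *= gauss_pdf(x, mu, sd)   # likelihood of the responses themselves
        scores.append(lik)
    total = sum(scores)
    return [s / total for s in scores]

# identical Gaussian parameters in both classes: only the count discriminates,
# mirroring the second simulation scenario described in the abstract
post = dimm_posterior([1.1, 0.9, 1.2], weights=[0.5, 0.5],
                      means=[1.0, 1.0], sds=[0.5, 0.5], rates=[3.0, 8.0])
```

A subject with only three responses is assigned to the low-rate class, which is exactly the extra signal a count-naive mixture model would miss.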
10

Karimy Dehkordy, Hossein. "Automated image classification via unsupervised feature learning by K-means". Thesis, 2015. http://hdl.handle.net/1805/7964.

Full text
Abstract
Indiana University-Purdue University Indianapolis (IUPUI)
Research on image classification has grown rapidly in the field of machine learning. Many methods have already been implemented for image classification. Among all these methods, the best results have been reported by neural network-based techniques. One of the most important steps in automated image classification is feature extraction. Feature extraction includes two parts: feature construction and feature selection. Many methods for feature extraction exist, but the best ones are related to deep-learning approaches such as network-in-network or deep convolutional network algorithms. Deep learning tries to find higher levels of abstraction from the previous level by stacking multiple hidden layers. The two main problems with using deep-learning approaches are the speed and the number of parameters that should be configured. Small changes or poor selection of parameters can alter the results completely or even make them worse. Tuning these parameters is usually impossible for normal users who do not have supercomputers, because one must run the algorithm and tune the parameters according to the results obtained. Thus, this process can be very time consuming. This thesis attempts to address the speed and configuration issues found with traditional deep-network approaches. Some of the traditional methods of unsupervised learning are used to build an automated image-classification approach that takes less time both to configure and to run.
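The thesis's general recipe (learn features with k-means instead of a deep network, then encode images against the learned centroids) can be sketched with plain NumPy. The patch size, the number of centroids, and the "triangle" soft-assignment encoding are illustrative choices, and random vectors stand in for real image patches.

```python
import numpy as np

rng = np.random.default_rng(0)

def kmeans(X, k, iters=20):
    """Plain Lloyd's k-means: returns k centroids learned from rows of X."""
    centroids = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centroids[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return centroids

def encode(patches, centroids):
    """'Triangle' encoding: how much closer than average each centroid is."""
    d = np.sqrt(((patches[:, None] - centroids[None]) ** 2).sum(-1))
    return np.maximum(0.0, d.mean(axis=1, keepdims=True) - d).mean(axis=0)

patches = rng.normal(size=(500, 16))        # stand-in for 4x4 image patches
centroids = kmeans(patches, k=8)            # unsupervised "feature learning"
feature_vector = encode(patches, centroids)  # 8-dim image representation
```

The resulting 8-dimensional vector would then feed any standard classifier, avoiding the heavy parameter tuning of deep networks that the thesis discusses.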

Books on the topic "Cluster analysis Pattern recognition systems. Machine learning"

1

Gong, Yihong. Machine learning for multimedia content analysis. New York: Springer, 2007.

Search full text
2

De, Rajat K. Machine interpretation of patterns: Image analysis and data mining. Edited by the Indian Statistical Institute. Singapore: World Scientific, 2010.

Search full text
3

Perner, Petra, ed. Machine learning and data mining in pattern recognition: 5th international conference, MLDM 2007, Leipzig, Germany, July 18-20, 2007; proceedings. Berlin: Springer, 2007.

Search full text
4

Hassanien, Aboul Ella. Advanced Machine Learning Technologies and Applications: First International Conference, AMLTA 2012, Cairo, Egypt, December 8-10, 2012. Proceedings. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012.

Search full text
5

Duch, Włodzisław, Péter Érdi, Francesco Masulli, Günther Palm, and SpringerLink (Online service), eds. Artificial Neural Networks and Machine Learning – ICANN 2012: 22nd International Conference on Artificial Neural Networks, Lausanne, Switzerland, September 11-14, 2012, Proceedings, Part I. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012.

Search full text
6

Duch, Włodzisław, Péter Érdi, Francesco Masulli, Günther Palm, and SpringerLink (Online service), eds. Artificial Neural Networks and Machine Learning – ICANN 2012: 22nd International Conference on Artificial Neural Networks, Lausanne, Switzerland, September 11-14, 2012, Proceedings, Part II. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012.

Search full text
7

Costa, José A. F., Guilherme Barreto, and SpringerLink (Online service), eds. Intelligent Data Engineering and Automated Learning - IDEAL 2012: 13th International Conference, Natal, Brazil, August 29-31, 2012. Proceedings. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012.

Search full text
8

Xu, Wei, and Yihong Gong. Machine Learning for Multimedia Content Analysis. Springer, 2010.

Search full text
9

Guo, Yulan, Robert B. Fisher, Hamid Laga, Hedi Tabia, and Mohammed Bennamoun. 3D Shape Analysis: Fundamentals, Theory, and Applications. John Wiley & Sons, 2018.

Search full text
10

Guo, Yulan, Robert B. Fisher, Hamid Laga, Hedi Tabia, and Mohammed Bennamoun. 3D Shape Analysis: Fundamentals, Theory, and Applications. Wiley, 2019.

Search full text

Book chapters on the topic "Cluster analysis Pattern recognition systems. Machine learning"

1

Sentz, Kari, and François M. Hemez. "Information Gap Analysis for Decision Support Systems in Evidence-Based Medicine". In Machine Learning and Data Mining in Pattern Recognition, 543–54. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-39712-7_42.

Full text
2

Hájek, Petr, and Vladimír Olej. "Predicting Firms’ Credit Ratings Using Ensembles of Artificial Immune Systems and Machine Learning – An Over-Sampling Approach". In Progress in Pattern Recognition, Image Analysis, Computer Vision, and Applications, 29–38. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-662-44654-6_3.

Full text
3

Wang, Yingxu, and Omar A. Zatarain. "A Novel Machine Learning Algorithm for Cognitive Concept Elicitation by Cognitive Robots". In Cognitive Analytics, 638–54. IGI Global, 2020. http://dx.doi.org/10.4018/978-1-7998-2460-2.ch033.

Full text
Abstract
Cognitive knowledge learning (CKL) is a fundamental methodology for cognitive robots and machine learning. Traditional technologies for machine learning deal with object identification, cluster classification, pattern recognition, functional regression and behavior acquisition. A new category of CKL is presented in this paper, embodied by the Algorithm of Cognitive Concept Elicitation (ACCE). Formal concepts are autonomously generated based on collective intension (attributes) and extension (objects) elicited from informal descriptions in dictionaries. A system of formal concept generation by cognitive robots is implemented based on the ACCE algorithm. Experiments on machine learning for knowledge acquisition reveal that a cognitive robot is able to learn synergized concepts in human knowledge in order to build its own knowledge base. The machine-generated knowledge base demonstrates that the ACCE algorithm can outperform human knowledge expressions in terms of relevance, accuracy, quantification and cohesiveness.
4

Sharma, Nitin, Pawan Kumar Dahiya, and B. R. Marwah. "Comparative Analysis of Various Soft Computing Technique-Based Automatic Licence Plate Recognition Systems". In Handbook of Research on Machine Learning Techniques for Pattern Recognition and Information Security, 18–37. IGI Global, 2021. http://dx.doi.org/10.4018/978-1-7998-3299-7.ch002.

Full text
Abstract
Traffic on Indian roads is growing day by day, leading to accidents. The intelligent transport system is the solution to resolve the traffic problem on roads. One of the components of the intelligent transportation system is the monitoring of traffic by the automatic licence plate recognition system. In this chapter, an automatic licence plate recognition system based on soft computing techniques is presented. Images of Indian vehicle licence plates are used as the dataset. Firstly, the licence plate region is extracted from the captured image, and thereafter, the characters are segmented. Then features are extracted from the segmented characters, which are used for the recognition purpose. Furthermore, artificial neural network, support vector machine, and convolutional neural network approaches are used and compared for automatic licence plate recognition. A hybrid technique is suggested as the future scope for solving the problem.
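One stage of the pipeline described above, character segmentation, is commonly done with a vertical projection profile: columns containing no foreground pixels separate the characters. The sketch below assumes the plate region has already been extracted and binarized; the toy "plate" array is synthetic.

```python
import numpy as np

def segment_columns(binary_plate):
    """Return (start, end) column ranges that contain foreground pixels."""
    profile = binary_plate.sum(axis=0)   # vertical projection profile
    segments, start = [], None
    for col, count in enumerate(profile):
        if count > 0 and start is None:
            start = col                  # a character begins
        elif count == 0 and start is not None:
            segments.append((start, col))  # a character ends
            start = None
    if start is not None:                # character touching the right edge
        segments.append((start, len(profile)))
    return segments

# synthetic binary "plate" with three character-like blobs
plate = np.zeros((10, 12), dtype=int)
plate[2:8, 1:3] = 1
plate[2:8, 5:7] = 1
plate[2:8, 9:11] = 1
chars = segment_columns(plate)
```

Each returned column range would then be cropped and passed to the feature-extraction and recognition stages (ANN, SVM, or CNN) that the chapter compares.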
5

Bilaiya, Riya, Priyanka Ahlawat, and Rohit Bathla. "Intrusion Detection Systems". In Handbook of Research on Machine Learning Techniques for Pattern Recognition and Information Security, 235–54. IGI Global, 2021. http://dx.doi.org/10.4018/978-1-7998-3299-7.ch014.

Full text
Abstract
The community is moving towards the cloud, and its security is important. An old vulnerability known to the attacker can be easily exploited. Security issues and intruders can be identified by IDS (intrusion detection systems). Some of the solutions consist of network firewalls and anti-malware. Malicious entities and fake traffic are detected through packet sniffing. This chapter surveys different approaches for IDS, compares them, and presents a comparative analysis based on their merits and demerits. The authors aim to present an exhaustive survey of current trends in IDS research along with some future challenges that are likely to be explored. They also discuss the implementation details of IDS with parameters used to evaluate their performance.
6

Leon-Medina, Jersson X., Maribel Anaya Vejar, and Diego A. Tibaduiza. "Signal Processing and Pattern Recognition in Electronic Tongues". In Pattern Recognition Applications in Engineering, 84–108. IGI Global, 2020. http://dx.doi.org/10.4018/978-1-7998-1839-7.ch004.

Full text
Abstract
This chapter reviews the development of solutions related to the practical implementation of electronic tongue sensor arrays. Some of these solutions are associated with the use of data from different instrumentation and acquisition systems, which may vary depending on the type of data collected, the use and development of data pre-processing strategies, and their subsequent analysis through the development of pattern recognition methodologies. Most of the time, these methodologies for signal processing are composed of stages for feature selection, feature extraction, and finally, classification or regression through a machine learning algorithm.
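The three-stage methodology described above (feature selection, feature extraction, then classification or regression) can be sketched end to end with NumPy alone. The synthetic data, the variance-based selection rule, PCA via SVD, and the nearest-centroid classifier are all illustrative stand-ins for whatever a real electronic-tongue study would use.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(40, 6))           # 40 samples from 6 "sensor channels"
X[:20, 0] += 4.0                       # one channel separates the two tastants
y = np.array([0] * 20 + [1] * 20)

# 1) feature selection: keep channels with above-median variance
keep = X.var(axis=0) >= np.median(X.var(axis=0))
Xs = X[:, keep]

# 2) feature extraction: project onto the top-2 principal components
Xc = Xs - Xs.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:2].T

# 3) classification: nearest class centroid in the reduced space
centroids = np.stack([Z[y == c].mean(axis=0) for c in (0, 1)])
pred = np.argmin(((Z[:, None] - centroids[None]) ** 2).sum(-1), axis=1)
accuracy = (pred == y).mean()
```

Any of the three stages can be swapped out (e.g. a regression model in place of the classifier) without changing the overall pipeline structure the chapter describes.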
7

Kiselyova, Nadezhda, Andrey Stolyarenko, Vladimir Ryazanov, Oleg Sen’ko, and Alexandr Dokukin. "Application of Machine Training Methods to Design of New Inorganic Compounds". In Diagnostic Test Approaches to Machine Learning and Commonsense Reasoning Systems, 197–220. IGI Global, 2013. http://dx.doi.org/10.4018/978-1-4666-1900-5.ch009.

Full text
Abstract
A review of applications of machine training methods to inorganic chemistry and materials science is presented. The possibility of searching for classification regularities in large arrays of chemical information with the use of precedent-based recognition methods is discussed. The system for computer-assisted design of inorganic compounds, with an integrated complex of databases for the properties of inorganic substances and materials, a subsystem for the analysis of data, based on computer training (including symbolic pattern recognition methods), a knowledge base, a predictions base, and a managing subsystem, has been developed. In many instances, the employment of the developed system makes it possible to predict new inorganic compounds and estimate various properties of those without experimental synthesis. The results of application of this information-analytical system to the computer-assisted design of inorganic compounds promising for the search for new materials for electronics are presented.
8

Saruhan-Ozdag, Feyzan, Derya Yiltas-Kaplan, and Tolga Ensari. "Detection of Network Attacks With Artificial Immune System". In Pattern Recognition Applications in Engineering, 41–58. IGI Global, 2020. http://dx.doi.org/10.4018/978-1-7998-1839-7.ch002.

Full text
Abstract
Intrusion detection systems are one of the most important tools used against the threats to network security in ever-evolving network structures. Along with evolving technology, it has become a necessity to design powerful intrusion detection systems and integrate them into network systems. The main purpose of this research is to develop a new method by using different techniques together to increase the attack detection rates. The negative selection algorithm, a type of artificial immune system algorithm, is used and improved at the stage of detector generation. In the data preparation phase, information gain is used for feature selection and principal component analysis is used as a dimensionality reduction method. The first method is random detector generation, and the other is a method developed by combining information gain, principal component analysis, and a genetic algorithm. The methods were tested using the KDD CUP 99 data set. Different performance values are measured, and the results are compared with different machine learning algorithms.
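The detector-generation step of a negative selection algorithm (the random-generation baseline the chapter improves upon, not its information gain/PCA/genetic variant) can be sketched as follows. The dimensions, radii, and synthetic "self" region are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
self_set = rng.uniform(0.0, 0.4, size=(50, 2))   # "normal" traffic region

def generate_detectors(self_set, n, radius, rng):
    """Randomly generate detectors, censoring any that match a self sample."""
    detectors = []
    while len(detectors) < n:
        cand = rng.uniform(0.0, 1.0, size=2)
        if np.min(np.linalg.norm(self_set - cand, axis=1)) > radius:
            detectors.append(cand)
    return np.array(detectors)

def is_anomalous(x, detectors, radius):
    """A sample matched by any detector is flagged as an attack."""
    return bool(np.any(np.linalg.norm(detectors - x, axis=1) <= radius))

detectors = generate_detectors(self_set, n=200, radius=0.15, rng=rng)
normal_flag = is_anomalous(np.array([0.2, 0.2]), detectors, radius=0.15)
attack_flag = is_anomalous(np.array([0.9, 0.9]), detectors, radius=0.15)
```

Censoring guarantees that no detector covers normal ("self") traffic, so anything a detector matches lies outside the learned normal region, which is the core idea the chapter's GA-based variant then refines.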
9

Rather, Sajad Ahmad, and P. Shanthi Bala. "Analysis of Gravitation-Based Optimization Algorithms for Clustering and Classification". In Handbook of Research on Big Data Clustering and Machine Learning, 74–99. IGI Global, 2020. http://dx.doi.org/10.4018/978-1-7998-0106-1.ch005.

Full text
Abstract
In recent years, various heuristic algorithms based on natural phenomena and swarm behaviors were introduced to solve innumerable optimization problems. These optimization algorithms show better performance than conventional algorithms. Recently, the gravitational search algorithm (GSA) was proposed for optimization; it is based on Newton's law of universal gravitation and laws of motion. Within a few years, GSA became popular among the research community and has been applied to various fields such as electrical science, power systems, computer science, civil and mechanical engineering, etc. This chapter shows the importance of GSA, its hybridization, and applications in solving clustering and classification problems. In clustering, GSA is hybridized with other optimization algorithms to overcome drawbacks such as the curse of dimensionality, trapping in local optima, and the limited search space of conventional data clustering algorithms. GSA is also applied to classification problems for pattern recognition, feature extraction, and increasing classification accuracy.
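The core GSA update the chapter builds on can be condensed into a short sketch: fitness-derived masses, a decaying gravitational constant, and pairwise attraction that drifts the swarm toward good solutions. It is shown minimizing a simple sphere function; G0 and alpha follow common choices in the GSA literature, and the loop omits refinements such as the shrinking Kbest set.

```python
import numpy as np

rng = np.random.default_rng(3)

def gsa(fitness, dim=2, n_agents=20, iters=100, G0=100.0, alpha=20.0):
    X = rng.uniform(-5.0, 5.0, size=(n_agents, dim))
    V = np.zeros_like(X)
    best_x, best_f = X[0].copy(), float("inf")
    for t in range(iters):
        f = np.array([fitness(x) for x in X])
        if f.min() < best_f:                      # track best-ever solution
            best_f, best_x = float(f.min()), X[f.argmin()].copy()
        # fitness-derived masses: the best agent gets the largest mass
        m = (f - f.max()) / (f.min() - f.max() - 1e-12)
        M = m / (m.sum() + 1e-12)
        G = G0 * np.exp(-alpha * t / iters)       # gravity decays over time
        A = np.zeros_like(X)
        for i in range(n_agents):
            for j in range(n_agents):
                if i != j:
                    diff = X[j] - X[i]
                    dist = np.linalg.norm(diff) + 1e-12
                    A[i] += rng.random(dim) * G * M[j] * diff / dist
        V = rng.random((n_agents, dim)) * V + A   # stochastic inertia
        X = X + V
    return best_x, best_f

best_x, best_f = gsa(lambda x: float((x ** 2).sum()))
```

For the clustering applications the chapter surveys, the same loop would optimize cluster-center coordinates with an intra-cluster distance sum as the fitness function.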
10

Sambukova, Tatiana V. "Machine Learning in Studying the Organism’s Functional State of Clinically Healthy Individuals Depending on Their Immune Reactivity". In Diagnostic Test Approaches to Machine Learning and Commonsense Reasoning Systems, 221–48. IGI Global, 2013. http://dx.doi.org/10.4018/978-1-4666-1900-5.ch010.

Full text
Abstract
The work is devoted to the solution of two interconnected key problems of Data Mining: discretization of numerical attributes, and inferring pattern recognition rules (decision rules) from a training set of examples with the use of machine learning methods. The method of discretization is based on a learning procedure that extracts intervals of attribute values whose bounds are chosen in such a manner that the distributions of the attribute's values inside these intervals differ to the greatest possible degree for two classes of samples given by an expert. The number of intervals is defined to be not more than 3. The application of interval data analysis made it possible to describe the functional state of persons in healthy condition, depending on the absence or presence in their life of episodes of secondary deficiency of their immune system, more fully than traditional statistical methods of comparing data distributions. The interval data analysis gives the possibility (1) to make the discretization procedure clear and controllable by an expert, (2) to evaluate the information gain index of attributes with respect to distinguishing the given classes of persons before any machine learning procedure, and (3) to crucially decrease the computational complexity of machine learning.
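The discretization idea above, choosing at most two cut points (three intervals) so that the two classes' distributions over the intervals differ as much as possible, can be sketched by brute force. The total-variation score used below is a stand-in for the chapter's criterion, which the abstract does not fully specify, and the two small samples are synthetic.

```python
import numpy as np
from itertools import combinations

def interval_proportions(values, cuts):
    """Fraction of values falling into each interval defined by the cuts."""
    bins = np.digitize(values, cuts)
    return np.bincount(bins, minlength=len(cuts) + 1) / len(values)

def best_cuts(class_a, class_b, max_intervals=3):
    """Search midpoints for up to (max_intervals - 1) cuts maximizing the
    difference between the two classes' interval distributions."""
    candidates = np.unique(np.concatenate([class_a, class_b]))
    mids = (candidates[:-1] + candidates[1:]) / 2
    best, best_score = (), -1.0
    for k in range(1, max_intervals):          # 1 or 2 cut points
        for cuts in combinations(mids, k):
            pa = interval_proportions(class_a, cuts)
            pb = interval_proportions(class_b, cuts)
            score = np.abs(pa - pb).sum()      # total-variation-style score
            if score > best_score:
                best, best_score = cuts, score
    return best, best_score

healthy = np.array([0.1, 0.2, 0.3, 0.4, 0.5])
deficient = np.array([0.8, 0.9, 1.0, 1.1])
cuts, score = best_cuts(healthy, deficient)
```

On this perfectly separable toy attribute, a single cut between the two groups already achieves the maximal score, so no second cut is added, which mirrors the chapter's preference for few, expert-interpretable intervals.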

Conference papers on the topic "Cluster analysis Pattern recognition systems. Machine learning"

1

Ramos Gurjao, Kildare George, Eduardo Gildin, Richard Gibson, and Mark Everett. "Mechanistic Modeling of Distributed Strain Sensing DSS and Distributed Acoustic Sensing DAS to Assist Machine Learning Schemes Interpreting Unconventional Reservoir Datasets". In SPE Annual Technical Conference and Exhibition. SPE, 2021. http://dx.doi.org/10.2118/206049-ms.

Full text
Abstract
Abstract The use of fiber optics in reservoir surveillance is bringing valuable insights to fracture geometry and fracture-hit identification, stage communication and perforation cluster fluid distribution in many hydraulic fracturing processes. However, given the complexity associated with field data, its interpretation is a major challenge faced by engineers and geoscientists. In this work, we propose to generate Distributed Strain/Acoustic Sensing (DSS/DAS) synthetic data of a cross-well fiber deployment that incorporate the physics governing hydraulic fracturing treatments. Our forward modeling is accurate enough to be reliably used in tandem with data-driven (machine learning) interpretation methods. The forward modeling is based on analytical and numerical solutions. The analytical solution is developed by integrating two models: a 2D fracture model (e.g. Khristianovic-Geertsma-de Klerk, known as KGD) and an induced stress model (e.g. Sneddon, 1946). DSS is estimated using the plane strain approach that combines calculated stresses and rock properties (e.g. Young's modulus and Poisson ratio). On the other hand, the numerical solution is implemented using the Displacement Discontinuity Method (DDM), a type of Boundary Element Method (BEM), with net pressure and/or shear stress as the boundary condition. In this case, the fiber gauge length concept is incorporated by differentiating displacement (i.e. the DDM output) in space to obtain DSS values. In both methods, DAS is estimated by differentiating DSS in time. The analytical technique considers a single fracture opening and is used in a sensitivity analysis to evaluate the impact that rock/fluid parameters can have on strain time histories. Moreover, advanced cases including multiple fractures failing in tensile or shear mode are simulated using the numerical technique.
Results indicate that our models are able to capture typical characteristics present in field data: the heart-shaped pattern from a fracture approaching the fiber, stress shadow and fracture hits. In particular, the numerical methodology captures relevant phenomena associated with hydraulic and natural fracture interaction, and provides a solid foundation for generating accurate and rich synthetic data that can be used to support a physics-based machine learning interpretation framework. The developed forward modeling, when embedded in a classification or regression artificial intelligence framework, will be an important tool adding substantial insights related to field fracture systems that ultimately can lead to production optimization. Also, the development of specific packages (commercial or otherwise) that explicitly model both DSS and DAS, incorporating the impact of fracture opening and slippage on strain and strain rate, is still in its infancy. This paper is novel in this regard and opens up new avenues of research and applications of synthetic DAS/DSS in hydraulic fracturing processes.
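Two of the post-processing steps described above are easy to illustrate: DSS obtained from displacement along the fiber via the gauge-length concept (a finite difference in space), and DAS obtained as the time derivative of DSS. The displacement field below is a synthetic stand-in, not the output of a KGD or DDM model.

```python
import numpy as np

dz, dt, gauge_pts = 1.0, 0.001, 10           # m, s, samples per gauge length

z = np.arange(0, 200.0, dz)                  # positions along the fiber (m)
t = np.arange(0, 0.5, dt)                    # time samples (s)

# synthetic displacement: a bump centered on the fiber, growing linearly in time
u = np.outer(t, np.exp(-((z - 100.0) ** 2) / 50.0))

# DSS: finite difference of displacement over one gauge length
dss = (u[:, gauge_pts:] - u[:, :-gauge_pts]) / (gauge_pts * dz)

# DAS: time derivative (strain rate) of the DSS channels
das = np.diff(dss, axis=0) / dt
```

Because the synthetic displacement grows linearly in time, the resulting strain rate is constant across time steps, which is a quick sanity check on the differentiation order (space first for DSS, then time for DAS).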
2

Shin, Sungtae, Reza Langari, and Reza Tafreshi. "A Performance Comparison of EMG Classification Methods for Hand and Finger Motion". In ASME 2014 Dynamic Systems and Control Conference. American Society of Mechanical Engineers, 2014. http://dx.doi.org/10.1115/dscc2014-5993.

Full text
Abstract
For recognizing human motion intent, electromyogram (EMG) based pattern recognition approaches have been studied for many years. A number of methods for classifying EMG patterns have been introduced in the literature. For the purpose of selecting the best-performing method for practical application, this paper compares EMG pattern recognition methods in terms of motion type, feature extraction, dimension reduction, and classification algorithm. Also, for wider usability of this research, a hand and finger EMG motion data set that had been published online was used. Time-domain features, empirical mode decomposition, the discrete wavelet transform, and the wavelet packet transform were adopted for feature extraction. Three cases were compared for dimension reduction: no dimension reduction, principal component analysis (PCA), and linear discriminant analysis (LDA). Six classification algorithms were also compared: naïve Bayes, k-nearest neighbor, quadratic discriminant analysis, support vector machine, multi-layer perceptron, and extreme learning machine. The performance of each case was estimated from three perspectives: classification accuracy, train time, and test (prediction) time. From the experimental results, the time-domain feature set and LDA were required for the highest classification accuracy. Train time and test time depend on the classification method.