
Journal articles on the topic "Cluster analysis Pattern recognition systems. Machine learning"


Consult the top 50 journal articles for your research on the topic "Cluster analysis Pattern recognition systems. Machine learning".


You can also download the full text of each academic publication in PDF format and read its abstract online whenever it is available in the metadata.

Explore journal articles on a wide variety of disciplines and organize your bibliography correctly.

1

Zimovets, V. I., S. V. Shamatrin, D. E. Olada, and N. I. Kalashnykova. "Functional Diagnostic System for Multichannel Mine Lifting Machine Working in Factor Cluster Analysis Mode". Journal of Engineering Sciences 7, no. 1 (2020): E20–E27. http://dx.doi.org/10.21272/jes.2020.7(1).e4.

Full text
Abstract
The primary direction of the increase of reliability of the automated control systems of complex electromechanical machines is the application of intelligent information technologies of the analysis of diagnostic information directly in the operating mode. Therefore, the creation of the basics of information synthesis of a functional diagnosis system (FDS) based on machine learning and pattern recognition is a topical task. In this case, the synthesized FDS must be adaptive to arbitrary initial conditions of the technological process and practically invariant to the multidimensionality of the space of diagnostic features, an alphabet of recognition classes, which characterize the possible technical states of the units and devices of the machine. Besides, an essential feature of FDS is the ability to retrain by increasing the power of the alphabet recognition classes. In the article, information synthesis of FDS is performed within the framework of information-extreme intellectual data analysis technology, which is based on maximizing the information capacity of the system in the process of machine learning. The idea of factor cluster analysis was realized by forming an additional training matrix of unclassified vectors of features of a new recognition class obtained during the operation of the FDS directly in the operating mode. The proposed algorithm allows performing factor cluster analysis in the case of structured feature vectors of several recognition classes. In this case, additional training matrices of the corresponding recognition classes are formed by the agglomerative method of cluster analysis using the k-means procedure. The proposed method of factor cluster analysis is implemented on the example of information synthesis of the FDS of a multi-core mine lifting machine. Keywords: information-extreme intelligent technology, a system of functional diagnostics, multichannel mine lifting machine, machine learning, factor cluster analysis.
2

Vankayalapati, Revathi, Kalyani Balaso Ghutugade, Rekha Vannapuram, and Bejjanki Pooja Sree Prasanna. "K-Means Algorithm for Clustering of Learners Performance Levels Using Machine Learning Techniques". Revue d'Intelligence Artificielle 35, no. 1 (February 28, 2021): 99–104. http://dx.doi.org/10.18280/ria.350112.

Full text
Abstract
Data clustering is the process of grouping objects so that objects in the same group are more similar to one another than to those in other groups. In this paper, k-means clustering is used to assess the performance of students. Machine learning is applied in many areas, including education, pattern recognition, sports, and industrial applications, and its significance for the future of students in the educational system keeps increasing. Data collection in education is very useful, as data volumes in the education system grow each day; in higher education this is relatively new, but its significance grows together with the growing databases. There are several ways to assess the success of students, and k-means is one of the most effective. Hidden information in the database is extracted using data mining to improve the performance of students; the decision tree is another way to predict student success. In recent years, educational institutions have faced the challenge of ever-growing data and of using it to improve efficiency, so that better decisions can be made. Clustering is one of the most important methods used for the analysis of such data sets. This study uses cluster analysis to section students into various classes according to their features, and the unsupervised k-means algorithm is discussed. Educational data mining is used to study the knowledge available in the field of education in order to provide hidden, significant and useful information. The proposed model applies k-means clustering to analyse learner performance, so that the outcomes and prospects of students can be strengthened. The results show that the k-means clustering algorithm is useful for grouping students based on similar performance features.
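A minimal sketch of the kind of k-means grouping the abstract describes, using scikit-learn; the score matrix, the feature names, and the choice of three performance levels are illustrative assumptions, not the authors' data or code.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

# Hypothetical learner records: [attendance %, internal marks, exam marks]
scores = np.array([
    [95, 82, 88], [60, 45, 50], [75, 68, 70],
    [40, 35, 30], [88, 90, 92], [55, 52, 48],
])

# Standardise features so no single column dominates the distance metric
X = StandardScaler().fit_transform(scores)

# Group learners into three performance levels (an assumed cluster count)
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

for label, centre in enumerate(kmeans.cluster_centers_):
    members = np.where(kmeans.labels_ == label)[0]
    print(f"cluster {label}: students {members.tolist()}, centre (z-scores) {centre.round(2)}")
```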
3

Rudas, Imre J. "Intelligent Engineering Systems". Journal of Advanced Computational Intelligence and Intelligent Informatics 2, no. 3 (June 20, 1998): 69–71. http://dx.doi.org/10.20965/jaciii.1998.p0069.

Full text
Abstract
Building intelligent systems has been one of the great challenges since the early days of human culture. From the second half of the 18th century, two revolutionary changes played the key role in technical development, hence in creating engineering and intelligent engineering systems. The industrial revolution was made possible through technical advances, and muscle power was replaced by machine power. The information revolution of our time, in turn, can be characterized as the replacement of brain power by machine intelligence. The technique used to build engineering systems and replace muscle power can be termed "Hard Automation"1) and deals with industrial processes that are fixed and repetitive in nature. In hard automation, the system configuration and the operations are fixed and cannot be changed without considerable down-time and cost. It can be used, however, particularly in applications calling for fast, accurate operation, when manufacturing large batches of the same product. The "intelligent" area of automation is "Soft Automation," which involves the flexible, intelligent operation of an automated process. In flexible automation, the task is programmable and a work cell must be reconfigured quickly to accommodate a product change. It is particularly suitable for plant environments in which a variety of products is manufactured in small batches. Processes in flexible automation may have unexpected or previously unknown conditions, and would require a certain degree of "machine" intelligence to handle them. The term machine intelligence has been changing with time and is machine-specific, so intelligence in this context still remains more or less a mysterious phenomenon. Following Prof. Lotfi A. Zadeh,2) we consider a system intelligent if it has a high machine intelligence quotient (MIQ). As Prof. Zadeh stated, "MIQ is a measure of intelligence of man-made systems," and can be characterized by its well-defined dimensions, such as planning, decision making, problem solving, learning, reasoning, natural language understanding, speech recognition, handwriting recognition, pattern recognition, diagnostics, and execution of high-level instructions.

Engineering practice often involves complex systems having multiple-variable and multiple-parameter models, sometimes with nonlinear coupling. The conventional approaches for understanding and predicting the behavior of such systems based on analytical techniques can prove to be inadequate, even at the initial stages of setting up an appropriate mathematical model. The computational environment used in such an analytical approach is sometimes too categoric and inflexible to cope with the intricacy and complexity of real-world industrial systems. It turns out that, in dealing with such systems, one must face a high degree of uncertainty and tolerate great imprecision. Trying to increase precision can be very costly. In the face of the difficulties above, Prof. Zadeh proposes a different approach for machine intelligence. He separates Artificial Intelligence, based on hard computing techniques, from Computational Intelligence, based on soft computing techniques.
• Hard computing is oriented toward the analysis and design of physical processes and systems, and is characterized by precision, formality, and categorization. It is based on binary logic, crisp systems, numerical analysis, probability theory, differential equations, functional analysis, mathematical programming, approximation theory, and crisp software.
• Soft computing is oriented toward the analysis and design of intelligent systems. It is based on fuzzy logic, artificial neural networks, and probabilistic reasoning, including genetic algorithms, chaos theory, and parts of machine learning, and is characterized by approximation and dispositionality.
In hard computing, imprecision and uncertainty are undesirable properties. In soft computing, the tolerance for imprecision and uncertainty is exploited to achieve an acceptable solution at low cost, tractability, and a high MIQ. Prof. Zadeh argues that soft rather than hard computing should be viewed as the foundation of real machine intelligence. A center has been established - the Berkeley Initiative in Soft Computing (BISC) - and he directs it at the University of California, Berkeley; BISC devotes its activities to this concept.3) Soft computing, as he explains,2)
• is a consortium of methodologies providing a foundation for the conception and design of intelligent systems,
• is aimed at formalizing the remarkable human ability to make rational decisions in an uncertain, imprecise environment.
The guiding principle of soft computing, given by Prof. Zadeh,2) is: exploit the tolerance for imprecision, uncertainty, and partial truth to achieve tractability, robustness, low solution cost, and better rapport with reality. Fuzzy logic is mainly concerned with imprecision and approximate reasoning, neurocomputing mainly with learning and curve fitting, genetic computation mainly with searching and optimization, and probabilistic reasoning mainly with uncertainty and propagation of belief. The constituents of soft computing are complementary rather than competitive. Experience gained over the past decade indicates that it can be more effective to use them combined, rather than exclusively.

Based on this approach, machine intelligence, including artificial intelligence and computational intelligence (soft computing techniques), is one pillar of Intelligent Engineering Systems. Hundreds of new results in this area are published in journals and international conference proceedings. One such conference, organized in Budapest, Hungary, on September 15-17, 1997, was titled 'IEEE International Conference on Intelligent Engineering Systems 1997' (INES'97), sponsored by the IEEE Industrial Electronics Society, IEEE Hungary Section, Bánki Donát Polytechnic, Hungary, and the National Committee for Technological Development, Hungary, and held in technical cooperation with the IEEE Robotics & Automation Society. It had around 100 participants from 29 countries. This special issue features papers selected from those presented during the conference. It should be pointed out that these papers are revised and expanded versions of those presented.

The first paper discusses an intelligent control system of an automated guided vehicle used in container terminals. Container terminals, as the center of cargo transportation, play a key role in everyday cargo handling. Learning control has been applied to maintaining the vehicle's course and enabling it to stop at a designated location. Speed control uses conventional control. System performance was evaluated by simulation, and performance tests are slated for a test vehicle.

The second paper presents a real-time camera-based system designed for gaze tracking focused on human-computer communication. The objective was to equip computer systems with a tool that provides visual information about the user. The system detects the user's presence, then locates and tracks the face, nose and both eyes. Detection is enabled by combining image processing techniques and pattern recognition.

The third paper discusses the application of soft computing techniques to solve modeling and control problems in system engineering. After the design of classical PID and fuzzy PID controllers for nonlinear systems with an approximately known dynamic model, the neural control of a SCARA robot is considered. Fuzzy control is discussed for a special class of MIMO nonlinear systems, and the method of Wang is generalized for such systems.

The next paper describes fuzzy and neural network algorithms for word frequency prediction in document filtering. The two techniques presented are compared, and an alternative neural network algorithm is discussed.

The fifth paper highlights the theory of common-sense knowledge in representation and reasoning. A connectionist model is proposed for common-sense knowledge representation and reasoning, and experimental results using this method are presented.

The next paper introduces an expert consulting system that employs software agents to manage distributed knowledge sources. These individual software agents solve users' problems either by themselves or through mutual cooperation.

The last paper presents a methodology for creating and applying a generic manufacturing process model for mechanical parts. Based on the product model and other up-to-date approaches, the proposed model involves all possible manufacturing process variants for a cluster of manufacturing tasks. The application involves a four-level model structure and a Petri net representation of manufacturing process entities. Creation and evaluation of model entities and representation of the knowledge built into the shape and manufacturing process models are emphasised. The proposed process model is applied in manufacturing process planning and production scheduling.

References: 1) C. W. De Silva, "Automation Intelligence," Engineering Application of Artificial Intelligence, 7-5, 471-477, (1994). 2) L. A. Zadeh, "Fuzzy Logic, Neural Networks and Soft Computing," NATO Advanced Studies Institute on Soft Computing and Its Application, Antalya, Turkey, (1996). 3) L. A. Zadeh, "Berkeley Initiative in Soft Computing," IEEE Industrial Electronics Society Newsletter, 41-3, 8-10, (1994).
4

KODRATOFF, Y., and S. MOSCATELLI. "MACHINE LEARNING FOR OBJECT RECOGNITION AND SCENE ANALYSIS". International Journal of Pattern Recognition and Artificial Intelligence 08, no. 01 (February 1994): 259–304. http://dx.doi.org/10.1142/s0218001494000139.

Full text
Abstract
Learning is a critical research field for autonomous computer vision systems. It can bring solutions to the knowledge acquisition bottleneck of image understanding systems. Recent developments of machine learning for computer vision are reported in this paper. We describe several different approaches for learning at different levels of the image understanding process, including learning 2-D shape models, learning strategic knowledge for optimizing model matching, learning for adaptive target recognition systems, knowledge acquisition of constraint rules for labelling and automatic parameter optimization for vision systems. Each approach will be commented on and its strong and weak points will be underlined. In conclusion we will suggest what could be the “ideal” learning system for vision.
5

Anam, Khairul, and Adel Al-Jumaily. "Optimized Kernel Extreme Learning Machine for Myoelectric Pattern Recognition". International Journal of Electrical and Computer Engineering (IJECE) 8, no. 1 (February 1, 2018): 483. http://dx.doi.org/10.11591/ijece.v8i1.pp483-496.

Full text
Abstract
Myoelectric pattern recognition (MPR) is used to detect user’s intention to achieve a smooth interaction between human and machine. The performance of MPR is influenced by the features extracted and the classifier employed. A kernel extreme learning machine especially radial basis function extreme learning machine (RBF-ELM) has emerged as one of the potential classifiers for MPR. However, RBF-ELM should be optimized to work efficiently. This paper proposed an optimization of RBF-ELM parameters using hybridization of particle swarm optimization (PSO) and a wavelet function. These proposed systems are employed to classify finger movements on the amputees and able-bodied subjects using electromyography signals. The experimental results show that the accuracy of the optimized RBF-ELM is 95.71% and 94.27% in the healthy subjects and the amputees, respectively. Meanwhile, the optimization using PSO only attained the average accuracy of 95.53 %, and 92.55 %, on the healthy subjects and the amputees, respectively. The experimental results also show that SW-RBF-ELM achieved the accuracy that is better than other well-known classifiers such as support vector machine (SVM), linear discriminant analysis (LDA) and k-nearest neighbor (kNN).
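For readers unfamiliar with kernel extreme learning machines, below is a minimal NumPy sketch of an RBF-kernel ELM classifier in its usual closed form (a regularised kernel system for the output weights); the regularisation constant, kernel width, and toy EMG-like data are assumptions, and the paper's PSO/wavelet parameter optimisation is not reproduced here.

```python
import numpy as np

def rbf_kernel(A, B, gamma=0.5):
    # Pairwise squared Euclidean distances -> Gaussian (RBF) kernel matrix
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def train_kelm(X, y, C=100.0, gamma=0.5):
    # One-hot targets, then solve (I/C + K) beta = T  (standard kernel ELM form)
    classes = np.unique(y)
    T = (y[:, None] == classes[None, :]).astype(float)
    K = rbf_kernel(X, X, gamma)
    beta = np.linalg.solve(np.eye(len(X)) / C + K, T)
    return classes, beta

def predict_kelm(model, X_train, X_new, gamma=0.5):
    classes, beta = model
    scores = rbf_kernel(X_new, X_train, gamma) @ beta
    return classes[scores.argmax(axis=1)]

# Toy EMG-like feature vectors for two finger-movement classes (assumed data)
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (20, 4)), rng.normal(2, 1, (20, 4))])
y = np.array([0] * 20 + [1] * 20)
model = train_kelm(X, y)
print(predict_kelm(model, X, X[:5]))
```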
6

Nayyar, Anand, Pijush Kanti Dutta Pramankit, and Rajni Mohana. "Introduction to the Special Issue on Evolving IoT and Cyber-Physical Systems: Advancements, Applications, and Solutions". Scalable Computing: Practice and Experience 21, no. 3 (August 1, 2020): 347–48. http://dx.doi.org/10.12694/scpe.v21i3.1568.

Full text
Abstract
Internet of Things (IoT) is regarded as a next-generation wave of Information Technology (IT) after the widespread emergence of the Internet and mobile communication technologies. IoT supports information exchange and networked interaction of appliances, vehicles and other objects, making sensing and actuation possible in a low-cost and smart manner. On the other hand, cyber-physical systems (CPS) are described as the engineered systems which are built upon the tight integration of the cyber entities (e.g., computation, communication, and control) and the physical things (natural and man-made systems governed by the laws of physics). The IoT and CPS are not isolated technologies. Rather it can be said that IoT is the base or enabling technology for CPS and CPS is considered as the grownup development of IoT, completing the IoT notion and vision. Both are merged into closed-loop, providing mechanisms for conceptualizing, and realizing all aspects of the networked composed systems that are monitored and controlled by computing algorithms and are tightly coupled among users and the Internet. That is, the hardware and the software entities are intertwined, and they typically function on different time and location-based scales. In fact, the linking between the cyber and the physical world is enabled by IoT (through sensors and actuators). CPS that includes traditional embedded and control systems are supposed to be transformed by the evolving and innovative methodologies and engineering of IoT. Several applications areas of IoT and CPS are smart building, smart transport, automated vehicles, smart cities, smart grid, smart manufacturing, smart agriculture, smart healthcare, smart supply chain and logistics, etc. Though CPS and IoT have significant overlaps, they differ in terms of engineering aspects. Engineering IoT systems revolves around the uniquely identifiable and internet-connected devices and embedded systems; whereas engineering CPS requires a strong emphasis on the relationship between computation aspects (complex software) and the physical entities (hardware). Engineering CPS is challenging because there is no defined and fixed boundary and relationship between the cyber and physical worlds. In CPS, diverse constituent parts are composed and collaborated together to create unified systems with global behaviour. These systems need to be ensured in terms of dependability, safety, security, efficiency, and adherence to real‐time constraints. Hence, designing CPS requires knowledge of multidisciplinary areas such as sensing technologies, distributed systems, pervasive and ubiquitous computing, real-time computing, computer networking, control theory, signal processing, embedded systems, etc. CPS, along with the continuous evolving IoT, has posed several challenges. For example, the enormous amount of data collected from the physical things makes it difficult for Big Data management and analytics that includes data normalization, data aggregation, data mining, pattern extraction and information visualization. Similarly, the future IoT and CPS need standardized abstraction and architecture that will allow modular designing and engineering of IoT and CPS in global and synergetic applications. Another challenging concern of IoT and CPS is the security and reliability of the components and systems. 
Although IoT and CPS have attracted the attention of the research communities and several ideas and solutions are proposed, there are still huge possibilities for innovative propositions to make IoT and CPS vision successful. The major challenges and research scopes include system design and implementation, computing and communication, system architecture and integration, application-based implementations, fault tolerance, designing efficient algorithms and protocols, availability and reliability, security and privacy, energy-efficiency and sustainability, etc. It is our great privilege to present Volume 21, Issue 3 of Scalable Computing: Practice and Experience. We had received 30 research papers and out of which 14 papers are selected for publication. The objective of this special issue is to explore and report recent advances and disseminate state-of-the-art research related to IoT, CPS and the enabling and associated technologies. The special issue will present new dimensions of research to researchers and industry professionals with regard to IoT and CPS. Vivek Kumar Prasad and Madhuri D Bhavsar in the paper titled "Monitoring and Prediction of SLA for IoT based Cloud described the mechanisms for monitoring by using the concept of reinforcement learning and prediction of the cloud resources, which forms the critical parts of cloud expertise in support of controlling and evolution of the IT resources and has been implemented using LSTM. The proper utilization of the resources will generate revenues to the provider and also increases the trust factor of the provider of cloud services. For experimental analysis, four parameters have been used i.e. CPU utilization, disk read/write throughput and memory utilization. Kasture et al. in the paper titled "Comparative Study of Speaker Recognition Techniques in IoT Devices for Text Independent Negative Recognition" compared the performance of features which are used in state of art speaker recognition models and analyse variants of Mel frequency cepstrum coefficients (MFCC) predominantly used in feature extraction which can be further incorporated and used in various smart devices. Mahesh Kumar Singh and Om Prakash Rishi in the paper titled "Event Driven Recommendation System for E-Commerce using Knowledge based Collaborative Filtering Technique" proposed a novel system that uses a knowledge base generated from knowledge graph to identify the domain knowledge of users, items, and relationships among these, knowledge graph is a labelled multidimensional directed graph that represents the relationship among the users and the items. The proposed approach uses about 100 percent of users' participation in the form of activities during navigation of the web site. Thus, the system expects under the users' interest that is beneficial for both seller and buyer. The proposed system is compared with baseline methods in area of recommendation system using three parameters: precision, recall and NDGA through online and offline evaluation studies with user data and it is observed that proposed system is better as compared to other baseline systems. Benbrahim et al. 
in the paper titled "Deep Convolutional Neural Network with TensorFlow and Keras to Classify Skin Cancer" proposed a novel classification model to classify skin tumours in images using a Deep Learning methodology; the proposed system was tested on the HAM10000 dataset comprising 10,015 dermatoscopic images, and the results showed that the proposed system reaches an accuracy of 94.06% on the validation set and 93.93% on the test set. Devi B et al. in the paper titled "Deadlock Free Resource Management Technique for IoT-Based Post Disaster Recovery Systems" proposed a new class of techniques that do not perform stringent testing before allocating the resources but still ensure that the system is deadlock-free and the overhead is minimal. The proposed technique suggests reserving a portion of the resources to ensure no deadlock would occur. The correctness of the technique is proved in the form of theorems. The average turnaround time is approximately 18% lower for the proposed technique than for Banker's algorithm, with an optimal overhead of O(m). Deep et al. in the paper titled "Access Management of User and Cyber-Physical Device in DBAAS According to Indian IT Laws Using Blockchain" proposed a novel blockchain solution to track the activities of employees managing the cloud. Employee authentication and authorization are managed through the blockchain server, and user authentication related data is stored in the blockchain. The proposed work assists cloud companies in having better control over their employees' activities, thus helping to prevent insider attacks on users and cyber-physical devices. Sumit Kumar and Jaspreet Singh in the paper titled "Internet of Vehicles (IoV) over VANETS: Smart and Secure Communication using IoT" presented a detailed description of the Internet of Vehicles (IoV) with current applications, architectures, communication technologies, routing protocols and different issues. The researchers also elaborated on research challenges and the trade-off between security and privacy in the area of IoV. Deore et al. in the paper titled "A New Approach for Navigation and Traffic Signs Indication Using Map Integrated Augmented Reality for Self-Driving Cars" proposed a new approach to supplement the perception technology used in self-driving cars. The proposed approach uses Augmented Reality to create and augment artificial objects of navigational signs and traffic signals based on the vehicle's location. This approach helps navigate the vehicle even if the road infrastructure does not have very good sign indications and markings. The approach was tested locally by creating a local navigational system and a smartphone-based augmented reality app, and it performed better than the conventional method as the objects were clearer in the frame, which made it easier for the object detector to detect them. Bhardwaj et al. in the paper titled "A Framework to Systematically Analyse the Trustworthiness of Nodes for Securing IoV Interactions" reviewed the literature on IoV and trust and proposed a Hybrid Trust model that separates the malicious and trusted nodes to secure the interactions of vehicles in IoV. To test the model, simulations were conducted for varied threshold values, and the results showed that the PDR of a trusted node is 0.63, which is higher than the PDR of a malicious node at 0.15. On the basis of PDR, the number of available hops and trust dynamics, the malicious nodes are identified and discarded.

Saniya Zahoor and Roohie Naaz Mir in the paper titled "A Parallelization Based Data Management Framework for Pervasive IoT Applications" highlighted the recent studies and related information in data management for pervasive IoT applications having limited resources. The paper also proposes a parallelization-based data management framework for resource-constrained pervasive applications of IoT. The comparison of the proposed framework is done with the sequential approach through simulations and empirical data analysis. The results show an improvement in energy, processing, and storage requirements for the processing of data on the IoT device in the proposed framework as compared to the sequential approach. Patel et al. in the paper titled "Performance Analysis of Video ON-Demand and Live Video Streaming Using Cloud Based Services" presented a review of video analysis over the LVS & VoDS video application. The researchers compared different messaging brokers which help to deliver each frame in a distributed pipeline, analyzing the impact of two message brokers on video analysis to achieve LVS & VoDS using AWS Elemental services. In addition, the researchers also analysed the Kafka configuration parameters for reliability in full-service mode. Saniya Zahoor and Roohie Naaz Mir in the paper titled "Design and Modeling of Resource-Constrained IoT Based Body Area Networks" presented the design and modeling of a resource-constrained BAN system and also discussed the various scenarios of BAN in the context of resource constraints. The researchers also proposed an Advanced Edge Clustering (AEC) approach to manage the resources such as energy, storage, and processing of BAN devices while performing real-time data capture of critical health parameters and detection of abnormal patterns. The comparison of the AEC approach is done with the Stable Election Protocol (SEP) through simulations and empirical data analysis. The results show an improvement in energy, processing time and storage requirements for the processing of data on BAN devices in AEC as compared to SEP. Neelam Saleem Khan and Mohammad Ahsan Chishti in the paper titled "Security Challenges in Fog and IoT, Blockchain Technology and Cell Tree Solutions: A Review" outlined major authentication issues in IoT, mapped their existing solutions and further tabulated Fog and IoT security loopholes. Furthermore, this paper presents Blockchain, a decentralized distributed technology, as one of the solutions for authentication issues in IoT. In addition, the researchers discussed the strength of Blockchain technology, work done in this field, its adoption in the fight against COVID-19, and tabulated various challenges in Blockchain technology. The researchers also proposed the Cell Tree architecture as another solution to address some of the security issues in IoT, outlined its advantages over Blockchain technology, and tabulated some future directions to stir further attempts in this area. Bhadwal et al. in the paper titled "A Machine Translation System from Hindi to Sanskrit Language Using Rule Based Approach" proposed a rule-based machine translation system to bridge the language barrier between Hindi and Sanskrit by converting any text in Hindi to Sanskrit. The results are produced in the form of two confusion matrices wherein a total of 50 random sentences and 100 tokens (Hindi words or phrases) were taken for system evaluation.

The semantic evaluation of the 100 tokens produces an accuracy of 94%, while the pragmatic analysis of the 50 sentences produces an accuracy of around 86%. Hence, the proposed system can be used to understand the whole translation process and can further be employed as a tool for learning as well as teaching. Further, this application can be embedded in local communication based assisting Internet of Things (IoT) devices like Alexa or Google Assistant. Anshu Kumar Dwivedi and A. K. Sharma in the paper titled "NEEF: A Novel Energy Efficient Fuzzy Logic Based Clustering Protocol for Wireless Sensor Network" proposed a deterministic, novel, energy-efficient fuzzy logic-based clustering protocol (NEEF) which considers primary and secondary factors in the fuzzy logic system while selecting cluster heads. After selection of cluster heads, non-cluster-head nodes use fuzzy logic for prudent selection of their cluster head for cluster formation. NEEF is simulated and compared with two recent state-of-the-art protocols, namely SCHFTL and DFCR, under two scenarios. Simulation results unveil better performance by balancing the load and improvement in terms of stability period, packets forwarded to the base station, average energy, and extended lifetime.
7

Зимовець, Вікторія Ігорівна, Олександр Сергійович Приходченко, and Микита Ігорович Мироненко. "ІНФОРМАЦІЙНО-ЕКСТРЕМАЛЬНИЙ КЛАСТЕР-АНАЛІЗ ВХІДНИХ ДАНИХ ПРИ ФУНКЦІОНАЛЬНОМУ ДІАГНОСТУВАННІ" [Information-extreme cluster analysis of input data in functional diagnosis]. RADIOELECTRONIC AND COMPUTER SYSTEMS, no. 4 (December 25, 2019): 105–15. http://dx.doi.org/10.32620/reks.2019.4.12.

Full text
Abstract
The study aims to increase the functional efficiency of machine learning of the functional diagnosis system of a multi-rope shaft hoist through cluster analysis of diagnostic features. To achieve the goal, it was necessary to solve the following tasks: formalize the formulation of the task of information synthesis of a functional diagnosis system capable of learning, which operates in the cluster-analysis mode of diagnostic signs; propose a categorical model and, on its basis, develop an algorithm for information-extreme cluster analysis of diagnostic signs in the process of information-extreme machine learning of a functional diagnostic system; carry out defuzzification of input fuzzy data by optimizing the geometric parameters of hyperspherical containers of recognition classes that characterize the possible technical conditions of the diagnostic object; and develop an algorithm and implement it on the example of information synthesis of the functional diagnostics system of a multi-rope mine hoisting machine. The object of the study is the processes of information synthesis of a functional diagnostic system capable of learning, integrated into the automated control system of a multi-rope mine hoisting machine. The subject of the study is categorical models and an information-extreme machine learning algorithm of a functional diagnostic system that operates in the cluster-analysis mode of diagnostic signs and constructs decision rules. The research methods are based on the ideas and methods of information-extreme intellectual data analysis technology, a theoretical-informational approach to assessing the functional effectiveness of machine learning, and the geometric approach of pattern recognition theory. As a result, the following results were obtained: a categorical model was proposed, and on its basis an algorithm for information-extreme machine learning of the functional diagnostics system for a multi-rope mine hoist was developed and implemented, which allows the system to automatically generate an input classified fuzzy training matrix and significantly reduces time and material costs when creating the input mathematical description. The obtained result was achieved by cluster analysis of structured vectors of diagnostic signs obtained from archival data for three recognition classes using the k-means procedure. As a criterion for optimizing machine learning parameters, we considered a modified Kullback measure in the form of a functional on the exact characteristics of diagnostic solutions and distance criteria for the proximity of recognition classes. Based on the optimal geometric parameters of the containers of recognition classes obtained during machine learning, decision rules were constructed that allowed us to classify the vectors of diagnostic features of recognition classes with a rather high total probability of making correct diagnostic decisions. Conclusions. The scientific novelty of the results obtained consists in the development of a new method for the information synthesis of the functional diagnostics system of a multi-rope mine hoisting machine, which operates in the cluster-analysis mode, which made it possible to automatically form an input classified fuzzy training matrix with its subsequent defuzzification in the process of information-extreme machine learning of the system.
8

Wolff, J. Gerard. "The Potential of the SP System in Machine Learning and Data Analysis for Image Processing". Big Data and Cognitive Computing 5, no. 1 (February 23, 2021): 7. http://dx.doi.org/10.3390/bdcc5010007.

Full text
Abstract
This paper aims to describe how pattern recognition and scene analysis may with advantage be viewed from the perspective of the SP system (meaning the SP theory of intelligence and its realisation in the SP computer model (SPCM), both described in an appendix), and the strengths and potential of the system in those areas. In keeping with evidence for the importance of information compression (IC) in human learning, perception, and cognition, IC is central in the structure and workings of the SPCM. Most of that IC is achieved via the powerful concept of SP-multiple-alignment, which is largely responsible for the AI-related versatility of the system. With examples from the SPCM, the paper describes: how syntactic parsing and pattern recognition may be achieved, with corresponding potential for visual parsing and scene analysis; how those processes are robust in the face of errors in input data; how in keeping with what people do, the SP system can “see” things in its data that are not objectively present; the system can recognise things at multiple levels of abstraction and via part-whole hierarchies, and via an integration of the two; the system also has potential for the creation of a 3D construct from pictures of a 3D object from different viewpoints, and for the recognition of 3D entities.
9

Samiappan, Dhanalakshmi, S. Latha, T. Rama Rao, Deepak Verma, and CSA Sriharsha. "Enhancing Machine Learning Aptitude Using Significant Cluster Identification for Augmented Image Refining". International Journal of Pattern Recognition and Artificial Intelligence 34, no. 09 (December 12, 2019): 2051009. http://dx.doi.org/10.1142/s021800142051009x.

Full text
Abstract
Enhancing the image to remove noise, preserving the useful features and edges are the most important tasks in image analysis. In this paper, Significant Cluster Identification for Maximum Edge Preservation (SCI-MEP), which works in parallel with clustering algorithms and improved efficiency of the machine learning aptitude, is proposed. Affinity propagation (AP) is a base method to obtain clusters from a learnt dictionary, with an adaptive window selection, which are then refined using SCI-MEP to preserve the semantic components of the image. Since only the significant clusters are worked upon, the computational time drastically reduces. The flexibility of SCI-MEP allows it to be integrated with any clustering algorithm to improve its efficiency. The method is tested and verified to remove Gaussian noise, rain noise and speckle noise from images. Our results have shown that SCI-MEP considerably optimizes the existing algorithms in terms of performance evaluation metrics.
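SCI-MEP itself is not spelled out here, but the affinity propagation step it builds on can be sketched with scikit-learn: cluster a learnt set of image patches and keep only the largest ("significant") clusters for further refinement. The synthetic patches, damping value, and size threshold below are assumptions for illustration.

```python
import numpy as np
from sklearn.cluster import AffinityPropagation

rng = np.random.default_rng(1)
# Hypothetical 5x5 patches drawn around four dictionary "atoms"
patches = np.vstack([rng.normal(m, 0.5, size=(50, 25)) for m in (0.0, 1.0, 3.0, 6.0)])

ap = AffinityPropagation(damping=0.9, max_iter=500, random_state=0).fit(patches)
labels = ap.labels_

# Keep only "significant" clusters, e.g. those holding at least 5% of the patches
sizes = np.bincount(labels)
significant = np.flatnonzero(sizes >= 0.05 * len(patches))
print(f"{sizes.size} clusters found, {significant.size} deemed significant")

# Exemplars of the significant clusters would then drive the edge-preserving refinement
exemplars = ap.cluster_centers_[significant]
```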
10

Indra, Zul, Azhari Setiawan, Yessi Jusman, and Arisman Adnan. "Machine learning deployment for arms dynamics pattern recognition in Southeast Asia region". Indonesian Journal of Electrical Engineering and Computer Science 23, no. 3 (September 1, 2021): 1654. http://dx.doi.org/10.11591/ijeecs.v23.i3.pp1654-1662.

Full text
Abstract
Finding the most significant determinant variables of arms dynamics is highly important for strategic policy formulation and power mapping by academics and policy makers. Machine learning is still new and under-discussed in the study of politics and international relations; the existing literature has focused mostly on advanced quantitative methods applying various types of regression analysis. This study analyzed the arms dynamics of Southeast Asian countries along with some of their strategic partners, such as the United States, China, Russia, South Korea, and Japan, using the decision tree machine learning algorithm. The study conducted a machine learning analysis on 55 variable items classified into 8 classes of variables, namely defense budget, arms trade exports, arms trade imports, political posture, economic posture, security posture and defense priority, national capability, and direct contact. The results suggest three findings: (1) a state that perceives the maritime domain as a strategic driver and force will seek more power for its maritime defense posture, which is translated into its defense budget; (2) large countries tend to be arms exporter countries; and (3) a state's energy dependence often leads to a higher volume of arms transfers between countries.
11

Puskarczyk, Edyta. "Artificial neural networks as a tool for pattern recognition and electrofacies analysis in Polish palaeozoic shale gas formations". Acta Geophysica 67, no. 6 (September 20, 2019): 1991–2003. http://dx.doi.org/10.1007/s11600-019-00359-2.

Full text
Abstract
Abstract Unconventional oil and gas reservoirs from the lower Palaeozoic basin at the western slope of the East European Craton were taken into account in this study. The aim was to supply and improve standard well logs interpretation based on machine learning methods, especially ANNs. ANNs were used on standard well logging data, e.g. P-wave velocity, density, resistivity, neutron porosity, radioactivity and photoelectric factor. During the calculations, information about lithology or stratigraphy was not taken into account. We apply different methods of classification: cluster analysis, support vector machine and artificial neural network—Kohonen algorithm. We compare the results and analyse obtained electrofacies. Machine learning method–support vector machine SVM was used for classification. For the same data set, SVM algorithm application results were compared to the results of the Kohonen algorithm. The results were very similar. We obtained very good agreement of results. Kohonen algorithm (ANN) was used for pattern recognition and identification of electrofacies. Kohonen algorithm was also used for geological interpretation of well logs data. As a result of Kohonen algorithm application, groups corresponding to the gas-bearing intervals were found. Analysis showed diversification between gas-bearing formations and surrounding beds. It is also shown that internal diversification in gas-saturated beds is present. It is concluded that ANN appeared to be a useful and quick tool for preliminary classification of members and gas-saturated identification.
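A minimal sketch of the Kohonen self-organising map step on standardised well-log curves, using the third-party minisom package; the log matrix, map size, and training length are assumptions rather than the author's workflow.

```python
import numpy as np
from minisom import MiniSom
from sklearn.preprocessing import StandardScaler

# Hypothetical well-log matrix: columns = [Vp, density, resistivity, neutron porosity, GR, PE]
rng = np.random.default_rng(2)
logs = rng.normal(size=(500, 6))

X = StandardScaler().fit_transform(logs)

som = MiniSom(6, 6, X.shape[1], sigma=1.0, learning_rate=0.5, random_seed=0)
som.random_weights_init(X)
som.train_random(X, 5000)

# Each depth sample is assigned to its best-matching unit; map nodes act as electrofacies
facies = np.array([som.winner(x) for x in X])
print("first five samples mapped to SOM nodes:", facies[:5].tolist())
```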
12

Ye, Jun. "Clustering Methods Using Distance-Based Similarity Measures of Single-Valued Neutrosophic Sets". Journal of Intelligent Systems 23, no. 4 (December 1, 2014): 379–89. http://dx.doi.org/10.1515/jisys-2013-0091.

Full text
Abstract
Clustering plays an important role in data mining, pattern recognition, and machine learning. Single-valued neutrosophic sets (SVNSs) are useful means to describe and handle indeterminate and inconsistent information that fuzzy sets and intuitionistic fuzzy sets cannot describe and deal with. To cluster the data represented by single-valued neutrosophic information, this article proposes single-valued neutrosophic clustering methods based on similarity measures between SVNSs. First, we define a generalized distance measure between SVNSs and propose two distance-based similarity measures of SVNSs. Then, we present a clustering algorithm based on the similarity measures of SVNSs to cluster single-valued neutrosophic data. Finally, an illustrative example is given to demonstrate the application and effectiveness of the developed clustering methods.
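As an illustration of the kind of distance-based similarity the abstract refers to, the sketch below implements one commonly used generalized distance between single-valued neutrosophic sets (averaging the p-th powers of the truth, indeterminacy, and falsity differences) and the corresponding similarity; the paper's exact measures may differ, and the two example SVNSs are invented.

```python
import numpy as np

def svns_distance(A, B, p=2):
    """Generalized distance between two SVNSs, given as (n, 3) arrays of
    (truth, indeterminacy, falsity) degrees for each element of the universe."""
    A, B = np.asarray(A, float), np.asarray(B, float)
    n = A.shape[0]
    return float(((np.abs(A - B) ** p).sum() / (3 * n)) ** (1 / p))

def svns_similarity(A, B, p=2):
    # Distance-based similarity: sets that are closer are more similar
    return 1.0 - svns_distance(A, B, p)

# Two invented SVNSs over the same three-element universe
A = [(0.7, 0.2, 0.1), (0.5, 0.4, 0.3), (0.9, 0.1, 0.1)]
B = [(0.6, 0.3, 0.2), (0.5, 0.5, 0.2), (0.8, 0.2, 0.1)]
print(round(svns_similarity(A, B), 3))
```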
13

Zhang, Jianhua, Jianrong Li, and Rubin Wang. "Instantaneous mental workload assessment using time–frequency analysis and semi-supervised learning". Cognitive Neurodynamics 14, no. 5 (May 12, 2020): 619–42. http://dx.doi.org/10.1007/s11571-020-09589-3.

Full text
Abstract
Abstract The real-time assessment of mental workload (MWL) is critical for development of intelligent human–machine cooperative systems in various safety–critical applications. Although data-driven machine learning (ML) approach has shown promise in MWL recognition, there is still difficulty in acquiring a sufficient number of labeled data to train the ML models. This paper proposes a semi-supervised extreme learning machine (SS-ELM) algorithm for MWL pattern classification requiring only a small number of labeled data. The measured data analysis results show that the proposed SS-ELM paradigm can effectively improve the accuracy and efficiency of MWL classification and thus provide a competitive ML approach to utilizing a large number of unlabeled data which are available in many real-world applications.
14

Hossain, Kabir, Frederik Villebro, and Søren Forchhammer. "UAV image analysis for leakage detection in district heating systems using machine learning". Pattern Recognition Letters 140 (December 2020): 158–64. http://dx.doi.org/10.1016/j.patrec.2020.05.024.

Full text
15

Xie, Hui, Li Wei, Dong Liu, and Luda Wang. "Task Scheduling in Heterogeneous Computing Systems Based on Machine Learning Approach". International Journal of Pattern Recognition and Artificial Intelligence 34, no. 12 (May 11, 2020): 2051012. http://dx.doi.org/10.1142/s021800142051012x.

Full text
Abstract
The task scheduling problem of heterogeneous computing systems (HCS), which enjoy increasing popularity, has nowadays become a research hotspot in this domain. The task scheduling problem of HCS, which can be described essentially as assigning tasks to the proper processor for execution, has been shown to be NP-complete. However, existing scheduling algorithms suffer from the inherent limitation of lacking a global view. Here, we report a novel task scheduling algorithm based on Multi-Logistic Regression theory (called MLRS) for heterogeneous computing environments. First, we collected the best scheduling plans as the historical training set, and then a scheduling model was established by which we could predict the following scheduling action. Analysis of the experimental results shows that the proposed algorithm has a better optimization effect and robustness.
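The multi-logistic-regression idea can be pictured as a multinomial classifier that maps task features to the processor chosen in historical "best" schedules, as in the sketch below; the feature layout and synthetic history are assumptions, not the MLRS algorithm itself.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)

# Hypothetical history: task features = [task length, memory need, queue(P0), queue(P1), queue(P2)]
X = rng.uniform(0, 1, size=(300, 5))
# Assumed "best" processor from past schedules: here, the least-loaded queue
y = X[:, 2:5].argmin(axis=1)

# Multinomial logistic regression learns the mapping from task features to processor choice
model = LogisticRegression(max_iter=1000).fit(X, y)

# Predict the processor for a new task instead of re-running a full scheduling search
new_task = np.array([[0.4, 0.7, 0.2, 0.9, 0.5]])
print("assign task to processor", int(model.predict(new_task)[0]))
```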
16

Rahangdale, Ashwini, and Shital Raut. "Clustering-Based Transductive Semi-Supervised Learning for Learning-to-Rank". International Journal of Pattern Recognition and Artificial Intelligence 33, no. 12 (November 2019): 1951007. http://dx.doi.org/10.1142/s0218001419510078.

Full text
Abstract
Learning-to-rank (LTR) is a very hot topic of research for information retrieval (IR). LTR framework usually learns the ranking function using available training data that are very cost-effective, time-consuming and biased. When sufficient amount of training data is not available, semi-supervised learning is one of the machine learning paradigms that can be applied to get pseudo label from unlabeled data. Cluster and label is a basic approach for semi-supervised learning to identify the high-density region in data space which is mainly used to support the supervised learning. However, clustering with conventional method may lead to prediction performance which is worse than supervised learning algorithms for application of LTR. Thus, we propose rank preserving clustering (RPC) with PLocalSearch and get pseudo label for unlabeled data. We present semi-supervised learning that adopts clustering-based transductive method and combine it with nonmeasure specific listwise approach to learn the LTR model. Moreover, each cluster follows the multi-task learning to avoid optimization of multiple loss functions. It reduces the training complexity of adopted listwise approach from an exponential order to a polynomial order. Empirical analysis on the standard datasets (LETOR) shows that the proposed model gives better results as compared to other state-of-the-arts.
17

J, Dr Chandrika. "A Novel Machine Learning based Hybrid Model for Webshell Detection". International Journal for Research in Applied Science and Engineering Technology 9, no. VI (June 30, 2021): 3128–35. http://dx.doi.org/10.22214/ijraset.2021.35644.

Full text
Abstract
Webshell attacks have become a greater cause of concern as major activities shift online. Today, different forms of webshell attacks and attack-inducing tools are available that hamper the security of computer systems. These attacks strongly escalate the need for machine learning based detection. In this work, we obtain behavioral patterns through static or dynamic analysis and afterwards apply different ML techniques to identify whether a sample is a webshell or not. Behavior-based detection methods are discussed that take advantage of ML algorithms to frame a behavior-based webshell recognition and classification model.
18

Thomas, Philip S., Bruno Castro da Silva, Andrew G. Barto, Stephen Giguere, Yuriy Brun, and Emma Brunskill. "Preventing undesirable behavior of intelligent machines". Science 366, no. 6468 (November 21, 2019): 999–1004. http://dx.doi.org/10.1126/science.aag3311.

Full text
Abstract
Intelligent machines using machine learning algorithms are ubiquitous, ranging from simple data analysis and pattern recognition tools to complex systems that achieve superhuman performance on various tasks. Ensuring that they do not exhibit undesirable behavior—that they do not, for example, cause harm to humans—is therefore a pressing problem. We propose a general and flexible framework for designing machine learning algorithms. This framework simplifies the problem of specifying and regulating undesirable behavior. To show the viability of this framework, we used it to create machine learning algorithms that precluded the dangerous behavior caused by standard machine learning algorithms in our experiments. Our framework for designing machine learning algorithms simplifies the safe and responsible application of machine learning.
19

Heumer, Guido, Heni Ben Amor, and Bernhard Jung. "Grasp Recognition for Uncalibrated Data Gloves: A Machine Learning Approach". Presence: Teleoperators and Virtual Environments 17, no. 2 (April 1, 2008): 121–42. http://dx.doi.org/10.1162/pres.17.2.121.

Full text
Abstract
This paper presents a comparison of various machine learning methods applied to the problem of recognizing grasp types involved in object manipulations performed with a data glove. Conventional wisdom holds that data gloves need calibration in order to obtain accurate results. However, calibration is a time-consuming process, inherently user-specific, and its results are often not perfect. In contrast, the present study aims at evaluating recognition methods that do not require prior calibration of the data glove. Instead, raw sensor readings are used as input features that are directly mapped to different categories of hand shapes. An experiment was carried out in which test persons wearing a data glove had to grasp physical objects of different shapes corresponding to the various grasp types of the Schlesinger taxonomy. The collected data was comprehensively analyzed using numerous classification techniques provided in an open-source machine learning toolbox. Evaluated machine learning methods are composed of (a) 38 classifiers including different types of function learners, decision trees, rule-based learners, Bayes nets, and lazy learners; (b) data preprocessing using principal component analysis (PCA) with varying degrees of dimensionality reduction; and (c) five meta-learning algorithms under various configurations where selection of suitable base classifier combinations was informed by the results of the foregoing classifier evaluation. Classification performance was analyzed in six different settings, representing various application scenarios with differing generalization demands. The results of this work are twofold: (1) We show that a reasonably good to highly reliable recognition of grasp types can be achieved—depending on whether or not the glove user is among those training the classifier—even with uncalibrated data gloves. (2) We identify the best performing classification methods for the recognition of various grasp types. To conclude, cumbersome calibration processes before productive usage of data gloves can be spared in many situations.
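The evaluation protocol can be approximated with a small scikit-learn comparison: raw, uncalibrated sensor readings, PCA for dimensionality reduction, and cross-validated accuracy for a few classifier families. The simulated glove readings, grasp labels, and classifier list below are stand-ins for the study's much larger toolbox-based setup.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(4)
X = rng.uniform(0, 1023, size=(240, 22))   # raw readings of a hypothetical 22-sensor glove
y = rng.integers(0, 6, size=240)           # six grasp types (random stand-in labels)

classifiers = {
    "kNN": KNeighborsClassifier(5),
    "decision tree": DecisionTreeClassifier(random_state=0),
    "naive Bayes": GaussianNB(),
}
for name, clf in classifiers.items():
    pipe = make_pipeline(PCA(n_components=10), clf)   # PCA mirrors the dimensionality reduction step
    acc = cross_val_score(pipe, X, y, cv=5).mean()
    print(f"{name}: {acc:.2f} cross-validated accuracy")
```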
20

Tan, Fei, and Xiaoqing Xie. "Recognition Technology of Athlete’s Limb Movement Combined Based on the Integrated Learning Algorithm". Journal of Sensors 2021 (September 6, 2021): 1–9. http://dx.doi.org/10.1155/2021/3057557.

Full text
Abstract
Human motion recognition based on inertial sensor is a new research direction in the field of pattern recognition. It carries out preprocessing, feature selection, and feature selection by placing inertial sensors on the surface of the human body. Finally, it mainly classifies and recognizes the extracted features of human action. There are many kinds of swing movements in table tennis. Accurately identifying these movement modes is of great significance for swing movement analysis. With the development of artificial intelligence technology, human movement recognition has made many breakthroughs in recent years, from machine learning to deep learning, from wearable sensors to visual sensors. However, there is not much work on movement recognition for table tennis, and the methods are still mainly integrated into the traditional field of machine learning. Therefore, this paper uses an acceleration sensor as a motion recording device for a table tennis disc and explores the three-axis acceleration data of four common swing motions. Traditional machine learning algorithms (decision tree, random forest tree, and support vector) are used to classify the swing motion, and a classification algorithm based on the idea of integration is designed. Experimental results show that the ensemble learning algorithm developed in this paper is better than the traditional machine learning algorithm, and the average recognition accuracy is 91%.
21

Ye, Jun. "Single-Valued Neutrosophic Minimum Spanning Tree and Its Clustering Method". Journal of Intelligent Systems 23, no. 3 (September 1, 2014): 311–24. http://dx.doi.org/10.1515/jisys-2013-0075.

Full text
Abstract
Clustering plays an important role in data mining, pattern recognition, and machine learning. Then, single-valued neutrosophic sets (SVNSs) are a useful means to describe and handle indeterminate and inconsistent information, which fuzzy sets and intuitionistic fuzzy sets cannot describe and deal with. To cluster the data represented by single-value neutrosophic information, the article proposes a single-valued neutrosophic minimum spanning tree (SVNMST) clustering algorithm. Firstly, we defined a generalized distance measure between SVNSs. Then, we present an SVNMST clustering algorithm for clustering single-value neutrosophic data based on the generalized distance measure of SVNSs. Finally, two illustrative examples are given to demonstrate the application and effectiveness of the developed approach.
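Minimum-spanning-tree clustering in general can be sketched as follows with SciPy: build the MST of the pairwise-distance graph and cut its longest edges to obtain clusters. The point set, ordinary Euclidean distance, and cluster count are illustrative assumptions; the paper additionally works with the SVNS generalized distance rather than plain numeric points.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import minimum_spanning_tree, connected_components

rng = np.random.default_rng(5)
points = np.vstack([rng.normal(0, 0.3, (15, 2)), rng.normal(3, 0.3, (15, 2))])

# Minimum spanning tree of the complete pairwise-distance graph
dist = squareform(pdist(points))
mst = minimum_spanning_tree(csr_matrix(dist)).tocoo()

# Cut the (k - 1) longest MST edges to produce k clusters
k = 2
order = np.argsort(mst.data)[::-1]
keep = np.ones(len(mst.data), dtype=bool)
keep[order[:k - 1]] = False
pruned = csr_matrix((mst.data[keep], (mst.row[keep], mst.col[keep])), shape=dist.shape)

n_clusters, labels = connected_components(pruned, directed=False)
print(n_clusters, labels)
```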
22

Kapp, Vadim, Marvin Carl May, Gisela Lanza, and Thorsten Wuest. "Pattern Recognition in Multivariate Time Series: Towards an Automated Event Detection Method for Smart Manufacturing Systems". Journal of Manufacturing and Materials Processing 4, no. 3 (September 5, 2020): 88. http://dx.doi.org/10.3390/jmmp4030088.

Full text
Abstract
This paper presents a framework to utilize multivariate time series data to automatically identify reoccurring events, e.g., resembling failure patterns in real-world manufacturing data, by combining selected data mining techniques. The use case revolves around the auxiliary polymer manufacturing process of drying and feeding plastic granulate to extrusion or injection molding machines. The overall framework presented in this paper includes a comparison of two different approaches towards the identification of unique patterns in the real-world industrial data set. The first approach uses a subsequent heuristic segmentation and clustering approach; the second branch features a collaborative method with a built-in time dependency structure at its core (TICC). Both alternatives have been facilitated by a standard principal component analysis (PCA) step (feature fusion) and a hyperparameter optimization (TPE) approach. The performance of the corresponding approaches was evaluated through established and commonly accepted metrics in the field of (unsupervised) machine learning. The results suggest the existence of several common failure sources (patterns) for the machine. Insights such as these automatically detected events can be harnessed to develop an advanced monitoring method to predict upcoming failures, ultimately reducing unplanned machine downtime in the future.
Los estilos APA, Harvard, Vancouver, ISO, etc.
23

Leon-Medina, Jersson X., Leydi J. Cardenas-Flechas y Diego A. Tibaduiza. "A data-driven methodology for the classification of different liquids in artificial taste recognition applications with a pulse voltammetric electronic tongue". International Journal of Distributed Sensor Networks 15, n.º 10 (octubre de 2019): 155014771988160. http://dx.doi.org/10.1177/1550147719881601.

Texto completo
Resumen
Electronic tongue-type sensor arrays are devices used to determine the quality of substances and seek to imitate the main components of the human sense of taste. For this purpose, an electronic tongue-based system makes use of sensors, data acquisition systems, and a pattern recognition system. Particularly, in the latter, machine learning techniques are useful in data analysis and have been used to solve classification and regression problems. However, one of the problems in the use of this kind of device is associated with the development of reliable pattern recognition algorithms and robust data analysis. In this sense, this work introduces a taste recognition methodology composed of several steps, including unfolding the data, data normalization, principal component analysis for compressing the data, and classification through different machine learning models. The proposed methodology is tested using data from an electronic tongue with 13 different liquid substances; this electronic tongue uses multifrequency large amplitude pulse signal voltammetry. Results show that the methodology is able to perform the classification accurately, and the best results in terms of accuracy are obtained when the K-nearest neighbor classifier is used, compared with other kinds of machine learning approaches. Besides, the methodology is evaluated with different classification performance measures that summarize the behavior of the process in a single number.
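A condensed sketch of the described processing chain (unfold, normalize, compress with PCA, classify with k-NN); the array shapes, variance threshold, and neighbor count are assumptions rather than the paper's settings.

```python
# Hedged sketch: unfold, normalize, compress with PCA, classify with k-NN.
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

def taste_classifier():
    return make_pipeline(StandardScaler(),
                         PCA(n_components=0.95),           # keep 95% of the variance
                         KNeighborsClassifier(n_neighbors=5))

# Usage (hypothetical shapes): X_raw is (n_measurements, n_sensors, n_points) voltammetry
# data and y holds the 13 liquid labels.
# X = X_raw.reshape(len(X_raw), -1)                        # unfold sensors x time
# print(cross_val_score(taste_classifier(), X, y, cv=5).mean())
```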
Los estilos APA, Harvard, Vancouver, ISO, etc.
24

YEN, GARY G. y QIANG FU. "AUTOMATIC FROG CALLS MONITORING SYSTEM: A MACHINE LEARNING APPROACH". International Journal of Computational Intelligence and Applications 01, n.º 02 (junio de 2001): 165–86. http://dx.doi.org/10.1142/s1469026801000184.

Texto completo
Resumen
Automatic recognition of frog vocalization is considered a valuable tool for a variety of biological research and environmental monitoring applications. In this research, an automatic monitoring system is proposed which can recognize the vocalizations of four species of frogs and can identify different individuals within the species of interest. For the desired monitoring system, species identification is performed first with the proposed filtering and grouping algorithm. Individual identification, which can estimate the frog population within the specific species, is performed in the second stage. Digital signal pre-processing, feature extraction, dimensionality reduction, and neural network pattern classification are performed step by step in this stage. Wavelet packet feature extraction together with two different dimension reduction algorithms is synergistically integrated to produce the final feature vectors, which are fed into a neural network classifier. The simulation results show the promising future of deploying an array of continuous, on-line environmental monitoring systems based upon non-intrusive analysis of animal calls.
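A sketch of the second-stage feature pipeline under assumptions (the wavelet family, decomposition level, and network size are illustrative, and the first-stage species filtering is omitted): wavelet-packet sub-band energies are fed to a small neural network classifier.

```python
# Hedged sketch: wavelet-packet sub-band energies fed to a small neural network.
import numpy as np
import pywt
from sklearn.neural_network import MLPClassifier

def wavelet_packet_energies(signal, wavelet="db4", level=4):
    wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, maxlevel=level)
    nodes = wp.get_level(level, order="natural")
    energies = np.array([np.sum(np.square(n.data)) for n in nodes])
    return energies / energies.sum()          # normalized sub-band energy vector

# Usage (hypothetical data): calls is a list of 1-D audio frames, y the individual IDs.
# X = np.vstack([wavelet_packet_energies(c) for c in calls])
# clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000).fit(X, y)
```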
Los estilos APA, Harvard, Vancouver, ISO, etc.
25

Nicholson, Andrew A., Maria Densmore, Margaret C. McKinnon, Richard W. J. Neufeld, Paul A. Frewen, Jean Théberge, Rakesh Jetly, J. Donald Richardson y Ruth A. Lanius. "Machine learning multivariate pattern analysis predicts classification of posttraumatic stress disorder and its dissociative subtype: a multimodal neuroimaging approach". Psychological Medicine 49, n.º 12 (11 de octubre de 2018): 2049–59. http://dx.doi.org/10.1017/s0033291718002866.

Texto completo
Resumen
Background: The field of psychiatry would benefit significantly from developing objective biomarkers that could facilitate the early identification of heterogeneous subtypes of illness. Critically, although machine learning pattern recognition methods have been applied recently to predict many psychiatric disorders, these techniques have not been utilized to predict subtypes of posttraumatic stress disorder (PTSD), including the dissociative subtype of PTSD (PTSD + DS). Methods: Using Multiclass Gaussian Process Classification within PRoNTo, we examined the classification accuracy of: (i) the mean amplitude of low-frequency fluctuations (mALFF; reflecting spontaneous neural activity during rest); and (ii) seed-based amygdala complex functional connectivity within 181 participants [PTSD (n = 81); PTSD + DS (n = 49); and age-matched healthy trauma-unexposed controls (n = 51)]. We also computed mass-univariate analyses in order to observe regional group differences [false-discovery-rate (FDR)-cluster corrected p < 0.05, k = 20]. Results: We found that extracted features could predict accurately the classification of PTSD, PTSD + DS, and healthy controls, using both resting-state mALFF (91.63% balanced accuracy, p < 0.001) and amygdala complex connectivity maps (85.00% balanced accuracy, p < 0.001). These results were replicated using independent machine learning algorithms/cross-validation procedures. Moreover, areas weighted as being most important for group classification also displayed significant group differences at the univariate level. Here, whereas the PTSD + DS group displayed increased activation within emotion regulation regions, the PTSD group showed increased activation within the amygdala, globus pallidus, and motor/somatosensory regions. Conclusion: The current study has significant implications for advancing machine learning applications within the field of psychiatry, as well as for developing objective biomarkers indicative of diagnostic heterogeneity.
Los estilos APA, Harvard, Vancouver, ISO, etc.
26

Yaman, Mehmet, Abdulhamit Subasi y Frank Rattay. "Comparison of Random Subspace and Voting Ensemble Machine Learning Methods for Face Recognition". Symmetry 10, n.º 11 (19 de noviembre de 2018): 651. http://dx.doi.org/10.3390/sym10110651.

Texto completo
Resumen
Biometry-based authentication and recognition have attracted greater attention due to numerous applications in security-conscious societies, since biometrics brings accurate and consistent identification. Face biometry possesses the merits of low intrusiveness and high precision. Despite the presence of several biometric methods, like iris scan, fingerprints, and hand geometry, the most effective and broadly utilized method is face recognition, because it is reasonable, natural, and non-intrusive. Face recognition is a part of pattern recognition that is applied to automatically identify or authenticate a person from a digital image or a video. Moreover, current innovations in big data analysis, cloud computing, social networks, and machine learning have allowed for a straightforward understanding of how different challenging issues in face recognition might be solved. Effective face recognition in the enormous data context is a crucial and challenging task. This study develops an intelligent face recognition framework that recognizes faces through efficient ensemble learning techniques, namely Random Subspace and Voting, in order to improve the performance of biometric systems. Furthermore, several methods including skin color detection, histogram feature extraction, and ensemble learner-based face recognition are presented. The proposed framework, which has a symmetric structure, is found to have high potential for biometrics. Hence, the proposed framework utilizing histogram feature extraction with Random Subspace and Voting ensemble learners has demonstrated its superiority on two different databases as compared with state-of-the-art face recognition. The proposed method reached an accuracy of 99.25% with random forest, combined with both ensemble learners, on the FERET face database.
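For illustration only, the snippet below sets up the two ensemble strategies named in the abstract on pre-extracted histogram features; the base learners and parameters are assumptions, and the skin-color detection and histogram extraction steps are presumed to have produced X_train already.

```python
# Hedged sketch: Random Subspace (bagging over feature subsets) and Voting ensembles.
from sklearn.ensemble import BaggingClassifier, RandomForestClassifier, VotingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Random Subspace: every base learner sees a random half of the histogram bins.
random_subspace = BaggingClassifier(RandomForestClassifier(n_estimators=100),
                                    n_estimators=10,
                                    max_features=0.5,
                                    bootstrap=False)

# Voting: heterogeneous learners combined by majority vote.
voting = VotingClassifier(estimators=[("rf", RandomForestClassifier(n_estimators=100)),
                                      ("knn", KNeighborsClassifier()),
                                      ("svm", SVC())],
                          voting="hard")

# random_subspace.fit(X_train, y_train); voting.fit(X_train, y_train)
```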
Los estilos APA, Harvard, Vancouver, ISO, etc.
27

Takama, Yasufumi, Yuna Tanaka, Yoshiyuki Mori y Hiroki Shibata. "Treemap-Based Cluster Visualization and its Application to Text Data Analysis". Journal of Advanced Computational Intelligence and Intelligent Informatics 25, n.º 4 (20 de julio de 2021): 498–507. http://dx.doi.org/10.20965/jaciii.2021.p0498.

Texto completo
Resumen
This paper proposes a Treemap-based visualization for supporting cluster analysis of multi-dimensional data. It is important to grasp the data distribution in a target dataset for tasks such as machine learning and cluster analysis. When dealing with multi-dimensional data such as statistical data and document datasets, dimensionality reduction algorithms are usually applied to project the original data to a lower-dimensional space. However, dimensionality reduction tends to lose the characteristics of the data in the original space. In particular, the border between different data groups could not be represented correctly in the lower-dimensional space. To overcome this problem, the proposed visualization method applies Fuzzy c-Means to the target data and visualizes the result with a Treemap on the basis of the highest and the second-highest membership values. Visualizing the information about not only the closest clusters but also the second-closest ones is expected to be useful for identifying objects around the border between different clusters, as well as for understanding the relationship between different clusters. A prototype interface is implemented, and its effectiveness is investigated with a user experiment on a news article dataset. As another kind of text data, a case study of applying it to a word embedding space is also shown.
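A sketch of the membership computation such a visualization relies on, assuming the scikit-fuzzy package; the Treemap layout itself is not reproduced, and the cluster count and fuzzifier are placeholders.

```python
# Hedged sketch: fuzzy c-means memberships, keeping the closest and second-closest cluster.
import numpy as np
import skfuzzy as fuzz

def top_two_memberships(X, n_clusters=5, m=2.0):
    """X: (n_samples, n_features). Returns closest/second-closest cluster per object."""
    _, u, *_ = fuzz.cluster.cmeans(X.T, c=n_clusters, m=m, error=1e-4, maxiter=300)
    order = np.argsort(u, axis=0)              # u has shape (n_clusters, n_samples)
    idx = np.arange(X.shape[0])
    first, second = order[-1], order[-2]
    return first, u[first, idx], second, u[second, idx]
```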
Los estilos APA, Harvard, Vancouver, ISO, etc.
28

Khalyasmaa, Alexandra. "Machine learning as a tool of high-voltage electrical equipment lifecycle control enhancement". Proceedings of Irkutsk State Technical University 24, n.º 5 (octubre de 2020): 1093–104. http://dx.doi.org/10.21285/1814-3520-2020-5-1093-1104.

Texto completo
Resumen
The purpose of the study is to analyze the practical implementation of high-voltage electrical equipment technical state estimation subsystems as a part of solving the lifecycle management problem, based on machine learning methods and taking into account the effect of the adjacent power system operation modes. To deal with the problem of power equipment technical state analysis, i.e. power equipment state pattern recognition, XGBoost, based on the gradient boosting decision tree algorithm, is used. Its main advantages are the ability to process data with gaps and efficient operation with tabular data for solving classification and regression problems. The author suggests a procedure for forming a correct and sufficient initial database for high-voltage equipment state pattern recognition based on its technical diagnostic data, together with an algorithm for creating training and testing sets, in order to improve the identification accuracy of the actual state of power equipment. The description and justification of the machine learning method and the corresponding error metrics are also provided. Based on the actual states of power transformers and circuit breakers, the sets of technical diagnostic parameters that have the greatest impact on the accuracy of state identification are formed. The effectiveness of using power system operation parameters as additional features is also confirmed. It is determined that including operation parameters obtained by calculation in the training set for high-voltage equipment technical state identification makes it possible to improve the tuning accuracy. The developed structure and approaches to power equipment technical state analysis, supplemented by power system operation mode data and diagnostic results, provide an information link between the tasks of technological and dispatch control. This allows us to consider the task of power system operation mode planning from the standpoint of power equipment technical state and to identify priorities in repair and maintenance to eliminate power network “bottlenecks”.
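A minimal sketch of gradient-boosted state classification on tabular diagnostic data with gaps; the column names, class encoding, and hyperparameters are placeholders, not the study's actual configuration.

```python
# Hedged sketch: gradient-boosted classification of equipment state on tabular data.
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split

def fit_state_model(X, y):
    """X: diagnostic + operation-mode features (NaN gaps allowed); y: integer state classes."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, test_size=0.2)
    model = XGBClassifier(n_estimators=300, max_depth=4, learning_rate=0.1)
    model.fit(X_tr, y_tr)                       # missing values are routed inside the trees
    print("holdout accuracy:", model.score(X_te, y_te))
    return model
```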
Los estilos APA, Harvard, Vancouver, ISO, etc.
29

Kanksha, Aman Bhaskar, Sagar Pande, Rahul Malik y Aditya Khamparia. "An intelligent unsupervised technique for fraud detection in health care systems". Intelligent Decision Technologies 15, n.º 1 (24 de marzo de 2021): 127–39. http://dx.doi.org/10.3233/idt-200052.

Texto completo
Resumen
Healthcare is an essential part of people’s lives, particularly for the elderly population, and it should also be economical. Medicare is one particular healthcare plan. Claims fraud is a significant contributor to increased healthcare expenses, though its effect can be lessened by fraud detection. In this paper, an analysis of various machine learning techniques was done to identify Medicare fraud. The isolation forest, an unsupervised machine learning algorithm, improves overall performance by detecting fraud based upon outliers. The goal of this paper is to flag probably dishonest providers on the basis of their claims. The obtained results were found to be more promising than those of existing techniques. An accuracy of around 98.76% is obtained using the isolation forest algorithm.
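A short sketch of outlier-based provider screening with an isolation forest; the aggregation of claims into per-provider feature vectors and the contamination rate are assumptions.

```python
# Hedged sketch: isolation forest over per-provider claim statistics.
from sklearn.ensemble import IsolationForest

def flag_suspicious(X_providers, expected_fraud_rate=0.01):
    """X_providers: (n_providers, n_features) aggregated claim statistics."""
    iso = IsolationForest(n_estimators=200, contamination=expected_fraud_rate,
                          random_state=0)
    labels = iso.fit_predict(X_providers)       # -1 marks an outlier (potential fraud)
    scores = iso.decision_function(X_providers) # lower score = more anomalous
    return labels == -1, scores
```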
Los estilos APA, Harvard, Vancouver, ISO, etc.
30

Mani, S., W. R. Shankle y M. J. Pazzani. "Acceptance of Rules Generated by Machine Learning among Medical Experts". Methods of Information in Medicine 40, n.º 05 (2001): 380–85. http://dx.doi.org/10.1055/s-0038-1634196.

Texto completo
Resumen
Summary. Objectives: The aim was to evaluate the potential for monotonicity constraints to bias machine learning systems to learn rules that were both accurate and meaningful. Methods: Two data sets, taken from problems as diverse as screening for dementia and assessing the risk of mental retardation, were collected and a rule learning system, with and without monotonicity constraints, was run on each. The rules were shown to experts, who were asked how willing they would be to use such rules in practice. The accuracy of the rules was also evaluated. Results: Rules learned with monotonicity constraints were at least as accurate as rules learned without such constraints. Experts were, on average, more willing to use the rules learned with the monotonicity constraints. Conclusions: The analysis of medical databases has the potential of improving patient outcomes and/or lowering the cost of health care delivery. Various techniques, from statistics, pattern recognition, machine learning, and neural networks, have been proposed to “mine” this data by uncovering patterns that may be used to guide decision making. This study suggests cognitive factors make learned models coherent and, therefore, credible to experts. One factor that influences the acceptance of learned models is consistency with existing medical knowledge.
Los estilos APA, Harvard, Vancouver, ISO, etc.
31

PUDIMAT, RAINER, ROLF BACKOFEN y ERNST G. SCHUKAT-TALAMAZZINI. "FAST FEATURE SUBSET SELECTION IN BIOLOGICAL SEQUENCE ANALYSIS". International Journal of Pattern Recognition and Artificial Intelligence 23, n.º 02 (marzo de 2009): 191–207. http://dx.doi.org/10.1142/s0218001409007107.

Texto completo
Resumen
Biological research produces a wealth of measured data. It is not easy for biologists to postulate hypotheses about the behavior or structure of the observed entity, because the relevant measured properties are hidden in the ocean of measurements; nor, for the same reason, is it easy to design machine learning algorithms to classify or cluster the data items. Algorithms for automatically selecting a highly predictive subset of the measured features can help to overcome these difficulties. We present an efficient feature selection strategy which can be applied to arbitrary feature selection problems. The core technique is a new method for estimating the quality of subsets from previously calculated qualities for smaller subsets by minimizing the mean standard error of estimated values with an approach common to support vector machines. This method can be integrated in many feature subset search algorithms. We have applied it with sequential search algorithms and have been able to reduce the number of quality calculations for finding accurate feature subsets by about 70%. We show these improvements by applying our approach to the problem of finding highly predictive feature subsets for transcription factor binding sites.
Los estilos APA, Harvard, Vancouver, ISO, etc.
32

Yu, Yibin, Min Yang, Yulan Zhang y Shifang Yuan. "Compact dictionary pair learning and refining based on principal components analysis". International Journal of Wavelets, Multiresolution and Information Processing 17, n.º 05 (septiembre de 2019): 1950033. http://dx.doi.org/10.1142/s0219691319500334.

Texto completo
Resumen
Although traditional dictionary learning (DL) methods have achieved great success in pattern recognition and machine learning, they are extremely time-consuming, especially in the training stage. Projective dictionary pair learning (DPL) learns the synthesis dictionary and the analysis dictionary jointly to achieve a fast and accurate classifier. However, because the dictionary pair is initialized with random matrices that do not use any information from the data samples, many iterations are required to ensure convergence. In this paper, we propose a novel compact DPL and refining method based on the observation that the eigenvalue curve of the sample data covariance matrix usually decreases very fast, which means we can compact the synthesis dictionary and the analysis dictionary. For each class of the data samples, we utilize principal component analysis (PCA) to retain globally important information and compact the row space of the synthesis dictionary and the column space of the analysis dictionary in the first stage. We further refine the learned dictionary pair to achieve a more accurate classifier during compact dictionary pair refining, which combines the orthogonality of PCA with the redundancy of DL. We solve this refining problem completely in closed form, naturally reducing the computational complexity significantly. Experimental results on the Extended YaleB database and the AR database show that the proposed method achieves competitive accuracy and low computational complexity compared with other state-of-the-art methods.
Los estilos APA, Harvard, Vancouver, ISO, etc.
33

Hesse, Sebastian, Christoph Klein, Rappsilber Juri, Yoko Mizoguchi, Monika Linder, Piotr Grabowski, Zahra Alizadeh et al. "Machine Learning Unveils Proteotypic Mimicry in Genetically Defined SCN Variants". Blood 134, Supplement_1 (13 de noviembre de 2019): 3580. http://dx.doi.org/10.1182/blood-2019-131136.

Texto completo
Resumen
Background: Novel computational algorithms for multi-omics analysis bear great potential to highlight pathomechanisms of monogenic diseases. We recently defined the in-depth proteome of primary human neutrophil granulocytes (PMID 30630937). Here, we ask the question whether proteotypic patterns differ between defined genetic subtypes associated with severe congenital neutropenia (SCN). We focus on two novel genetic variants in constituents of the signal recognition particle (SRPRA and SRP19) and previously reported SCN genotypes SRP54, HAX1, and ELANE. Methods: We analyzed proteomes of highly purified neutrophil granulocytes from a total of 26 SCN patients, including 5 with homozygous splice site mutations in SRP19, one patient with a de-novo heterozygous missense mutation in SRPRA (using 5 biological replicates collected months apart) as well as 6 patients with SRP54, 8 with HAX1 and 6 with ELANE mutations. Samples of 70 healthy donors (HD) served as controls. Whole cell proteome analysis was based on data-independent acquisition using a Thermo Fisher QExactive HF mass spectrometer. Data analysis was performed in R and Cytoscape; machine learning approaches included lasso regression and random forest. Results: Differential expression analysis in comparison to HD showed in all genotypes overexpression of ribosomes, the translational apparatus, mitochondria, cell-substrate junctions and response to unfolded proteins. Underexpressed proteins showed genotype-specific enrichment for granule subsets. Whereas ELANE showed deficiency of primary and secretory granules, HAX1 showed deficiency of specific, tertiary and secretory granules. All SRP genotypes showed markedly reduced abundance of proteins in all granule subsets. Principal component analysis showed clear separation of healthy and diseased proteotypes on the first component, whereas the separation of patient genotypes became clear only using five dimensions. We derived genotype-specific proteome signatures by lasso regression, consisting of 26 (minimal specific set) to 128 (comprehensive signature) proteins, and a signature of 48 proteins when joining the SRP genotypes as one group. These signatures allow for perfect separation of the genotypes, demonstrating a clear genotype-specific effect on protein abundance levels. We asked the question whether the genotypes SRP19 and SRPRA show a proteomic profile more similar to SRP54 than to the other genotypes (ELANE, HAX1) by training a random forest model on proteome data from SRP54, HAX1, ELANE and HD and subsequently testing whether the other SRP samples get classified as SRP54. We observe only a few misclassifications using either all proteins (7/10) or the lasso-derived genotype-defining proteins (8/10). This strongly supports our hypothesis that mutations in different subunits of the same complex lead to similar proteotype changes, a phenomenon we propose to call "proteotypic mimicry". For a systems biology perspective, we selected proteins that were exclusively regulated in the SRP genotypes and restricted a network of interactions to these proteins together with their direct interactors (based on APID level 1). The resulting network contained 464 proteins and 3587 interactions. MCODE analysis identified 16 clusters that were subsequently annotated using BINGO enrichment analysis. The SRP-specific network shows features of the translational apparatus, the proteasome, the septin complex, splicing and cell-metabolic processes. Further studies to dissect specific pathomechanisms are under way.
Conclusion: Here we provide for the first time evidence for the correlation between SCN-causing genotypes and their corresponding neutrophil proteotypes. In particular, we demonstrate significant overlap of all SRP-related proteotypes, indicating a phenomenon we propose to be called "proteotypic mimicry". Studies on similarities and disparities of neutrophil proteotypes will help to raise new hypotheses on distinct cellular dysfunction in defined genetic defects of neutrophil granulocytes. Disclosures: No relevant conflicts of interest to declare.
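A schematic rendering of the two analysis ideas (a sparse proteome signature and a cross-genotype classification test), not the authors' R/Cytoscape workflow; lasso regression is approximated here with L1-penalized logistic regression, and all variable names are placeholders.

```python
# Hedged sketch: sparse signature selection plus a cross-genotype random-forest test.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier

def signature_proteins(X, y, C=0.1):
    """X: (n_samples, n_proteins) abundances; y: genotype labels. Returns selected columns."""
    lasso_like = LogisticRegression(penalty="l1", solver="saga", C=C,
                                    max_iter=5000).fit(X, y)
    return np.where(np.any(lasso_like.coef_ != 0, axis=0))[0]

def mimicry_test(X_train, y_train, X_heldout_srp):
    """Train on SRP54/HAX1/ELANE/HD samples, then classify held-out SRP19/SRPRA samples."""
    rf = RandomForestClassifier(n_estimators=500).fit(X_train, y_train)
    return rf.predict(X_heldout_srp)            # mimicry: most predictions land on SRP54
```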
Los estilos APA, Harvard, Vancouver, ISO, etc.
34

H. Rashed, Ansam y Muthana H. Hamd. "ROBUST DETECTION AND RECOGNITION SYSTEM BASED ON FACIAL EXTRACTION AND DECISION TREE". Journal of Engineering and Sustainable Development 25, n.º 4 (1 de julio de 2021): 40–50. http://dx.doi.org/10.31272/jeasd.25.4.4.

Texto completo
Resumen
An automatic face recognition system is proposed in this work on the basis of appearance-based features focusing on the whole image, as well as local features focusing on critical face points like the eyes, mouth, and nose to generate further details. Face detection is the major phase in face recognition systems; a well-known face detection method (Viola-Jones) has the ability to process images efficiently and achieve high detection rates in real-time systems. Dimension reduction and feature extraction approaches are applied to the cropped image produced by detection. One of the simple yet effective ways of extracting image features is the Local Binary Pattern Histogram (LBPH), while Principal Component Analysis (PCA) has been widely utilized in pattern recognition. Also, Linear Discriminant Analysis (LDA), utilized to overcome PCA limitations, has been used efficiently in face recognition. Furthermore, classification follows feature extraction. The machine learning algorithms utilized are PART and J48. The suggested system shows high detection accuracy with Viola-Jones (98.75%), whereas the features extracted by means of LDA with J48 provided the best results in terms of F-measure, Recall, and Precision.
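A sketch of the detection and feature steps with OpenCV and scikit-image, with a scikit-learn decision tree standing in for J48 (J48 implements C4.5, whereas scikit-learn's tree uses CART); the cascade choice, face size, and LBP parameters are assumptions.

```python
# Hedged sketch: Viola-Jones detection, uniform LBP histogram, decision-tree classification.
import cv2
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.tree import DecisionTreeClassifier

cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def face_lbp_histogram(gray, size=(100, 100)):
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]                        # take the first detected face
    face = cv2.resize(gray[y:y + h, x:x + w], size)
    lbp = local_binary_pattern(face, P=8, R=1, method="uniform")
    hist, _ = np.histogram(lbp, bins=10, range=(0, 10))   # uniform LBP with P=8 has 10 codes
    return hist / hist.sum()

# feats = [face_lbp_histogram(img) for img in gray_images]
# X = np.vstack([f for f in feats if f is not None]); y = matching identity labels
# clf = DecisionTreeClassifier().fit(X, y)
```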
Los estilos APA, Harvard, Vancouver, ISO, etc.
35

Bezdek, James C. "Generalized C-Means Algorithms for Medical Image Analysis". Proceedings, annual meeting, Electron Microscopy Society of America 48, n.º 1 (12 de agosto de 1990): 448–49. http://dx.doi.org/10.1017/s0424820100180999.

Texto completo
Resumen
Diagnostic machine vision systems that attempt to interpret medical imagery almost always include (and depend upon) one or more pattern recognition algorithms (cluster analysis and classifier design) for low- and intermediate-level image data processing. This includes, of course, image data collected by electron microscopes. Approaches based on both statistical and fuzzy models are found in the texts by Bezdek, Duda and Hart, Dubes and Jain, and Pao. Our talk examines the c-means families as they relate to medical image processing. We discuss and exemplify applications in segmentation (MRI data); clustering (flow cytometry data); and boundary analysis. The structure of partition spaces underlying clustering algorithms is described briefly. Let c be an integer, 1 < c < n, and let X = {x_1, x_2, ..., x_n} denote a set of n column vectors in R^s. X is numerical object data; the k-th object (some physical entity such as a medical patient, PAP smear image, color photograph, etc.) has x_k as its numerical representation; x_kj is the j-th characteristic (or feature) associated with object k.
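A toy illustration of hard c-means (k-means) segmentation of a grayscale image into c tissue classes; the fuzzy variant replaces the hard assignments with membership values.

```python
# Hedged sketch: hard c-means (k-means) segmentation of one grayscale slice into c classes.
import numpy as np
from sklearn.cluster import KMeans

def segment_image(gray, c=3):
    """gray: 2-D intensity array (e.g. an MRI slice). Returns a label image."""
    pixels = gray.reshape(-1, 1).astype(float)
    labels = KMeans(n_clusters=c, n_init=10).fit_predict(pixels)
    return labels.reshape(gray.shape)
```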
Los estilos APA, Harvard, Vancouver, ISO, etc.
36

Satapathy, Santosh, D. Loganathan, Hari Kishan Kondaveeti y RamaKrushna Rath. "Performance analysis of machine learning algorithms on automated sleep staging feature sets". CAAI Transactions on Intelligence Technology 6, n.º 2 (20 de abril de 2021): 155–74. http://dx.doi.org/10.1049/cit2.12042.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
37

Fuentes, Sigfredo, Eden Jane Tongson, Roberta De Bei, Claudia Gonzalez Viejo, Renata Ristic, Stephen Tyerman y Kerry Wilkinson. "Non-Invasive Tools to Detect Smoke Contamination in Grapevine Canopies, Berries and Wine: A Remote Sensing and Machine Learning Modeling Approach". Sensors 19, n.º 15 (30 de julio de 2019): 3335. http://dx.doi.org/10.3390/s19153335.

Texto completo
Resumen
Bushfires are becoming more frequent and intense due to the changing climate. Those that occur close to vineyards can cause smoke contamination of grapevines and grapes, which can affect wines, producing smoke taint. At present, there are no practical in-field tools available for the detection of smoke contamination or taint in berries. This research proposes a non-invasive, in-field detection system for smoke contamination in grapevine canopies based on predictable changes in stomatal conductance patterns, using infrared thermal image analysis and machine learning modeling based on pattern recognition. A second model was also proposed to quantify levels of smoke-taint-related compounds as targets in berries and wines, using near-infrared spectroscopy (NIR) as inputs for machine learning model fitting. Results showed that the pattern recognition model to detect smoke contamination from canopies had 96% accuracy. The second model to predict smoke-taint compounds in berries and wine fit the NIR data with a correlation coefficient (R) of 0.97 and with no indication of overfitting. These methods can offer grape growers quick, affordable, accurate, non-destructive in-field screening tools to assist in vineyard management practices to minimize smoke taint in wines, with in-field applications using smartphones and unmanned aerial systems (UAS).
Los estilos APA, Harvard, Vancouver, ISO, etc.
38

Kawabata, Kuniaki, Zhi-Wei Luo y Jie Huang. "Special Issue on Machine Intelligence for Robotics and Mechatronics". Journal of Robotics and Mechatronics 22, n.º 4 (20 de agosto de 2010): 417. http://dx.doi.org/10.20965/jrm.2010.p0417.

Texto completo
Resumen
Machine intelligence is important in realizing intelligent recognition, control, and task execution in robotics and mechatronics research. One major approach involves developing machine learning / computational intelligence. This exciting field displays continuous dramatic progress based on new computer performance advances and trends. The 15 papers in this special issue present the latest machine intelligence for robotics and mechatronics and their applications. The first four papers propose interactive human-machine systems and human interfacing supporting human activities and service operations. One example of the major applications of robotics and mechatronics research is supporting daily life and work. The next four papers cover the issues of multiagents and multirobot systems, including intelligent design approach to control based on advanced distributed computational intelligence. Two papers on visual/pattern recognition discuss the asbestos fiber counting problem in qualitative analysis as a typical machine intelligence application. The next two papers deal with bio-related issues - social insects (termites) inspiring labor control of multirobots and “nonsocial” insects (crickets) inspiring a novel experimental interactive robot-insect tool. The last three papers present intelligent control of robot manipulators, mainly using learning algorithms as computational intelligence. All explore cutting-edge research machine intelligence for robotics and mechatronics. We thank the authors for their invaluable contributions in submitting their most recent research results to this issue. We are grateful to the reviewers for their generous time and effort. We also thank the Editorial Board member of the Journal of Robotics and Mechatronics for helping to make this issue possible.
Los estilos APA, Harvard, Vancouver, ISO, etc.
39

Cilia, Nicole Dalia, Claudio De Stefano, Francesco Fontanella, Claudio Marrocco, Mario Molinara y Alessandra Scotto di Freca. "An Experimental Comparison between Deep Learning and Classical Machine Learning Approaches for Writer Identification in Medieval Documents". Journal of Imaging 6, n.º 9 (4 de septiembre de 2020): 89. http://dx.doi.org/10.3390/jimaging6090089.

Texto completo
Resumen
In the framework of palaeography, the availability of both effective image analysis algorithms and high-quality digital images has favored the development of new applications for the study of ancient manuscripts and has provided new tools for decision-making support systems. The quality of the results provided by such applications, however, is strongly influenced by the selection of effective features, which should be able to capture the distinctive aspects in which the palaeography expert is interested. This process is very difficult to generalize due to the enormous variability in the type of ancient documents, produced in different historical periods with different languages and styles. The effect is that it is very difficult to define standard techniques that are general enough to be effectively used in any case, and this is the reason why ad-hoc systems, generally designed according to palaeographers’ suggestions, have been designed for the analysis of ancient manuscripts. In recent years, there has been a growing scientific interest in the use of techniques based on deep learning (DL) for the automatic processing of ancient documents. This interest is due not only to their capability of designing high-performance pattern recognition systems, but also to their ability to automatically extract features from raw data, without using any a priori knowledge. Moving from these considerations, the aim of this study is to verify whether DL-based approaches may actually represent a general methodology for automatically designing machine learning systems for palaeography applications. To this purpose, we compared the performance of a DL-based approach with that of a “classical” machine learning one, in a particularly unfavorable case for DL, namely that of highly standardized schools. The rationale of this choice is to compare the obtainable results even when context information is present and discriminating: this information is ignored by DL approaches, while it is used by machine learning methods, making the comparison more significant. The experimental results refer to the use of a large set of digital images extracted from an entire 12th-century Bible, the “Avila Bible”. This manuscript, produced by several scribes who worked in different periods and in different places, represents a severe test bed to evaluate the efficiency of scribe identification systems.
Los estilos APA, Harvard, Vancouver, ISO, etc.
40

Hafezi, Mohammad Hesam, Lei Liu y Hugh Millward. "Identification of Representative Patterns of Time Use Activity Through Fuzzy C-Means Clustering". Transportation Research Record: Journal of the Transportation Research Board 2668, n.º 1 (enero de 2017): 38–50. http://dx.doi.org/10.3141/2668-05.

Texto completo
Resumen
Analysis of the time use activity patterns of urbanites will contribute greatly to the modeling of urban transportation demands by linking activity generation and activity scheduling modules in the overall activity-based modeling framework. This paper develops a framework for novel pattern recognition modeling to identify groups of individuals with homogeneous daily activity patterns. The framework consists of four modules: initialization of the total cluster number and cluster centroids, identification of individuals with homogeneous activity patterns and grouping of them into clusters, identification of sets of representative activity patterns, and exploration of interdependencies among the attributes in each identified cluster. Numerous new machine-learning techniques, such as the fuzzy C-means clustering algorithm and the classification and regression tree classifier, are employed in the process of pattern recognition. The 24-h activity patterns are split into 288 intervals of 5-min duration. Each interval includes information on activity types, duration, start time, location, and travel mode, if applicable. Aggregated statistical evaluation and Kolmogorov–Smirnov tests are performed to determine statistical significance of clustered data. Results show a heterogeneous diversity in eight identified clusters in relation to temporal distribution and significant differences in a variety of sociodemographic variables. The insights gained from this study include important information on activities—such as activity type, start time, duration, location, and travel distance—that are essential for the scheduling phase of the activity-based model. Finally, the results of this paper are expected to be implemented within the activity-based travel demand model for Halifax, Nova Scotia.
Los estilos APA, Harvard, Vancouver, ISO, etc.
41

SHUAI, DIANXUN y XUE FANGLIANG. "GENERALIZED PARTICLE MODEL USED FOR DATA CLUSTERING". International Journal of Pattern Recognition and Artificial Intelligence 20, n.º 07 (noviembre de 2006): 1001–28. http://dx.doi.org/10.1142/s0218001406005101.

Texto completo
Resumen
Data clustering has been widely used in many areas, such as data mining, statistics, and machine learning. A variety of clustering approaches have been proposed so far, but most of them are not capable of quickly clustering a large-scale high-dimensional database. This paper is devoted to a novel data clustering approach based on a generalized particle model (GPM). The GPM transforms the data clustering process into a stochastic process over the configuration space on a GPM array. The proposed approach is characterized by self-organizing clustering and many advantages in terms of insensitivity to noise, robustness to the quality of the clustered data, suitability for high-dimensional and massive data sets, learning ability, openness, and easier hardware implementation with VLSI systolic technology. The analysis and simulations have shown the effectiveness and good performance of the proposed GPM approach to data clustering.
Los estilos APA, Harvard, Vancouver, ISO, etc.
42

Shi, Leixin, Hongji Xu, Beibei Zhang, Xiaojie Sun, Juan Li y Shidi Fan. "Adaptive Multi-state Pipe Framework Based on Set Pair Analysis". International Journal of Machine Learning and Computing 11, n.º 2 (marzo de 2021): 158–63. http://dx.doi.org/10.18178/ijmlc.2021.11.2.1029.

Texto completo
Resumen
Human Activity Recognition (HAR) is one of the main research fields in pattern recognition. In recent years, machine learning and deep learning have played important roles in Artificial Intelligence (AI), and they have proven to be very successful in classification tasks of HAR. However, there are two drawbacks of the mainstream frameworks: 1) all inputs are processed with the same parameters, which can cause the framework to incorrectly assign an unrealistic label to the object; 2) these frameworks lack generality across different application scenarios. In this paper, an adaptive multi-state pipe framework based on Set Pair Analysis (SPA) is presented, where pipes are mainly divided into three types: main pipe, sub-pipe, and fusion pipe. In the main pipe, the input of classification tasks is preprocessed by SPA to obtain the Membership Belief Matrix (MBM). Sub-pipe shunt processing is performed according to the membership belief. The results are merged through the fusion pipe in the end. To test the performance of the proposed framework, we attempt to find the best configuration set that yields the optimal performance and evaluate the effectiveness of the new approach on the popular benchmark dataset WISDM. Experimental results demonstrate that the proposed framework achieves good performance, with a test error of 1.4%.
Los estilos APA, Harvard, Vancouver, ISO, etc.
43

Ahmed, Mona A. y Abdel-Badeeh M. Salem. "Intelligent Technique for Human Authentication using Fusion of Finger and Dorsal Hand Veins". WSEAS TRANSACTIONS ON INFORMATION SCIENCE AND APPLICATIONS 18 (9 de julio de 2021): 91–101. http://dx.doi.org/10.37394/23209.2021.18.12.

Texto completo
Resumen
Multimodal biometric systems have been widely used to achieve high recognition accuracy. This paper presents a new multimodal biometric system that uses an intelligent technique to authenticate humans by fusing finger and dorsal hand vein patterns. We developed an image analysis technique to extract the region of interest (ROI) from finger and dorsal hand vein images. After extracting the ROI, we designed a sequence of preprocessing steps to improve the finger and dorsal hand vein images, using a median filter, a Wiener filter, and Contrast Limited Adaptive Histogram Equalization (CLAHE) to enhance the vein images. Our technique is based on the following intelligent algorithms: the principal component analysis (PCA) algorithm for feature extraction and the k-Nearest Neighbors (K-NN) classifier for the matching operation. The databases chosen were the Shandong University Machine Learning and Applications - Homologous Multi-modal Traits (SDUMLA-HMT) database and the Bosphorus Hand Vein Database. The Correct Recognition Rate (CRR) achieved for the fusion of both biometric traits was 96.8%.
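A sketch of the enhancement and matching chain on cropped vein ROIs; the filter sizes, CLAHE settings, and PCA dimensionality are illustrative assumptions, and the Wiener filtering step is omitted.

```python
# Hedged sketch: median filtering + CLAHE enhancement, then PCA + 1-NN matching.
import cv2
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier

clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))

def enhance_roi(roi_gray):
    """roi_gray: cropped finger or dorsal-hand vein ROI (uint8 grayscale)."""
    denoised = cv2.medianBlur(roi_gray, 5)       # Wiener filtering is omitted in this sketch
    return clahe.apply(denoised)

# Usage (hypothetical data): rois is a list of equally sized ROI images, y the subject IDs.
# X = np.vstack([enhance_roi(r).ravel() for r in rois])
# matcher = make_pipeline(PCA(n_components=50), KNeighborsClassifier(n_neighbors=1)).fit(X, y)
```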
Los estilos APA, Harvard, Vancouver, ISO, etc.
44

Modi, Rohan. "Transcript Anatomization with Multi-Linguistic and Speech Synthesis Features". International Journal for Research in Applied Science and Engineering Technology 9, n.º VI (20 de junio de 2021): 1755–58. http://dx.doi.org/10.22214/ijraset.2021.35371.

Texto completo
Resumen
Handwriting detection is the process, or the capability of a computer program, to collect and analyze comprehensible input that is written by hand from various types of media such as photographs, newspapers, and paper reports. Handwritten text recognition is a sub-discipline of pattern recognition. Pattern recognition refers to the classification of datasets or objects into various categories or classes. Handwriting recognition is the process of transforming a handwritten text in a specific language into its digitally expressible script, represented by a set of icons known as letters or characters. Speech synthesis is the artificial production of human speech using machine-learning-based software and audio-output-based computer hardware. While there are many systems which convert normal language text into speech, the aim of this paper is to study Optical Character Recognition with speech synthesis technology and to develop a cost-effective, user-friendly, image-based offline text-to-speech conversion system using a CRNN neural network model and a Hidden Markov Model. The automated interpretation of text that has been written by hand can be very useful in various instances where processing of great amounts of handwritten data is required, such as signature verification, analysis of various types of documents, and recognition of amounts written on bank cheques by hand.
Los estilos APA, Harvard, Vancouver, ISO, etc.
45

Ali, Mohammed. "The Human Intelligence vs. Artificial Intelligence: Issues and Challenges in Computer Assisted Language Learning". International Journal of English Linguistics 8, n.º 5 (21 de junio de 2018): 259. http://dx.doi.org/10.5539/ijel.v8n5p259.

Texto completo
Resumen
In this study, the researcher has advocated the importance of human intelligence in language learning since software or any Learning Management System (LMS) cannot be programmed to understand the human context as well as all the linguistic structures contextually. This study examined the extent to which language learning is perilous to machine learning and its programs such as Artificial Intelligence (AI), Pattern Recognition, and Image Analysis used in much assistive learning techniques such as voice detection, face detection and recognition, personalized assistants, besides language learning programs. The researchers argue that language learning is closely associated with human intelligence, human neural networks and no computers or software can claim to replace or replicate those functions of human brain. This study thus posed a challenge to natural language processing (NLP) techniques that claimed having taught a computer how to understand the way humans learn, to understand text without any clue or calculation, to realize the ambiguity in human languages in terms of the juxtaposition between the context and the meaning, and also to automate the language learning process between computers and humans. The study cites evidence of deficiencies in such machine learning software and gadgets to prove that in spite of all technological advancements there remain areas of human brain and human intelligence where a computer or its software cannot enter. These deficiencies highlight the limitations of AI and super intelligence systems of machines to prove that human intelligence would always remain superior.
Los estilos APA, Harvard, Vancouver, ISO, etc.
46

Yang, Shu Xiang, W. D. Jiao y Z. T. Wu. "Combination of ICA and SOM for Classification of Machine Condition Patterns". Key Engineering Materials 295-296 (octubre de 2005): 643–48. http://dx.doi.org/10.4028/www.scientific.net/kem.295-296.643.

Texto completo
Resumen
Nonlinear independent component analysis (NICA) is a powerful method for analyzing nonlinear and non-Gaussian data. Artificial neural networks (ANNs), especially the self-organizing map (SOM) based on unsupervised learning, are excellent tools for pattern clustering and recognition. A novel multi-NICA network is proposed for feature extraction from different mechanical patterns, followed by a typical ANN, namely a Multi-Layer Perceptron (MLP), a Radial Basis Function Network (RBFN), or a self-organizing map (SOM), which implements the final classification. Using NICA and appropriate strategies for further feature extraction, nonlinear and higher-than-second-order features embedded in multi-channel vibration measurements can be captured effectively, and mechanical fault patterns can be recognized correctly. Results from the contrast classification experiments show that the new compound ICA-SOM classifier can be constructed in a simpler way and can classify various fault patterns with high accuracy, both of which imply great potential in health condition monitoring of machine systems.
Los estilos APA, Harvard, Vancouver, ISO, etc.
47

Piltan, Farzin, Bach Phi Duong y Jong-Myon Kim. "Deep Learning-Based Adaptive Neural-Fuzzy Structure Scheme for Bearing Fault Pattern Recognition and Crack Size Identification". Sensors 21, n.º 6 (17 de marzo de 2021): 2102. http://dx.doi.org/10.3390/s21062102.

Texto completo
Resumen
Bearings are complex components with nonlinear behavior that are used to mitigate the effects of inertia. These components are used in various systems, including motors. Data analysis and condition monitoring of the systems are important methods for bearing fault diagnosis. Therefore, a deep learning-based adaptive neural-fuzzy structure technique via a support vector autoregressive-Laguerre model is presented in this study. The proposed scheme has three main steps. First, the support vector autoregressive-Laguerre is introduced to approximate the vibration signal under normal conditions and extract the state-space equation. After signal modeling, an adaptive neural-fuzzy structure observer is designed using a combination of high-order variable structure techniques, the support vector autoregressive-Laguerre model, and an adaptive neural-fuzzy inference mechanism for normal and abnormal signal estimation. The adaptive neural-fuzzy structure observer is the main part of this work because, based on the difference in signal estimation accuracy, it can be used to identify faults in the bearings. Next, the residual signals are generated, and the signal conditions are detected and identified using a convolutional neural network (CNN) algorithm. The effectiveness of the proposed deep learning-based adaptive neural-fuzzy structure technique by support vector autoregressive-Laguerre model was analyzed using the Case Western Reserve University (CWRU) bearing vibration dataset. The proposed scheme is compared to five state-of-the-art techniques. The proposed algorithm improved the average pattern recognition and crack size identification accuracy by 1.99%, 3.84%, 15.75%, 5.87%, 30.14%, and 35.29% compared to the combination of the high-order variable structure technique with the support vector autoregressive-Laguerre model and CNN, the combination of the variable structure technique with the support vector autoregressive-Laguerre model and CNN, the combination of RAW signal and CNN, the combination of the adaptive neural-fuzzy structure technique with the support vector autoregressive-Laguerre model and support vector machine (SVM), the combination of the high-order variable structure technique with the support vector autoregressive-Laguerre model and SVM, and the combination of the variable structure technique with the support vector autoregressive-Laguerre model and SVM, respectively.
Los estilos APA, Harvard, Vancouver, ISO, etc.
48

Huang, Zhen, Chengkang Li, Qiang Lv, Rijian Su y Kaibo Zhou. "Automatic Recognition of Communication Signal Modulation Based on the Multiple-Parallel Complex Convolutional Neural Network". Wireless Communications and Mobile Computing 2021 (9 de junio de 2021): 1–11. http://dx.doi.org/10.1155/2021/5006248.

Texto completo
Resumen
This paper implements a deep learning-based modulation pattern recognition algorithm for communication signals using a convolutional neural network architecture as the modulation recognizer. A multiple-parallel complex convolutional neural network architecture is proposed to meet the demand of complex baseband processing of all-digital communication signals. The architecture learns the structured features of the real and imaginary parts of the baseband signal through parallel branches and fuses them at the output according to certain rules to obtain the final output, which realizes the fitting process to the complex-valued mapping. By comparing and analyzing several commonly used time-frequency analysis methods, a time-frequency analysis method that can well highlight the differences between different signal modulation patterns is selected to convert the time-frequency map into a digital image that can be processed by a deep network. In order to fully extract the spatial and temporal characteristics of the signal, a CLP algorithm that runs a CNN and an LSTM network in parallel is proposed. The CNN and the LSTM network are used to extract the spatial features and the temporal features of the signal, respectively, and the two kinds of features are fused and then classified. Finally, the optimal model and parameters are obtained through the design of the modulation recognizer based on the convolutional neural network and the performance analysis of the convolutional neural network model. The simulation results show that the improved convolutional neural network can produce certain performance gains in radio signal modulation style recognition. This promotes the application of machine learning algorithms in the field of radio signal modulation pattern recognition.
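A functional-API sketch (Keras) of a parallel CNN/LSTM branch with late fusion, in the spirit of the CLP idea; the layer sizes, input length, and number of modulation classes are assumptions, not the paper's architecture.

```python
# Hedged sketch: parallel CNN and LSTM branches with late fusion (Keras functional API).
from tensorflow.keras import layers, models

def build_clp(n_timesteps=1024, n_channels=2, n_classes=8):
    inp = layers.Input(shape=(n_timesteps, n_channels))    # e.g. I/Q baseband samples
    # CNN branch: local/spatial features.
    c = layers.Conv1D(64, 7, activation="relu", padding="same")(inp)
    c = layers.MaxPooling1D(4)(c)
    c = layers.GlobalAveragePooling1D()(c)
    # LSTM branch: temporal features.
    t = layers.LSTM(64)(inp)
    # Fuse both feature vectors and classify the modulation type.
    fused = layers.concatenate([c, t])
    out = layers.Dense(n_classes, activation="softmax")(fused)
    model = models.Model(inp, out)
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```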
Los estilos APA, Harvard, Vancouver, ISO, etc.
49

Borodinov, A. A. y V. V. Myasnikov. "Analysis of the preferences of public transport passengers in the task of building a personalized recommender system". Information Technology and Nanotechnology, n.º 2391 (2019): 198–205. http://dx.doi.org/10.18287/1613-0073-2019-2391-198-205.

Texto completo
Resumen
The paper presents the theoretical and algorithmic aspects of building a personalized recommender system (mobile service) designed for public route transport users. The main focus is on identifying and formalizing the concept of "user preferences", which is the basis of modern personalized recommender systems. Informal (verbal) and formal (mathematical) formulations of the corresponding problems of determining "user preferences" in a specific spatial-temporal context are presented: the definition of preferred stops and the definition of preferred "transport correspondences". The first task can be represented as a well-known classification problem; thus, it can be formulated and solved using well-known pattern recognition and machine learning methods. The second is reduced to the construction of a series of dynamic graphs. The experiments were conducted on data from the mobile application "Pribyvalka-63". The application is part of the tosamara.ru service, currently used to inform Samara residents about public transport movement.
Los estilos APA, Harvard, Vancouver, ISO, etc.
50

CRAWFORD, STUART L. y STEVEN K. SOUDERS. "A COMPARISON OF TWO NEW TECHNIQUES FOR CONCEPTUAL CLUSTERING". International Journal of Pattern Recognition and Artificial Intelligence 04, n.º 03 (septiembre de 1990): 409–28. http://dx.doi.org/10.1142/s0218001490000253.

Texto completo
Resumen
Clustering, pattern recognition, and classification are important components of many artificial intelligence systems, especially those designed to classify new observations. Often, one has access to a history of old observations that have been previously classified and, under these circumstances, the old data may be used as a training set from which one may obtain rules for the subsequent classification of new data. The techniques used to obtain these rules may be traditional statistical methods or modern computer-intensive techniques. Sometimes, however, the history of old observations has not been previously classified. Under these circumstances, the analyst simply wishes to uncover structure in the data and ascertain whether the structure is apparent or real. When the analyst is searching for clusters, statistical clustering methodologies are often used. Although effective at locating clusters, such approaches leave the interpretation of the clusters as a task for the human analyst. A relatively new class of “conceptual” clustering techniques has emerged from the discipline of machine learning. These techniques attempt to both locate and explain clusters among the data. In this way, the explanations of cluster membership may be used to construct rules for the subsequent classification of new data. The generation of interpretable rules, whether by the use of classification algorithms or conceptual clustering algorithms, is of considerable importance in reducing the knowledge acquisition “bottleneck” that often impedes progress towards the building of rule-based systems. In this paper, two new techniques for conceptual clustering are introduced and compared.
Los estilos APA, Harvard, Vancouver, ISO, etc.
