
Journal articles on the topic "Cluster analysis Pattern recognition systems"


Consult the top 50 journal articles for your research on the topic "Cluster analysis Pattern recognition systems".

Next to every source in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the scholarly publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles across a wide variety of disciplines and organize your bibliography correctly.

1

NATH, RAJIV KUMAR. "FINGERPRINT RECOGNITION USING MULTIPLE CLASSIFIER SYSTEM". Fractals 15, no. 03 (September 2007): 273–78. http://dx.doi.org/10.1142/s0218348x07003605.

Full text
Abstract
In this paper, the human fingerprint, which is independent of rotation and scaling, is recognized. A multiple-classification technique based on wavelet and fractal analysis is used. It is shown that systematic incorporation of the decisions from various classifiers leads to a better decision than simply fusing them. Multiple classifiers can serve as a means of enhancing the performance of pattern recognition systems. Multiple classifier system design involves the problem of classifier fusion. This paper deals with multi-classifier systems in which each classifier uses its own representation of the input pattern, based on features collected from multiple sources. The multiple feature sources considered here are multi-fractals, wavelets and fast Fourier transform coefficients. A clustering algorithm is used to observe the efficacy of the feature sources. The multiple sources were graded according to their effectiveness at providing more non-overlapping clusters for the different groups into which the samples are to be separated. This approach first considers the best source for the feature parameters. If this feature classifies the test sample into more than one cluster, then the next-best feature is summoned to finish the remaining part of the classification process. The continuation of this process, along with the judicious selection of classifiers, succeeds in identifying a single cluster for the test sample. The results obtained after the experiments on a set of fingerprint images show that this novel technique can go a long way in avoiding ambiguity and thus limiting the need for soft-computing tools for making decisions. Our method provides a hard, concrete and accurate solution to pattern recognition problems employing multiple classifiers.
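The cascade described above — consult the best feature source first and summon the next one only while a test sample still matches more than one cluster — can be sketched roughly as follows. The nearest-centroid rule and the `tol` ambiguity threshold are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def cascade_classify(sample_per_source, centroids_per_source, tol=1.2):
    """Resolve a sample to a single cluster by consulting feature sources
    in order of quality (best first).  A cluster stays a candidate while
    its distance is within `tol` times the smallest distance; the next
    source is summoned only while several clusters remain in contention."""
    k = len(centroids_per_source[0])
    candidates = set(range(k))
    best = 0
    for x, cents in zip(sample_per_source, centroids_per_source):
        d = {c: float(np.linalg.norm(x - cents[c])) for c in candidates}
        dmin = min(d.values())
        best = min(d, key=d.get)
        candidates = {c for c in candidates if d[c] <= tol * dmin}
        if len(candidates) == 1:
            return candidates.pop()
    return best  # still ambiguous after all sources: nearest under the last
```

A sample equidistant from two clusters under the first source is passed on to the second, which settles the assignment.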
2

Zimovets, V. I., S. V. Shamatrin, D. E. Olada and N. I. Kalashnykova. "Functional Diagnostic System for Multichannel Mine Lifting Machine Working in Factor Cluster Analysis Mode". Journal of Engineering Sciences 7, no. 1 (2020): E20–E27. http://dx.doi.org/10.21272/jes.2020.7(1).e4.

Full text
Abstract
The primary direction for increasing the reliability of automated control systems of complex electromechanical machines is the application of intelligent information technologies for the analysis of diagnostic information directly in the operating mode. Therefore, creating the basics of information synthesis of a functional diagnosis system (FDS) based on machine learning and pattern recognition is a topical task. In this case, the synthesized FDS must be adaptive to arbitrary initial conditions of the technological process and practically invariant to the multidimensionality of the space of diagnostic features and of the alphabet of recognition classes, which characterize the possible technical states of the units and devices of the machine. In addition, an essential feature of the FDS is the ability to retrain by increasing the power of the alphabet of recognition classes. In the article, information synthesis of the FDS is performed within the framework of information-extreme intellectual data analysis technology, which is based on maximizing the information capacity of the system in the process of machine learning. The idea of factor cluster analysis was realized by forming an additional training matrix of unclassified feature vectors of a new recognition class obtained during the operation of the FDS directly in the operating mode. The proposed algorithm allows performing factor cluster analysis in the case of structured feature vectors of several recognition classes. In this case, additional training matrices of the corresponding recognition classes are formed by the agglomerative method of cluster analysis using the k-means procedure. The proposed method of factor cluster analysis is implemented on the example of information synthesis of the FDS of a multi-core mine lifting machine.
Keywords: information-extreme intelligent technology, a system of functional diagnostics, multichannel mine lifting machine, machine learning, factor cluster analysis.
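The clustering step the abstract mentions — splitting unclassified feature vectors into groups with the k-means procedure so that each group becomes an additional training matrix for a newly observed recognition class — can be sketched with a plain Lloyd's-iteration k-means (a generic version, not the authors' information-extreme implementation):

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain k-means (Lloyd's algorithm).  Returns integer labels and the
    cluster centroids; X[labels == c] is then the additional training
    matrix for the c-th newly formed recognition class."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        labels = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(-1).argmin(1)
        # keep the old centroid if a cluster momentarily empties out
        centroids = np.array([X[labels == c].mean(0) if (labels == c).any()
                              else centroids[c] for c in range(k)])
    return labels, centroids
```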
3

LBOV, G. S. "LOGICAL DECISION RULES FOR AUTOMATIC DISCOVERY OF KNOWLEDGE IN EXPERT SYSTEMS DATABASE". International Journal of Pattern Recognition and Artificial Intelligence 03, no. 01 (March 1989): 135–45. http://dx.doi.org/10.1142/s0218001489000127.

Full text
Abstract
We consider the class of logical decision rules and its applications for the solution of various problems of multivariate statistical analysis: discriminant and regression analysis, and cluster analysis. Some useful properties of the statistical analysis methods using the class under consideration are shown. Particular attention is paid to the possibility of presenting statistical results (unlike all other methods) in a language close to a natural language of logical statements.
4

Tang, Hongyan, Ying Li, Tong Jia, Xiaoyong Yuan and Zhonghai Wu. "Analysis of Frequently Failing Tasks and Rescheduling Strategy in the Cloud System". International Journal of Distributed Systems and Technologies 9, no. 1 (January 2018): 16–38. http://dx.doi.org/10.4018/ijdst.2018010102.

Full text
Abstract
To better understand task failures in cloud computing systems, the authors analyze the failure frequency of tasks in the Google cluster dataset and find some frequently failing tasks that suffer from long-term failures and repeated rescheduling, called killer tasks, as they can be a big concern for cloud systems. Hence there is a need to analyze killer tasks thoroughly and recognize them precisely. In this article, the authors first investigate the resource usage pattern of killer tasks and analyze rescheduling strategies for killer tasks in the Google cluster, finding that repeated rescheduling causes a large amount of resource wasting. Based on the above observations, they then propose an online killer task recognition service to recognize killer tasks at the very early stage of their occurrence so as to avoid unnecessary resource wasting. The experiment results show that the proposed service achieves 93.6% accuracy in recognizing killer tasks, with an 87% timing advance and 86.6% resource saving for the cloud system on average.
5

Kano, Makoto, Kunihiro Nishimura, Shuichi Tsutsumi, Hiroyuki Aburatani, Koichi Hirota and Michitaka Hirose. "Cluster Overlap Distribution Map: Visualization for Gene Expression Analysis Using Immersive Projection Technology". Presence: Teleoperators and Virtual Environments 12, no. 1 (February 2003): 96–109. http://dx.doi.org/10.1162/105474603763835369.

Full text
Abstract
In this paper, we discuss possible applications of virtual reality technologies, such as immersive projection technology (IPT), in the field of genome science, and propose cluster-oriented visualization that attaches importance to data separation of large gene data sets with multiple variables. Based on these strategies, we developed the cluster overlap distribution map (CDCM), a visualization methodology using IPT for pairwise comparison between cluster sets generated from different gene expression data sets. This methodology effectively provides the user with indications of gene clusters that are worth a close examination. In addition, by using the plate window manager system, which enables the user to manipulate existing 2D GUI applications in the virtual 3D space, we developed a virtual environment for comprehensive analysis, from providing the indications to further examination by referring to databases on Web sites. Our system was applied to the comparison between the gene expression data sets of hepatocellular carcinomas and hepatoblastomas, and the effectiveness of the system was confirmed.
6

Huang, Mingxia, Xuebo Yan, Zhu Bai, Haiqiang Zhang and Zeen Xu. "Key Technologies of Intelligent Transportation Based on Image Recognition and Optimization Control". International Journal of Pattern Recognition and Artificial Intelligence 34, no. 10 (January 9, 2020): 2054024. http://dx.doi.org/10.1142/s0218001420540245.

Full text
Abstract
With the development of digital image processing technology, the range of image recognition applications has grown steadily, touching many aspects of daily life. In particular, the rapid pace of urbanization and the popularization of automobiles in recent years have led to a sharp increase in traffic problems in many countries, so intelligent transportation technology based on image processing and optimization control has become an important research field of intelligent systems. Starting from an analysis of the application demands of intelligent transportation systems, this paper designs a set of high-definition bayonet (checkpoint) systems for intelligent transportation. It combines data mining technology with the distributed parallel Hadoop framework to design the architecture of an intelligent traffic operation state data analysis system, together with mining algorithms suited to it, and demonstrates the feasibility of that system with experiments on real traffic big data, with the aim of providing decision support on traffic state. Using a deployed Hadoop server cluster and an AdaBoost algorithm under an improved MapReduce programming model, the example processes large traffic data sets, performs traffic flow and speeding analysis, and extracts information conducive to traffic control, demonstrating the feasibility and effectiveness of using the Hadoop platform to mine massive traffic information.
7

Wanner, Franz, Wolfgang Jentner, Tobias Schreck, Andreas Stoffel, Lyubka Sharalieva and Daniel A. Keim. "Integrated visual analysis of patterns in time series and text data - Workflow and application to financial data analysis". Information Visualization 15, no. 1 (April 1, 2015): 75–90. http://dx.doi.org/10.1177/1473871615576925.

Full text
Abstract
In this article, we describe a workflow and tool that allows a flexible formation of hypotheses about text features and their combinations, which are significantly connected in time to quantitative phenomena observed in stock data. To support such an analysis, we combine the analysis steps of frequent quantitative and text-oriented data using an existing a priori method. First, based on heuristics, we extract interesting intervals and patterns in large time series data. The visual analysis supports the analyst in exploring parameter combinations and their results. The identified time series patterns are then input for the second analysis step, in which all identified intervals of interest are analyzed for frequent patterns co-occurring with financial news. An a priori method supports the discovery of such sequential temporal patterns. Then, various text features such as the degree of sentence nesting, noun phrase complexity, and the vocabulary richness, are extracted from the news items to obtain meta-patterns. Meta-patterns are defined by a specific combination of text features which significantly differ from the text features of the remaining news data. Our approach combines a portfolio of visualization and analysis techniques, including time, cluster, and sequence visualization and analysis functionality. We provide a case study and an evaluation on financial data where we identify important future work. The workflow could be generalized to other application domains such as data analysis of smart grids, cyber physical systems, or the security of critical infrastructure, where the data consist of a combination of quantitative and textual time series data.
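The a priori (Apriori-style) pattern discovery used in the second analysis step rests on one observation: an itemset can be frequent only if every subset of it is. A minimal sketch over generic transactions (not the authors' interval/news-feature encoding):

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Frequent-itemset mining: each pass keeps the itemsets meeting
    min_support, then builds candidates one item larger, pruning any
    candidate that has an infrequent subset."""
    transactions = [frozenset(t) for t in transactions]
    n = len(transactions)
    support = lambda s: sum(s <= t for t in transactions) / n
    items = sorted({i for t in transactions for i in t})
    frequent = {}
    current = [frozenset([i]) for i in items]
    while current:
        level = {s: support(s) for s in current}
        level = {s: v for s, v in level.items() if v >= min_support}
        frequent.update(level)
        # join step: extend each survivor by one item, prune by subsets
        current = list({s | {i} for s in level for i in items if i not in s
                        if all(frozenset(sub) in level
                               for sub in combinations(s | {i}, len(s)))})
    return frequent
```

The returned dictionary maps each frequent itemset to its support, so sequential temporal patterns can then be read off by post-filtering.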
8

BOIKO, O. V., V. V. TOMAREVA-PATLAHOVA, IU А. BONDAR and M. S. KARPUNINA. "METHODICAL APPROACH TO ENSURING CLUSTER AND LOGISTICS DEVELOPMENT OF THE MARKET OF TRANSPORT SYSTEMS OF UKRAINE". Economic innovations 22, no. 4(77) (December 20, 2020): 29–38. http://dx.doi.org/10.31520/ei.2020.22.4(77).29-38.

Full text
Abstract
Topicality. In solving socio-economic problems in recent years, contradictions between market participants have been growing at all levels of the national economy. This is especially true of the justification and practical implementation of possible options based on the analysis and interpretation of empirical data relating to particular areas of development of territorial production systems or locally established industry markets, including regional transport services markets (RTP) within certain regions of the country. In this regard, there is a need to form a methodological framework for implementing a cluster-logistics approach to the development of RTP. Aim and tasks. The purpose of this work is to develop methodological support for a cluster-logistics approach to the development of RTP, defining organizational forms of interaction of regional markets in the form of transport and logistics clusters (TLC). Research results. When analyzing and forecasting socio-economic phenomena, the researcher often encounters the multidimensionality of their description. Methods of multidimensional analysis are the most effective quantitative tool for the study of socio-economic processes described by a large number of characteristics; they include cluster analysis, taxonomy, pattern recognition, factor analysis, and more. Cluster analysis most clearly reflects the features of multidimensional analysis in classification, and factor analysis in the study of relationships. The main purpose of cluster analysis is the partition of the set of studied objects and features into homogeneous groups, i.e. clusters. In cluster analysis, a metric is introduced to quantify similarity, and the similarity or difference between classified objects is set depending on the metric distance between them. In this paper, single linkage within the group of agglomerative algorithms using squared Euclidean distance is used.
A dimensionless model based on relative coefficients of the hierarchical agglomerative type is therefore proposed to analyze the development of RTP and its transport infrastructure. The use of multidimensional classification methods allowed districts to be grouped, in contrast to the traditional geographical or administrative division, by level of socio-economic development, which determines the needs of districts for transport infrastructure and cooperation and which, in particular through TLC, will ensure maximum use of existing economic potential and equalization of living conditions of the population across territories. Conclusion. The transport and socio-economic potential of the regions was analyzed; as a result, two formed TLCs and five nuclei were identified, on the basis of which it is proposed to develop TLCs of the appropriate types by joining regions with medium and low transport potential.
9

YANG, MING-DER, TUNG-CHING SU, NANG-FEI PAN and PEI LIU. "FEATURE EXTRACTION OF SEWER PIPE DEFECTS USING WAVELET TRANSFORM AND CO-OCCURRENCE MATRIX". International Journal of Wavelets, Multiresolution and Information Processing 09, no. 02 (March 2011): 211–25. http://dx.doi.org/10.1142/s0219691311004055.

Full text
Abstract
In general, sewer inspection employs a great number of CCTV images to discover sewer failures by human interpretation. A computer-aided program remains to be developed because of human fatigue and subjectivity. To enhance the efficiency of sewer inspection, this paper applies artificial intelligence to extract the failure features of sewer systems, demonstrated on the sewer system in eastern Taichung City, Taiwan. The wavelet transform and the gray-level co-occurrence matrix, which have been widely applied in many texture analyses, are adopted in this research to generate extracted features, which are the most valuable information in pattern recognition of failures on CCTV images. The wavelet transform divides an image into four sub-images: an approximation sub-image and horizontal, vertical, and diagonal detail sub-images. The co-occurrence matrices of horizontal, vertical, and 45° and 135° orientations, respectively, were calculated for the horizontal, vertical, and diagonal detail sub-images. Subsequently, features including angular second moment, entropy, contrast, homogeneity, dissimilarity, correlation, and cluster tendency can be obtained from the co-occurrence matrices. However, redundant features either decrease the accuracy of texture description or increase the difficulty of pattern recognition. Thus, the correlations of the features are estimated to search for appropriate feature sets according to the correlation coefficients between the features. In addition, a discriminant analysis was used to evaluate the discriminability of the features for pipe failure detection, and entropy, correlation, and cluster tendency were found to be the best features based on discriminant accuracy through an error matrix analysis.
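The co-occurrence step — count pairs of gray levels at a fixed offset, then derive texture features such as angular second moment, entropy, contrast and homogeneity — can be sketched as follows for an already quantized image (a textbook GLCM, not the paper's full wavelet pipeline):

```python
import numpy as np

def glcm(img, dx, dy, levels):
    """Gray-level co-occurrence matrix for one offset (dx, dy),
    normalized to a joint probability table."""
    m = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(max(0, -dy), h - max(0, dy)):
        for x in range(max(0, -dx), w - max(0, dx)):
            m[img[y, x], img[y + dy, x + dx]] += 1
    return m / m.sum()

def features(p):
    """A few of the co-occurrence features named in the abstract."""
    i, j = np.indices(p.shape)
    nz = p[p > 0]
    return {
        "asm": float((p ** 2).sum()),              # angular second moment
        "entropy": float(-(nz * np.log2(nz)).sum()),
        "contrast": float((p * (i - j) ** 2).sum()),
        "homogeneity": float((p / (1 + (i - j) ** 2)).sum()),
    }
```

For the paper's setup, the matrix would be computed per detail sub-image with the matching orientation (horizontal offset for the horizontal sub-image, and so on).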
10

Chen, Min and Simone A. Ludwig. "Particle Swarm Optimization Based Fuzzy Clustering Approach to Identify Optimal Number of Clusters". Journal of Artificial Intelligence and Soft Computing Research 4, no. 1 (January 1, 2014): 43–56. http://dx.doi.org/10.2478/jaiscr-2014-0024.

Full text
Abstract
Fuzzy clustering is a popular unsupervised learning method used in cluster analysis. Fuzzy clustering allows a data point to belong to two or more clusters. Fuzzy c-means is the best-known method applied to cluster analysis; its shortcoming, however, is that the number of clusters needs to be predefined. This paper proposes a clustering approach based on Particle Swarm Optimization (PSO). This PSO approach determines the optimal number of clusters automatically with the help of a threshold vector. The algorithm first randomly partitions the data set within a preset number of clusters, and then uses a reconstruction criterion to evaluate the performance of the clustering results. The experiments conducted demonstrate that the proposed algorithm automatically finds the optimal number of clusters. Furthermore, principal component analysis projection, conventional Sammon mapping, and fuzzy Sammon mapping were used to visualize the results.
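The two ingredients the abstract names — fuzzy c-means memberships and a reconstruction criterion for judging a candidate clustering — can be sketched as below. The PSO/threshold-vector search itself is omitted, so this is only the inner evaluation loop under standard FCM assumptions (fuzzifier m = 2):

```python
import numpy as np

def fcm(X, c, m=2.0, iters=100, seed=0):
    """Standard fuzzy c-means: returns memberships U (c x n) and
    cluster prototypes V (c x d)."""
    rng = np.random.default_rng(seed)
    U = rng.random((c, len(X)))
    U /= U.sum(axis=0)
    for _ in range(iters):
        Um = U ** m
        V = (Um @ X) / Um.sum(axis=1, keepdims=True)
        d = np.linalg.norm(X[None, :, :] - V[:, None, :], axis=2) + 1e-12
        U = d ** (-2 / (m - 1))
        U /= U.sum(axis=0)
    return U, V

def reconstruction_error(X, U, V, m=2.0):
    """Reconstruction criterion: rebuild each point from the prototypes
    weighted by its memberships and measure the squared residual."""
    Um = U ** m
    Xhat = (Um.T @ V) / Um.sum(axis=0)[:, None]
    return float(((X - Xhat) ** 2).sum())
```

In the paper's scheme a swarm would propose candidate partitions and the threshold vector would prune clusters; here one would simply compare `reconstruction_error` across candidate values of `c`.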
11

Zeng, Xiang-Yan, Yen-Wei Chen and Zensho Nakao. "Classification of Remotely Sensed Images Using Independent Component Analysis and Spatial Consistency". Journal of Advanced Computational Intelligence and Intelligent Informatics 8, no. 2 (March 20, 2004): 216–22. http://dx.doi.org/10.20965/jaciii.2004.p0216.

Full text
Abstract
We apply independent component analysis (ICA) to learn efficient color representation of remotely sensed images. Among the three basis functions obtained from RGB color space, two are in an opposing-color model by which the responses of R, G and B cones are combined in opposing fashions. This is coincident with the idea of contrasting reflected in many color systems. The interesting point is that there is no summation component that corresponds to illumination in other transforms. Spectral independent components are then used to cluster pixels. After pixel-based classification, we segment an image on the basis of regions by spatial consistency. Experimental results show that this method considerably improves the classification performance of multispectral remotely sensed images.
12

Bezdek, James C. "Generalized C-Means Algorithms for Medical Image Analysis". Proceedings, annual meeting, Electron Microscopy Society of America 48, no. 1 (August 12, 1990): 448–49. http://dx.doi.org/10.1017/s0424820100180999.

Full text
Abstract
Diagnostic machine vision systems that attempt to interpret medical imagery almost always include (and depend upon) one or more pattern recognition algorithms (cluster analysis and classifier design) for low- and intermediate-level image data processing. This includes, of course, image data collected by electron microscopes. Approaches based on both statistical and fuzzy models are found in the texts by Bezdek; Duda and Hart; Dubes and Jain; and Pao. Our talk examines the c-means families as they relate to medical image processing. We discuss and exemplify applications in segmentation (MRI data); clustering (flow cytometry data); and boundary analysis. The structure of partition spaces underlying clustering algorithms is described briefly. Let c be an integer, 1 < c < n, and let X = {x1, x2, ..., xn} denote a set of n column vectors in R^s. X is numerical object data; the k-th object (some physical entity such as a medical patient, PAP smear image, color photograph, etc.) has xk as its numerical representation; xkj is the j-th characteristic (or feature) associated with object k.
13

Kanirajan, P., M. Joly and T. Eswaran. "Recognition of Power Quality Disturbances using Fuzzy Expert Systems". WSEAS TRANSACTIONS ON SIGNAL PROCESSING 16 (January 19, 2021): 166–77. http://dx.doi.org/10.37394/232014.2020.16.18.

Full text
Abstract
This paper presents a new approach to detecting and classifying power quality disturbances in the power system using Fuzzy C-means clustering, fuzzy logic (FL) and Radial Basis Function Neural Networks (RBFNN). Features extracted through wavelets are used for training; after training, the obtained weights are used to classify the power quality problems in the RBFNN, but this suffers from extensive computation and low convergence speed. FL is then proposed to detect and classify the events: the extracted characteristics are used to determine membership functions, with the fuzzy rules derived from the inherent power quality behavior. For the classification, 5 types of disturbances are taken into account, and the classification performance of FL is compared with the RBFNN. Clustering analysis is used to group the data into clusters, identifying the class of the data with the Fuzzy C-means algorithm. The classification accuracy of FL and Fuzzy C-means clustering is improved with the help of the cognitive and social behavior of particles, along with the fitness value, using Particle Swarm Optimization (PSO), by determining the ranges of the features of the membership function for each rule so as to identify each disturbance specifically. The simulation results using Fuzzy C-means clustering show significant improvements and give classification results in less than a cycle compared with the other approaches considered.
14

Etemadpour, Ronak and Angus Graeme Forbes. "Density-based motion". Information Visualization 16, no. 1 (July 26, 2016): 3–20. http://dx.doi.org/10.1177/1473871615606187.

Full text
Abstract
A common strategy for encoding multidimensional data for visual analysis is to use dimensionality reduction techniques that project data from higher dimensions onto a lower-dimensional space. This article examines the use of motion to retain an accurate representation of the point density of clusters that might otherwise be lost when a multidimensional dataset is projected into a two-dimensional space. Specifically, we consider different types of density-based motion, where the magnitude of the motion is directly related to the density of the clusters. We investigate how users interpret motion in two-dimensional scatterplots and whether or not they are able to effectively interpret the point density of the clusters through motion. We conducted a series of user studies with both synthetic and real-world datasets to explore how motion can help users in completing various multidimensional data analysis tasks. Our findings indicate that for some tasks, motion outperforms the static scatterplots; circular path motions in particular give significantly better results compared to the other motions. We also found that users were easily able to distinguish clusters with different densities as long as the magnitudes of motion were above a particular threshold. Our results indicate that incorporating density-based motion into visualization analytics systems effectively enables the exploration and analysis of multidimensional datasets.
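One way to read "the magnitude of the motion is directly related to the density of the clusters" is to derive each cluster's motion amplitude from a density estimate of its projected points. The density proxy and the circular path below are illustrative assumptions, not the authors' exact mapping:

```python
import numpy as np

def circular_motion_positions(points, t, base_amp=1.0):
    """Density-based circular-path motion: every point in a cluster is
    displaced along a circle whose radius grows with the cluster's point
    density, so denser clusters visibly move more.  t is animation phase
    in [0, 1)."""
    centroid = points.mean(axis=0)
    spread = np.linalg.norm(points - centroid, axis=1).mean() + 1e-9
    density = len(points) / spread           # crude 2D density proxy
    amp = base_amp * density
    angle = 2 * np.pi * t
    return points + amp * np.array([np.cos(angle), np.sin(angle)])
```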
15

Summers, Kenneth L., Thomas Preston Caudell, Kathryn Berkbigler, Brian Bush, Kei Davis and Steve Smith. "Graph Visualization for the Analysis of the Structure and Dynamics of Extreme-Scale Supercomputers". Information Visualization 3, no. 3 (July 8, 2004): 209–22. http://dx.doi.org/10.1057/palgrave.ivs.9500079.

Full text
Abstract
We are exploring the development and application of information visualization techniques for the analysis of new massively parallel supercomputer architectures. Modern supercomputers typically comprise very large clusters of commodity SMPs interconnected by possibly dense and often non-standard networks. The scale, complexity, and inherent non-locality of the structure and dynamics of this hardware, and the operating systems and applications distributed over them, challenge traditional analysis methods. As part of the á la carte (A Los Alamos Computer Architecture Toolkit for Extreme-Scale Architecture Simulation) team at Los Alamos National Laboratory, who are simulating these new architectures, we are exploring advanced visualization techniques and creating tools to enhance analysis of these simulations with intuitive three-dimensional representations and interfaces. This work complements existing and emerging algorithmic analysis tools. In this paper, we give background on the problem domain, a description of a prototypical computer architecture of interest (on the order of 10,000 processors connected by a quaternary fat-tree communications network), and a presentation of three classes of visualizations that clearly display the switching fabric and the flow of information in the interconnecting network.
16

Chornous, Galyna O. "PROACTIVE DECISION-MAKING MECHANISM BASED ON MINING TECHNOLOGY". Ekonomika 91, no. 1 (January 1, 2012): 105–17. http://dx.doi.org/10.15388/ekon.2012.0.904.

Full text
Abstract
The main idea of this study is to connect the possibilities of mining technology with the methodology of proactive management of social and economic systems. The continuing complication of all spheres of social life requires improving management forms and methods. Modern methods of decision support and appropriate information technology make it possible to improve classical approaches, one of which is proactive management. Given the limits of classical methods, proactive management should be supported by an appropriate mining technology that can automatically extract new non-trivial knowledge from data in the form of patterns, relationships, laws, etc. This synthetic technology combines the latest achievements of artificial intelligence, mathematics, statistics and heuristic approaches, including Data Mining, OLAP and others. Using mining technology makes it possible: to implement data monitoring, preparation and analysis (collection and presentation of data, detection of situations); to identify problem situations (recognize patterns of problem situations; correlate the pattern of the current situation with patterns of problem situations; determine the structure of the problem situation, identifying factors and relationships); to prioritize problems, trends and challenges, their expectations and effects (predicting the development of the situation with and without managerial influence); to pose tasks (analyze deviations in terms of activity; define goals, criteria, operating conditions); and so on. The following models (using the methods of "nearest neighbour", rule induction, causal networks, statistical methods, associations, neural networks, decision trees, etc.) can be used: cluster allocation of situations, classification of patterns, situation identification models, pattern recognition models, prediction models, optimization models, and causal relationship models.
17

Kuravsky, L. S., G. A. Yuryev and V. I. Zlatomrezhev. "New approaches for assessing the activities of operators of complex technical systems". Experimental Psychology (Russia) 12, no. 4 (2019): 27–49. http://dx.doi.org/10.17759/exppsy.2019120403.

Full text
Abstract
Presented are new approaches for supporting the outcome grading of activities of operators of complex technical systems. They are based on comparisons of current exercises with activity-database patterns, both in the wavelet-representation metric associated with time series of activity parameters and in the likelihood metric of eigenvalue trajectories for transforms of these parameters, as well as on probabilistic assessments of skill-class recognition using sample distribution functions of exercise distances to cluster centers in a scaling space and Bayesian likelihood estimations aided by a probabilistic profile of staying within activity-parameter ranges. These techniques have demonstrated the capability of recognizing sets of abnormal exercises and of detecting the parameters characterizing operator mistakes, so as to reveal the causes of abnormality. The techniques in question overcome limitations of existing methods and provide advantages over manual data analysis, since they greatly reduce the combinatorial enumeration of the options considered.
18

Bustamam, Alhadi, Devvi Sarwinda and Gianinna Ardenaswari. "Texture and Gene Expression Analysis of the MRI Brain in Detection of Alzheimer's Disease". Journal of Artificial Intelligence and Soft Computing Research 8, no. 2 (April 1, 2018): 111–20. http://dx.doi.org/10.1515/jaiscr-2018-0008.

Full text
Abstract
Alzheimer's disease is a type of dementia that can cause problems with human memory, thinking and behavior. The disease causes cell death and nerve tissue damage in the brain. The damage can be detected using brain volume, whole-brain shape, and genetic testing. In this research, we propose texture analysis of the brain together with genomic analysis to detect Alzheimer's disease. 3D MRI images were chosen to analyze the texture of the brain, and microarray data were chosen to analyze gene expression. We classified subjects into three groups: Alzheimer's, Mild Cognitive Impairment (MCI), and Normal. In this study, texture analysis was carried out using the Advanced Local Binary Pattern (ALBP) and the Gray Level Co-occurrence Matrix (GLCM). We also propose a bi-clustering method to analyze the microarray data. The experimental results from texture analysis show that ALBP performed better than GLCM in classification of Alzheimer's disease, achieving accuracies between 75% and 100% for binary classification of the whole-brain data. Furthermore, the bi-clustering method applied to the microarray data performs well on gene expression, yielding a total of six bi-clusters whose expression information is associated with Alzheimer's disease.
19

Lozano, Vincent, Stéphane Ubéda and Xavier Vigouroux. "Parallel Tools for Colored Image Processing". International Journal of Pattern Recognition and Artificial Intelligence 11, no. 07 (November 1997): 1065–84. http://dx.doi.org/10.1142/s0218001497000482.

Abstract
Over recent years, clusters of workstations have begun to replace mainframe systems. Such workstations can be made to behave as a parallel machine, and can then supply supercomputing performance at modest cost. We report an experiment with such a network within the PVM framework for an industrial image analysis application. The goal of the application is to build an automated process for textile color pattern analysis. We plan to build several tools for parallel image analysis; among them, we present here the parallelization of a quantization algorithm and a way to parallelize a color image pyramid using the Parallel Virtual Machine environment. One important point is the use of the distributed storage that a set of workstations represents: this distribution allows the complete parallelization of the read/write operations.
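The strip-wise parallel quantization idea can be illustrated with a thread pool standing in for the PVM workers. This is a hypothetical sketch of the work distribution only; the original system used PVM processes on a workstation network:

```python
from concurrent.futures import ThreadPoolExecutor

def quantize_strip(strip, levels=4, depth=256):
    """Uniformly quantize the gray values of one strip of rows."""
    step = depth // levels
    return [[(px // step) * step for px in row] for row in strip]

def quantize_parallel(image, workers=4):
    """Deal rows out to workers round-robin, quantize concurrently, reassemble."""
    strips = [image[w::workers] for w in range(workers)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(quantize_strip, strips))
    out = [None] * len(image)
    for w, strip in enumerate(results):
        for j, row in enumerate(strip):
            out[w + j * workers] = row   # undo the round-robin split
    return out

image = [[0, 100, 200, 255]] * 8
print(quantize_parallel(image) == quantize_strip(image))  # True: same result
```

Because each strip is independent, the workers need no communication until reassembly, which mirrors the distributed read/write point made in the abstract.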
20

Gardner, Henry J. and Michael A. Martin. "Analyzing Ordinal Scales in Studies of Virtual Environments: Likert or Lump It!" Presence: Teleoperators and Virtual Environments 16, no. 4 (August 1, 2007): 439–46. http://dx.doi.org/10.1162/pres.16.4.439.

Abstract
Likert scaled data, which are frequently collected in studies of interaction in virtual environments, demand specialized statistical tools for analysis. The routine use of statistical methods appropriate for continuous data in this context can lead to significant inferential flaws. Likert scaled data are ordinal rather than interval scaled and need to be analyzed using rank based statistical procedures that are widely available. Likert scores are “lumpy” in the sense that they cluster around a small number of fixed values. This lumpiness is made worse by the tendency for subjects to cluster towards either the middle or the extremes of the scale. We suggest an ad hoc method to deal with such data which can involve a further lumping of the results followed by the application of nonparametric statistics. Averaging Likert scores over several different survey questions, which is sometimes done in studies of interaction in virtual environments, results in a different sort of lumpiness. The lumped variables which are obtained in this manner can be quite murky and should be used with great caution, if at all, particularly if the number of questions over which such averaging is carried out is small.
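A rank-based procedure of the kind recommended here is the Mann-Whitney U test, which copes with the heavy ties ("lumpiness") of Likert scores through midranks. A minimal self-contained sketch with toy responses of our own invention, not data from the study:

```python
def midranks(values):
    """Average ranks (1-based), handling the heavy ties typical of Likert data."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(values):
        j = i
        while j + 1 < len(values) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def mann_whitney_u(group_a, group_b):
    """U statistic for group A, computed from the pooled midranks."""
    ranks = midranks(list(group_a) + list(group_b))
    rank_sum_a = sum(ranks[:len(group_a)])
    return rank_sum_a - len(group_a) * (len(group_a) + 1) / 2

a = [4, 5, 5, 4, 3]  # Likert responses, condition A
b = [2, 3, 2, 1, 3]  # Likert responses, condition B
print(mann_whitney_u(a, b))  # 24.0 of a maximum possible 25
```

A U close to the maximum n_a * n_b indicates that almost every A response outranks every B response, which is exactly the ordinal comparison the authors argue for.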
21

Artacho, Miguel Ángel, Enrique Alcántara and Natividad Martínez. "Multisensory Analysis of Consumer–Product Interaction During Ceramic Tile Shopping Experiences". Multisensory Research 33, no. 2 (January 8, 2020): 213–49. http://dx.doi.org/10.1163/22134808-20191391.

Abstract
The need to design products that engage several senses has been increasingly recognised by design and marketing professionals. Many works analyse the impact of sensory stimuli on the hedonic, cognitive, and emotional responses of consumers, as well as on their satisfaction and intention to purchase. However, there is much less information about the utilitarian dimension related to a sensory non-reflective analysis of the tangible elements of the experience, the sequential role played by different senses, and their relative importance. This work analyses the sensorial dimension of consumer interactions in shops. Consumers were filmed in two ceramic tile shops and their behaviour was analysed according to a previously validated checklist. The sequence of actions, their frequency of occurrence, and the duration of inspections were recorded, and consumers were classified according to their sensory exploration strategies. Results show that inspection patterns are intentional but shifting throughout the interaction. Considering the whole sequence, vision is the dominant sense, followed by touch. However, sensory dominance varies throughout the sequence, with dominance differences appearing between all senses and within the senses of vision, touch, and audition. Cluster analysis classified consumers into two groups: those who were more interactive, and those who were visual and passive evaluators. These results are very important for understanding consumer interaction patterns, which senses are involved (including their importance and hierarchy), and which sensory properties of tiles are evaluated during the shopping experience. Moreover, this information is crucial for setting design guidelines to improve sensory interactions and bridge sensory demands with product features.
22

Yulin, Sergey S. and Irina N. Palamar. "Probability Model Based on Cluster Analysis to Classify Sequences of Observations for Small Training Sets". Statistics, Optimization & Information Computing 8, no. 1 (February 18, 2020): 296–303. http://dx.doi.org/10.19139/soic-2310-5070-690.

Abstract
The problem of recognizing patterns when few training data are available is particularly relevant, and arises when the collection of training data is expensive or essentially impossible. This work proposes a new probability model, MC&CL (Markov Chain and Clusters), based on a combination of a Markov chain and clustering algorithms (Kohonen self-organizing maps, the k-means method), to solve the problem of classifying sequences of observations when the amount of training data is low. An original experimental comparison is made between the developed model (MC&CL) and a number of other popular models for classifying sequences: HMM (Hidden Markov Model), HCRF (Hidden Conditional Random Fields), LSTM (Long Short-Term Memory), and kNN+DTW (k-Nearest Neighbors + Dynamic Time Warping). The comparison uses synthetic random sequences generated from a hidden Markov model, with noise added to the training specimens. The suggested model shows the best classification accuracy among those under review when the amount of training data is low.
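The Markov-chain-plus-clustering combination can be sketched as: discretize observations via nearest cluster centers (here simply assumed to come from k-means), fit a smoothed transition matrix per class, and classify a new sequence by log-likelihood. This is an illustrative reconstruction with toy sequences, not the authors' code:

```python
import math

def discretize(seq, centers):
    """Replace each observation by the index of its nearest cluster center."""
    return [min(range(len(centers)), key=lambda k: abs(x - centers[k])) for x in seq]

def transition_model(sequences, n_states, alpha=1.0):
    """Markov transition matrix with additive smoothing over labeled sequences."""
    counts = [[alpha] * n_states for _ in range(n_states)]
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    return [[c / sum(row) for c in row] for row in counts]

def log_likelihood(seq, matrix):
    return sum(math.log(matrix[a][b]) for a, b in zip(seq, seq[1:]))

# Cluster centers assumed to come from k-means on the training observations.
centers = [0.0, 1.0]
blocks_train = [discretize([0, 0, 1, 1, 0, 0, 1, 1], centers)]  # alternating blocks
low_train = [discretize([0, 0, 0, 0, 0, 0, 0, 1], centers)]     # mostly low
models = {"blocks": transition_model(blocks_train, 2),
          "low": transition_model(low_train, 2)}

test_seq = discretize([0, 0, 1, 1, 0, 0, 1], centers)
best = max(models, key=lambda name: log_likelihood(test_seq, models[name]))
print(best)  # "blocks"
```

The smoothing constant alpha keeps unseen transitions from zeroing out the likelihood, which matters precisely in the small-training-set regime the paper targets.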
23

Witmer, Bob G., Christian J. Jerome and Michael J. Singer. "The Factor Structure of the Presence Questionnaire". Presence: Teleoperators and Virtual Environments 14, no. 3 (June 2005): 298–312. http://dx.doi.org/10.1162/105474605323384654.

Abstract
Constructing a valid measure of presence and discovering the factors that contribute to presence have been much sought after goals of presence researchers and at times have generated controversy among them. This paper describes the results of principal-components analyses of Presence Questionnaire (PQ) data from 325 participants following exposure to immersive virtual environments. The analyses suggest that a 4-factor model provides the best fit to our data. The factors are Involvement, Adaptation/Immersion, Sensory Fidelity, and Interface Quality. Except for the Adaptation/Immersion factor, these factors corresponded to those identified in a cluster analysis of data from an earlier version of the questionnaire. The existence of an Adaptation/Immersion factor leads us to postulate that immersion is greater for those individuals who rapidly and easily adapt to the virtual environment. The magnitudes of the correlations among the factors indicate moderately strong relationships among the 4 factors. Within these relationships, Sensory Fidelity items seem to be more closely related to Involvement, whereas Interface Quality items appear to be more closely related to Adaptation/Immersion, even though there is a moderately strong relationship between the Involvement and Adaptation/Immersion factors.
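A principal-components analysis of questionnaire items, as used above, reduces to an eigen-decomposition of the item covariance matrix. A generic sketch on synthetic data (the PQ responses themselves are not reproduced here; the factor and item counts are illustrative):

```python
import numpy as np

def principal_components(X, n_components):
    """PCA via eigen-decomposition of the item covariance matrix."""
    Xc = X - X.mean(axis=0)
    cov = np.cov(Xc, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)   # returned in ascending order
    order = np.argsort(eigvals)[::-1]        # largest variance first
    return eigvals[order][:n_components], eigvecs[:, order[:n_components]]

rng = np.random.default_rng(0)
# Synthetic questionnaire: 2 latent factors driving 6 items, 325 respondents.
factors = rng.normal(size=(325, 2))
loadings = rng.normal(size=(2, 6))
X = factors @ loadings + 0.1 * rng.normal(size=(325, 6))
variances, components = principal_components(X, 2)
print(components.shape)              # (6, 2): one loading column per component
print(variances[0] >= variances[1])  # True: sorted by explained variance
```

Inspecting which items load heavily on each retained component is the step that yields named factors such as Involvement or Sensory Fidelity.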
24

Hachaj, Tomasz and Patryk Mazurek. "Comparative Analysis of Supervised and Unsupervised Approaches Applied to Large-Scale “In The Wild” Face Verification". Symmetry 12, no. 11 (November 5, 2020): 1832. http://dx.doi.org/10.3390/sym12111832.

Abstract
Deep learning-based feature extraction methods and transfer learning have become common approaches in the field of pattern recognition. Deep convolutional neural networks trained using triplet-based loss functions allow for the generation of face embeddings, which can be directly applied to face verification and clustering. Knowledge about the ground truth of face identities might improve the effectiveness of the final classification algorithm; however, it is also possible to use ground-truth clusters previously discovered using an unsupervised approach. The aim of this paper is to evaluate the potential improvement in the classification results of state-of-the-art supervised classification methods trained with and without ground-truth knowledge. In this study, we use two sufficiently large data sets containing more than 200,000 “taken in the wild” images with various resolutions, visual quality, and face poses which, in our opinion, guarantee the statistical significance of the results. We examine several clustering and supervised pattern recognition algorithms and find that knowledge about the ground truth has a very small influence on the Fowlkes–Mallows score (FMS) of the classification algorithm. In the case of the classification algorithm that obtained the highest accuracy in our experiment, the FMS improved by only 5.3% (from 0.749 to 0.791) on the first data set and by 6.6% (from 0.652 to 0.718) on the second. Our results show that, apart from highly secure systems in which face verification is a key component, face identities discovered by unsupervised approaches can be safely used for training supervised classifiers. We also found that the Silhouette Coefficient (SC) of unsupervised clustering is positively correlated with the Adjusted Rand Index, V-measure score, and Fowlkes–Mallows score, so the SC can be used as an indicator of clustering performance when the ground truth of face identities is not known. All of these conclusions are important findings for large-scale face verification problems, since skipping the verification of people’s identities before supervised training saves considerable time and resources.
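The Fowlkes–Mallows score used throughout the paper is a pair-counting measure and is straightforward to compute from scratch (generic sketch with illustrative labelings):

```python
import math
from collections import Counter

def fowlkes_mallows(labels_true, labels_pred):
    """Pair-counting Fowlkes-Mallows score between two labelings of the same items."""
    def pairs(counts):
        return sum(n * (n - 1) // 2 for n in counts)
    tp = pairs(Counter(zip(labels_true, labels_pred)).values())  # joined in both
    in_true = pairs(Counter(labels_true).values())               # joined in truth
    in_pred = pairs(Counter(labels_pred).values())               # joined in prediction
    return tp / math.sqrt(in_true * in_pred)

truth = ["a", "a", "a", "b", "b", "c"]
pred = [0, 0, 1, 1, 2, 2]
print(fowlkes_mallows(truth, truth))           # 1.0 for identical groupings
print(round(fowlkes_mallows(truth, pred), 3))  # 0.289
```

The score is the geometric mean of pair-level precision and recall, so it rewards clusterings that keep same-identity faces together without merging identities.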
25

Vankayalapati, Revathi, Kalyani Balaso Ghutugade, Rekha Vannapuram and Bejjanki Pooja Sree Prasanna. "K-Means Algorithm for Clustering of Learners Performance Levels Using Machine Learning Techniques". Revue d'Intelligence Artificielle 35, no. 1 (February 28, 2021): 99–104. http://dx.doi.org/10.18280/ria.350112.

Abstract
Data clustering is the process of grouping objects so that objects in the same group are more similar to each other than to those in other groups. In this paper, k-means clustering is used to assess the performance of students. Machine learning is applied in many systems, including education, pattern recognition, sports, and industrial applications, and its significance grows with the future of students in the educational system. Data collection in education is very useful, as data volumes in the education system grow each day; data mining in higher education is relatively new, but its significance grows with the expanding databases. There are several ways to assess the performance of students, and k-means is one of the most successful. Hidden information in the database is extracted using data mining to improve student performance; decision trees are also a way to predict student success. In recent years, educational institutions have faced the challenge of growing data volumes and of using them for better decision-making. Clustering is one of the most important methods for analyzing such data sets. This study uses cluster analysis to partition students into classes according to their features, and the unsupervised k-means algorithm is discussed. Educational data mining studies the knowledge available in the field of education in order to extract hidden, significant, and useful information. The proposed model applies k-means clustering to analyze learner performance, so that student outcomes and futures can be strengthened. The results show that the k-means algorithm is useful for grouping students based on similar performance features.
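Lloyd's k-means iteration on one-dimensional exam scores, the core of the proposed model, fits in a few lines (toy scores and initial centers of our choosing):

```python
def kmeans_1d(points, centers, iters=20):
    """Lloyd's algorithm on scalar scores: assign to nearest center, recompute means."""
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for p in points:
            nearest = min(range(len(centers)), key=lambda k: abs(p - centers[k]))
            clusters[nearest].append(p)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers, clusters

# Exam scores with three visible performance levels.
scores = [35, 40, 38, 62, 65, 60, 90, 95, 92]
centers, groups = kmeans_1d(scores, centers=[30.0, 60.0, 90.0])
print(sorted(round(c, 1) for c in centers))  # [37.7, 62.3, 92.3]
```

The converged centers land on the three performance levels, which is the grouping of learners the paper describes.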
26

Vázquez, M. J., R. A. Lorenzo and R. Cela. "Chlorophenols identification in water using an electronic nose and ANNs (artificial neural networks) classification". Water Science and Technology 49, no. 9 (May 1, 2004): 99–105. http://dx.doi.org/10.2166/wst.2004.0544.

Abstract
Electronic artificial noses are being developed as systems for the automated detection and classification of odours, vapors and gases. In the food industry, such devices are used as aids for quality control or process-monitoring tools. An electronic nose (EN) is generally composed of a chemical sensing system and a pattern recognition system (e.g. an artificial neural network). An EN based on a non-specific conducting polymer array was used to monitor chlorophenols in water samples. Operational parameters for the EN were optimized by a Plackett-Burman factorial design. The experimental parameters studied were: sample volume, platen temperature, sample equilibration time, loop fill time, sample pressurization time and injection time. Optimal experimental conditions were applied to chlorophenol determination and differentiation in ultrapure water samples spiked with the EPA-listed chlorophenols. Data analysis was carried out using principal component analysis (PCA) and artificial neural networks (ANNs) to predict the presence of chlorophenols in water samples. The results showed that it was possible to differentiate the five chlorophenol groups: monochlorophenol, dichlorophenol, trichlorophenol, tetrachlorophenol and pentachlorophenol. Differentiation of chlorophenol groups was based on the Mahalanobis distance between the formed clusters. This Mahalanobis distance is designated the Quality Factor; a value >2 for this quality factor means good differentiation between the clusters.
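The Mahalanobis distance underlying the Quality Factor can be computed directly. A generic sketch with made-up 2-D sensor clusters, not the paper's data:

```python
import numpy as np

def mahalanobis(u, v, cov):
    """Mahalanobis distance between two points under a covariance matrix."""
    diff = np.asarray(u) - np.asarray(v)
    return float(np.sqrt(diff @ np.linalg.inv(cov) @ diff))

# Made-up sensor responses of one chlorophenol group in a 2-D score space.
cluster_a = np.array([[0.0, 0.1], [0.2, -0.1], [-0.2, 0.0], [0.1, 0.05]])
cluster_b = cluster_a + np.array([2.0, 0.0])   # a second, shifted group
cov = np.cov(cluster_a.T)                      # within-cluster covariance
quality_factor = mahalanobis(cluster_a.mean(0), cluster_b.mean(0), cov)
print(quality_factor > 2)  # True: the groups count as well differentiated
```

Unlike plain Euclidean distance, this measure scales the separation by the within-cluster scatter, which is why a fixed cut-off such as >2 is meaningful across sensor channels with different variances.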
27

Zhao, Qianqian, Weixiang Zhang, Runmiao Wang, Yitao Wang and Defang Ouyang. "Research Advances in Molecular Modeling in Cyclodextrins". Current Pharmaceutical Design 23, no. 3 (February 20, 2017): 522–31. http://dx.doi.org/10.2174/1381612822666161208142617.

Abstract
Background: Cyclodextrins (CDs), as one type of novel pharmaceutical excipient, have been widely used in drug delivery and the pharmaceutical industry. Over the past decades, a large number of molecular modeling studies of CDs were reported for a profound understanding of the structural, dynamic and energetic features of CD systems. Thus, this review is focused on qualitative and quantitative analysis of research outputs on molecular modeling in CDs. Methods: The original data were collected from Web of Science and analyzed by scientific knowledge mapping tools, including Citespace, Science of Science, VOSviewer, GPSvisualizer and Gephi software. Scientific knowledge mapping, as an emerging approach to literature analysis, was employed to identify the knowledge structure and capture the development of the science in a visual way. Results: The results of the analysis included the research output landscape, collaboration patterns, knowledge structure, and the shift of research frontiers with time. China had the largest contribution to the publication count in this area, while the USA dominated the high-quality research outputs. International collaboration between the USA and Europe was much stronger than that within Asia. J American Chemical Society, as one of the most important journals, played a pivotal role in linking different research fields. Furthermore, seven important thematic clusters were identified by the research cluster analysis with visualization tools and demonstrated from three different perspectives: (1) the most-used CD molecule: β-cyclodextrin; (2) preferred modeling tools: docking calculations and molecular dynamics; (3) hot research fields: structural properties, solubility, chiral recognition and solid-state inclusion complexes. Moreover, the research frontier shift over the past three decades was traced by detecting keyword bursts with high citation.
Conclusion: The current review provides a macro-perspective and an intellectual landscape of molecular modeling in CDs.
28

BOGUŚ, Piotr. "Misfire detection in a diesel engine using clustering in a short-time analysis of vibroacoustic signals". Combustion Engines 123, no. 4 (November 1, 2005): 31–40. http://dx.doi.org/10.19206/ce-117367.

Abstract
The paper presents some results of research on new diagnostic methods for combustion engines. It describes the application of short-time signal analysis together with pattern recognition techniques in the diagnosis of misfire in Diesel engines through vibroacoustic signals, considering Diesel locomotives in particular. Among the non-road sources of combustion gases, locomotives rank relatively high as air polluters. There are some regulations in the area of locomotives (e.g. Cart UIC 623 1-2-3 in Europe), but we still observe a lack of obligatory requirements for systems monitoring emission-critical damage. Such obligatory on-board diagnostic systems were introduced for passenger cars (OBD II, EOBD). The OBD system performs continuous monitoring of basic system parameters, and one of its most important tasks is misfire detection. All these facts inclined the author to research new, relevant detection methods. The main aim of the research is to distinguish between two states: normal engine operation and the state of misfire. The general idea of the method was taken from short-time Fourier analysis. The method is based on calculating the values of selected parameters in a time window sliding along the signal. For each window position one obtains a set of parameter values, which gives a point in a corresponding multidimensional parameter space. Hence, the time evolution of the signal can be observed as an evolution plot in the parameter space. We expect that different system states (misfire) can be distinguished by the different positions of points in the parameter space. To detect these states, clustering in the parameter space was performed. The first results show the possibility of distinguishing different clusters within the parameter space which may correspond to different engine states.
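The sliding-window mapping into parameter space can be sketched generically: each window position yields a feature point, and the two engine states land in different regions. The signal and the two features (RMS level and mean absolute difference) are illustrative choices of ours, not the paper's parameters:

```python
import math

def window_features(signal, width, step):
    """Each window position yields a point (RMS, mean |diff|) in parameter space."""
    points = []
    for start in range(0, len(signal) - width + 1, step):
        w = signal[start:start + width]
        rms = math.sqrt(sum(x * x for x in w) / width)
        mad = sum(abs(b - a) for a, b in zip(w, w[1:])) / (width - 1)
        points.append((rms, mad))
    return points

# Vibration-like toy signal: loud regular running, then a quiet misfire stretch.
normal = [(-1) ** i * 1.0 for i in range(32)]
misfire = [(-1) ** i * 0.1 for i in range(32)]
points = window_features(normal + misfire, width=8, step=8)
print(points[0])   # (1.0, 2.0): the "normal" region of parameter space
print(points[-1])  # near (0.1, 0.2): the "misfire" region
```

Clustering these points (e.g. with k-means) then separates the two regimes without labeling individual windows by hand.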
29

Jaremenko, Christian, Nishant Ravikumar, Emanuela Affronti, Marion Merklein and Andreas Maier. "Determination of Forming Limits in Sheet Metal Forming Using Deep Learning". Materials 12, no. 7 (March 30, 2019): 1051. http://dx.doi.org/10.3390/ma12071051.

Abstract
The forming limit curve (FLC) is used to model the onset of sheet metal instability during forming processes, e.g. in the area of finite element analysis, and is usually determined by evaluating strain distributions derived with optical measurement systems during Nakajima tests. Current methods comprise the standardized DIN EN ISO 12004-2 or time-dependent approaches that heuristically limit the evaluation area to a fraction of the available information and show weaknesses in the context of brittle materials without a pronounced necking phase. To address these limitations, supervised and unsupervised pattern recognition methods were introduced recently. However, these approaches are still dependent on prior knowledge, time, and localization information. This study overcomes these limitations by adopting a Siamese convolutional neural network (CNN) as a feature extractor. Suitable features are automatically learned, in a supervised setup, from the extreme cases of the homogeneous and inhomogeneous forming phases. Using robust Student's t mixture models, the learned features are clustered in an unsupervised manner into three distributions that cover the complete forming process. Owing to the location and time independence of the method, the knowledge learned from specimens formed up until fracture can be transferred to other forming processes that were prematurely stopped and assessed using metallographic examinations, enabling probabilistic cluster-membership assignments for each frame of the forming sequence. The generalization of the method to unseen materials is evaluated in multiple experiments, and additionally tested on the aluminum alloy AA5182, which is characterized by Portevin-Le Chatelier effects.
30

Rudas, Imre J. "Intelligent Engineering Systems". Journal of Advanced Computational Intelligence and Intelligent Informatics 2, no. 3 (June 20, 1998): 69–71. http://dx.doi.org/10.20965/jaciii.1998.p0069.

Abstract
Building intelligent systems has been one of the great challenges since the early days of human culture. From the second half of the 18th century, two revolutionary changes played the key role in technical development, hence in creating engineering and intelligent engineering systems. The industrial revolution was made possible through technical advances, and muscle power was replaced by machine power. The information revolution of our time, in turn, can be characterized as the replacement of brain power by machine intelligence. The technique used to build engineering systems and replace muscle power can be termed "Hard Automation"1) and deals with industrial processes that are fixed and repetitive in nature. In hard automation, the system configuration and the operations are fixed and cannot be changed without considerable down-time and cost. It can be used, however, particularly in applications calling for fast, accurate operation, when manufacturing large batches of the same product. The "intelligent" area of automation is "Soft Automation," which involves the flexible, intelligent operation of an automated process. In flexible automation, the task is programmable and a work cell must be reconfigured quickly to accommodate a product change. It is particularly suitable for plant environments in which a variety of products is manufactured in small batches. Processes in flexible automation may have unexpected or previously unknown conditions, and would require a certain degree of "machine" intelligence to handle them.
The term machine intelligence has been changing with time and is machine-specific, so intelligence in this context still remains more or less a mysterious phenomenon. Following Prof. Lotfi A. Zadeh,2) we consider a system intelligent if it has a high machine intelligence quotient (MIQ). As Prof. Zadeh stated, "MIQ is a measure of intelligence of man-made systems," and it can be characterized by well-defined dimensions such as planning, decision making, problem solving, learning, reasoning, natural language understanding, speech recognition, handwriting recognition, pattern recognition, diagnostics, and the execution of high-level instructions.
Engineering practice often involves complex systems having multiple-variable and multiple-parameter models, sometimes with nonlinear coupling. The conventional approaches for understanding and predicting the behavior of such systems based on analytical techniques can prove to be inadequate, even at the initial stages of setting up an appropriate mathematical model. The computational environment used in such an analytical approach is sometimes too categoric and inflexible to cope with the intricacy and complexity of real-world industrial systems. It turns out that, in dealing with such systems, one must face a high degree of uncertainty and tolerate great imprecision. Trying to increase precision can be very costly.
In the face of the difficulties above, Prof. Zadeh proposes a different approach for machine intelligence. He separates hard-computing-based Artificial Intelligence from soft-computing-based Computational Intelligence.
• Hard computing is oriented toward the analysis and design of physical processes and systems, and is characterized by precision, formality, and categorization. It is based on binary logic, crisp systems, numerical analysis, probability theory, differential equations, functional analysis, mathematical programming, approximation theory, and crisp software.
• Soft computing is oriented toward the analysis and design of intelligent systems. It is based on fuzzy logic, artificial neural networks, and probabilistic reasoning, including genetic algorithms, chaos theory, and parts of machine learning, and is characterized by approximation and dispositionality.
In hard computing, imprecision and uncertainty are undesirable properties. In soft computing, the tolerance for imprecision and uncertainty is exploited to achieve an acceptable solution at low cost, tractability, and a high MIQ. Prof. Zadeh argues that soft rather than hard computing should be viewed as the foundation of real machine intelligence. A center, the Berkeley Initiative in Soft Computing (BISC), has been established at the University of California, Berkeley, under his direction, and devotes its activities to this concept.3) Soft computing, as he explains,2)
• is a consortium of methodologies providing a foundation for the conception and design of intelligent systems,
• is aimed at formalizing the remarkable human ability to make rational decisions in an uncertain, imprecise environment.
The guiding principle of soft computing, given by Prof. Zadeh,2) is: exploit the tolerance for imprecision, uncertainty, and partial truth to achieve tractability, robustness, low solution cost, and better rapport with reality. Fuzzy logic is mainly concerned with imprecision and approximate reasoning, neurocomputing mainly with learning and curve fitting, genetic computation mainly with searching and optimization, and probabilistic reasoning mainly with uncertainty and the propagation of belief. The constituents of soft computing are complementary rather than competitive, and experience gained over the past decade indicates that it can be more effective to use them combined rather than exclusively. Based on this approach, machine intelligence, including artificial intelligence and computational intelligence (soft computing techniques), is one pillar of Intelligent Engineering Systems.
Hundreds of new results in this area are published in journals and international conference proceedings. One such conference, organized in Budapest, Hungary, on September 15-17, 1997, was titled 'IEEE International Conference on Intelligent Engineering Systems 1997' (INES'97), sponsored by the IEEE Industrial Electronics Society, the IEEE Hungary Section, Bánki Donát Polytechnic, Hungary, and the National Committee for Technological Development, Hungary, in technical cooperation with the IEEE Robotics & Automation Society. It had around 100 participants from 29 countries. This special issue features papers selected from those presented during the conference; they are revised and expanded versions of the conference papers.
The first paper discusses an intelligent control system for an automated guided vehicle used in container terminals. Container terminals, as the centers of cargo transportation, play a key role in everyday cargo handling. Learning control has been applied to maintaining the vehicle's course and enabling it to stop at a designated location, while speed control uses conventional control. System performance was evaluated by simulation, and performance tests are slated for a test vehicle.
The second paper presents a real-time camera-based system designed for gaze tracking focused on human-computer communication. The objective was to equip computer systems with a tool that provides visual information about the user. The system detects the user's presence, then locates and tracks the face, nose and both eyes. Detection is enabled by combining image processing techniques and pattern recognition.
The third paper discusses the application of soft computing techniques to solve modeling and control problems in system engineering. After the design of classical PID and fuzzy PID controllers for nonlinear systems with an approximately known dynamic model, the neural control of a SCARA robot is considered. Fuzzy control is discussed for a special class of MIMO nonlinear systems, and the method of Wang is generalized for such systems.
The next paper describes fuzzy and neural network algorithms for word frequency prediction in document filtering. The two techniques presented are compared, and an alternative neural network algorithm is discussed.
The fifth paper highlights the theory of common-sense knowledge in representation and reasoning. A connectionist model is proposed for common-sense knowledge representation and reasoning, and experimental results using this method are presented.
The next paper introduces an expert consulting system that employs software agents to manage distributed knowledge sources. These individual software agents solve users' problems either by themselves or through mutual cooperation.
The last paper presents a methodology for creating and applying a generic manufacturing process model for mechanical parts. Based on the product model and other up-to-date approaches, the proposed model involves all possible manufacturing process variants for a cluster of manufacturing tasks. The application involves a four-level model structure and a Petri net representation of manufacturing process entities. The creation and evaluation of model entities and the representation of the knowledge built into the shape and manufacturing process models are emphasised. The proposed process model is applied in manufacturing process planning and production scheduling.
References:
1) C. W. De Silva, "Automation Intelligence," Engineering Application of Artificial Intelligence, 7-5, 471-477, (1994).
2) L. A. Zadeh, "Fuzzy Logic, Neural Networks and Soft Computing," NATO Advanced Studies Institute on Soft Computing and Its Application, Antalya, Turkey, (1996).
3) L. A. Zadeh, "Berkeley Initiative in Soft Computing," IEEE Industrial Electronics Society Newsletter, 41-3, 8-10, (1994).
31

Brendel, Rebecca, Sascha Rohn and Philipp Weller. "Nitrogen monoxide as dopant for enhanced selectivity of isomeric monoterpenes in drift tube ion mobility spectrometry with 3H ionization". Analytical and Bioanalytical Chemistry 413, no. 13 (April 10, 2021): 3551–60. http://dx.doi.org/10.1007/s00216-021-03306-7.

Abstract
The ion mobility spectra of the isomeric monoterpenes α-pinene, β-pinene, myrcene, and limonene in drift tube ion mobility spectrometry (IMS) with 3H radioactive ionization are highly similar and difficult to distinguish. The aim of this work was to enhance the selectivity of IMS by the addition of nitrogen monoxide (NO) as a dopant and to investigate the underlying changes in ion formation responsible for the modified ion signals observed in the ion mobility spectra. Even though 3H-based IMS systems have been used in hyphenation with gas chromatography (GC) for profiling volatile organic compounds (VOCs), the investigation of ion formation remains challenging, as exemplified by the investigated monoterpenes. Nonetheless, the formation of monomeric, dimeric, and trimeric ion clusters could be tentatively confirmed by a mass-to-mobility correlation, and the highly similar pattern of ion signals in the monomer region was attributed to isomerization mechanisms potentially occurring after proton transfer reactions. The addition of NO as a dopant finally led to the formation of additional product ions and increased the selectivity of IMS for the investigated monoterpenes, as confirmed by principal component analysis (PCA). The discrimination of monoterpenes in the volatile profile is highly relevant to the quality control of hops, which is given as an example application. The results indicate that additional product ions were obtained by the formation of NO+ adduct ions, next to hydride abstraction, charge transfer, or fragmentation reactions. This approach can potentially resolve selectivity issues in VOC profiling of complex matrices, such as food matrices or raw materials, in combination with chemometric pattern recognition techniques.
32

Illarionov, E., D. Sokoloff, R. Arlt and A. Khlystova. "Cluster analysis for pattern recognition in solar butterfly diagrams". Astronomische Nachrichten 332, no. 6 (July 2011): 590–96. http://dx.doi.org/10.1002/asna.201111572.

33

Ji, Hongjin, Daoming Zeng, Yanxiang Shi, Yangang Wu and Xisheng Wu. "Semi-hierarchical correspondence cluster analysis and regional geochemical pattern recognition". Journal of Geochemical Exploration 93, no. 2 (May 2007): 109–19. http://dx.doi.org/10.1016/j.gexplo.2006.10.002.

34

HAKEN, HERMANN. "SYNERGETICS: FROM PATTERN FORMATION TO PATTERN ANALYSIS AND PATTERN RECOGNITION". International Journal of Bifurcation and Chaos 04, no. 05 (October 1994): 1069–83. http://dx.doi.org/10.1142/s0218127494000782.

Abstract
It is by now well known that numerous open systems in physics (fluids, plasmas, lasers, nonlinear optical devices, semiconductors), chemistry and biology (morphogenesis) may spontaneously develop spatial, temporal or spatiotemporal structures by self-organization. Quite often, striking analogies between the corresponding patterns can be observed in spite of the fact that the underlying systems are of quite a different nature. In this paper I shall first give an outline of general concepts that allow us to deal with the spontaneous formation of structures from a unifying point of view that is based on concepts of instability, order parameters and enslavement. We shall discuss a number of generalized Ginzburg-Landau equations. In most cases treated so far, theory started from microscopic or mesoscopic equations of motion from which the evolving structures were derived. In my paper I shall address two further problems that are in a way the reverse, namely (1) Can we derive order parameters and the basic modes from observed experimental data? (2) Can we construct systems by means of an underlying dynamics that are capable of producing patterns or structures that we prescribe? In order to address (1), a new variational principle that may be derived from path integrals is introduced and illustrated by examples. An approach to the problem (2) is illustrated by the device of a computer that recognizes patterns and that may be realized by various kinds of spontaneous pattern formations in semiconductors and lasers.
35

Neidig, Klaus-Peter, Rainer Saffrich, Michael Lorenz and Hans Robert Kalbitzer. "Cluster analysis and multiplet pattern recognition in two-dimensional NMR spectra". Journal of Magnetic Resonance (1969) 89, no. 3 (October 1990): 543–52. http://dx.doi.org/10.1016/0022-2364(90)90337-9.

36

Brauer, Donna J. "A Method for Pattern Recognition". Research and Theory for Nursing Practice 20, no. 4 (December 2006): 277–90. http://dx.doi.org/10.1891/rtnp-v20i4a004.

Abstract
Although pattern is a dominant concept in nursing science, only Newman’s method for recognizing pattern has been fully articulated and widely used in research about the human health experience. This article proposes an alternative, less costly method to facilitate research with larger numbers of participants in clinical settings. Cluster analysis, a quasi-quantitative technique, and content analysis were combined to produce a technique for recognizing patterns of person–environment interaction. Results from two studies with persons experiencing a highly variable chronic illness, rheumatoid arthritis, indicated that this new approach identifies distinct common patterns of person–environment interaction. Sufficient detail about the nature of each pattern resulted to facilitate further knowledge development about the health experience and to provide guidance in structuring nursing care.
37

Inkinen, Sami J. "Pattern recognition and image analysis". Signal Processing 35, no. 1 (January 1994): 95–96. http://dx.doi.org/10.1016/0165-1684(94)90196-1.

38

Plerou, Antonia, Elena Vlamou and Vasil Papadopoulos. "EEG SIGNAL PATTERN RECOGNITION ANALYSIS: FUZZY LOGIC SYSTEMS ASCENDANCY". Advances in Fuzzy Sets and Systems 21, no. 2 (October 22, 2016): 107–19. http://dx.doi.org/10.17654/fs021020107.

39

Milojković-Opsenica, Dušanka, Petar Ristivojević, Filip Andrić and Jelena Trifković. "Planar Chromatographic Systems in Pattern Recognition and Fingerprint Analysis". Chromatographia 76, no. 19-20 (February 21, 2013): 1239–47. http://dx.doi.org/10.1007/s10337-013-2423-9.

40

Bolle, Ruud M., Nalini K. Ratha and Sharath Pankanti. "Error analysis of pattern recognition systems—the subsets bootstrap". Computer Vision and Image Understanding 93, no. 1 (January 2004): 1–33. http://dx.doi.org/10.1016/j.cviu.2003.08.002.

41

Lin, Fu-Ru, Nan-Jing Wu and Ting-Kuei Tsay. "Applications of Cluster Analysis and Pattern Recognition for Typhoon Hourly Rainfall Forecast". Advances in Meteorology 2017 (2017): 1–17. http://dx.doi.org/10.1155/2017/5019646.

Abstract
Based on the factors of meteorology and topography, it is assumed that certain patterns exist in the spatial and temporal rainfall distribution of a watershed. A typhoon rainfall forecasting model is developed under this assumption. If rainfall patterns can be analyzed and recognized in terms of an individual watershed's topography, only the spatial rainfall distribution prior to a specific moment is needed to forecast the rainfall in the coming hours; no other meteorological or climatological conditions are required. In addition, techniques for supplementing missing rain gauge data are considered in order to build an all-purpose forecast model. By integrating techniques of cluster analysis and pattern recognition, the proposed rainfall forecasting model is tested using historical data of the Tamsui River Basin in Northern Taiwan. Good performance is validated by the coefficient of correlation and the coefficient of efficiency.
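The two validation metrics named in the abstract can be sketched briefly. Below is a minimal illustration of the coefficient of efficiency (computed here as the Nash–Sutcliffe efficiency, a common reading of the term) and the coefficient of correlation; the rainfall numbers are made up for the example, not Tamsui River Basin data.

```python
# Sketch of two forecast-validation metrics: coefficient of efficiency
# (Nash-Sutcliffe form) and Pearson coefficient of correlation.
# Function names and the sample data are illustrative assumptions.
import math

def coefficient_of_efficiency(observed, forecast):
    """Nash-Sutcliffe efficiency: 1 - SSE / total variance sum; 1.0 = perfect."""
    mean_obs = sum(observed) / len(observed)
    sse = sum((o - f) ** 2 for o, f in zip(observed, forecast))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - sse / ss_tot

def coefficient_of_correlation(observed, forecast):
    """Pearson correlation between observed and forecast series."""
    n = len(observed)
    mo, mf = sum(observed) / n, sum(forecast) / n
    cov = sum((o - mo) * (f - mf) for o, f in zip(observed, forecast))
    so = math.sqrt(sum((o - mo) ** 2 for o in observed))
    sf = math.sqrt(sum((f - mf) ** 2 for f in forecast))
    return cov / (so * sf)

obs = [2.0, 5.0, 9.0, 14.0, 8.0, 3.0]   # hourly rainfall, observed (mm)
fct = [2.5, 4.5, 10.0, 13.0, 7.5, 3.5]  # hourly rainfall, forecast (mm)
ce = coefficient_of_efficiency(obs, fct)
cc = coefficient_of_correlation(obs, fct)
```

A perfect forecast yields 1.0 for both metrics; the coefficient of efficiency turns negative when the forecast is worse than simply predicting the observed mean.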
42

Nayyar, Anand, Pijush Kanti Dutta Pramankit and Rajni Mohana. "Introduction to the Special Issue on Evolving IoT and Cyber-Physical Systems: Advancements, Applications, and Solutions". Scalable Computing: Practice and Experience 21, no. 3 (August 1, 2020): 347–48. http://dx.doi.org/10.12694/scpe.v21i3.1568.

Abstract
The Internet of Things (IoT) is regarded as a next-generation wave of Information Technology (IT) after the widespread emergence of the Internet and mobile communication technologies. IoT supports information exchange and networked interaction of appliances, vehicles and other objects, making sensing and actuation possible in a low-cost and smart manner. Cyber-physical systems (CPS), on the other hand, are engineered systems built upon the tight integration of cyber entities (e.g., computation, communication, and control) and physical things (natural and man-made systems governed by the laws of physics). IoT and CPS are not isolated technologies: IoT is the base or enabling technology for CPS, and CPS can be considered the mature development of IoT, completing the IoT notion and vision. Both are merged into a closed loop, providing mechanisms for conceptualizing and realizing all aspects of networked composed systems that are monitored and controlled by computing algorithms and are tightly coupled among users and the Internet. That is, the hardware and the software entities are intertwined, and they typically function on different time and location-based scales. In fact, the link between the cyber and the physical world is enabled by IoT (through sensors and actuators). CPS, which includes traditional embedded and control systems, is expected to be transformed by the evolving and innovative methodologies and engineering of IoT. Application areas of IoT and CPS include smart buildings, smart transport, automated vehicles, smart cities, smart grids, smart manufacturing, smart agriculture, smart healthcare, smart supply chains and logistics, etc. Though CPS and IoT overlap significantly, they differ in terms of engineering aspects.
Engineering IoT systems revolves around uniquely identifiable, internet-connected devices and embedded systems, whereas engineering CPS requires a strong emphasis on the relationship between computational aspects (complex software) and physical entities (hardware). Engineering CPS is challenging because there is no defined, fixed boundary and relationship between the cyber and physical worlds. In CPS, diverse constituent parts are composed and made to collaborate to create unified systems with global behaviour. These systems need assurance in terms of dependability, safety, security, efficiency, and adherence to real-time constraints. Hence, designing CPS requires knowledge of multidisciplinary areas such as sensing technologies, distributed systems, pervasive and ubiquitous computing, real-time computing, computer networking, control theory, signal processing, embedded systems, etc. CPS, along with the continuously evolving IoT, has posed several challenges. For example, the enormous amount of data collected from physical things makes Big Data management and analytics difficult, including data normalization, data aggregation, data mining, pattern extraction and information visualization. Similarly, future IoT and CPS need standardized abstractions and architectures that allow modular design and engineering of IoT and CPS in global and synergetic applications. Another challenging concern of IoT and CPS is the security and reliability of components and systems. Although IoT and CPS have attracted the attention of the research communities and several ideas and solutions have been proposed, there is still huge potential for innovative propositions to make the IoT and CPS vision successful.
The major challenges and research scopes include system design and implementation, computing and communication, system architecture and integration, application-based implementations, fault tolerance, designing efficient algorithms and protocols, availability and reliability, security and privacy, energy-efficiency and sustainability, etc. It is our great privilege to present Volume 21, Issue 3 of Scalable Computing: Practice and Experience. We received 30 research papers, of which 14 were selected for publication. The objective of this special issue is to explore and report recent advances and disseminate state-of-the-art research related to IoT, CPS and the enabling and associated technologies. The special issue presents new dimensions of research to researchers and industry professionals with regard to IoT and CPS. Vivek Kumar Prasad and Madhuri D Bhavsar, in the paper titled "Monitoring and Prediction of SLA for IoT based Cloud", described mechanisms for monitoring, using the concept of reinforcement learning, and for prediction of cloud resources, which form critical parts of cloud expertise in support of controlling and evolving IT resources, implemented using LSTM. Proper utilization of resources generates revenue for the provider and also increases the trust factor of the provider of cloud services. For experimental analysis, four parameters have been used, i.e., CPU utilization, disk read/write throughput and memory utilization. Kasture et al., in the paper titled "Comparative Study of Speaker Recognition Techniques in IoT Devices for Text Independent Negative Recognition", compared the performance of features used in state-of-the-art speaker recognition models and analysed variants of Mel frequency cepstrum coefficients (MFCC), predominantly used in feature extraction, which can be further incorporated and used in various smart devices.
Mahesh Kumar Singh and Om Prakash Rishi, in the paper titled "Event Driven Recommendation System for E-Commerce using Knowledge based Collaborative Filtering Technique", proposed a novel system that uses a knowledge base generated from a knowledge graph to identify the domain knowledge of users, items, and the relationships among these; a knowledge graph is a labelled multidimensional directed graph that represents the relationships among the users and the items. The proposed approach uses nearly all of the users' participation in the form of activities during navigation of the web site. Thus, the system captures the users' interest, which is beneficial for both seller and buyer. The proposed system is compared with baseline recommendation methods using three parameters, precision, recall and NDCG, through online and offline evaluation studies with user data, and it is observed that the proposed system outperforms the baseline systems. Benbrahim et al., in the paper titled "Deep Convolutional Neural Network with TensorFlow and Keras to Classify Skin Cancer", proposed a novel classification model to classify skin tumours in images using Deep Learning; the proposed system was tested on the HAM10000 dataset comprising 10,015 dermatoscopic images, and the results show an accuracy of 94.06% on the validation set and 93.93% on the test set. Devi B et al., in the paper titled "Deadlock Free Resource Management Technique for IoT-Based Post Disaster Recovery Systems", proposed a new class of techniques that do not perform stringent testing before allocating the resources but still ensure that the system is deadlock-free and the overhead is minimal. The proposed technique suggests reserving a portion of the resources to ensure no deadlock would occur. The correctness of the technique is proved in the form of theorems.
The average turnaround time is approximately 18% lower for the proposed technique than for Banker's algorithm, with an optimal overhead of O(m). Deep et al., in the paper titled "Access Management of User and Cyber-Physical Device in DBAAS According to Indian IT Laws Using Blockchain", proposed a novel blockchain solution to track the activities of employees managing the cloud. Employee authentication and authorization are managed through the blockchain server, and user-authentication-related data is stored in the blockchain. The proposed work helps cloud companies gain better control over their employees' activities, thus helping to prevent insider attacks on users and cyber-physical devices. Sumit Kumar and Jaspreet Singh, in the paper titled "Internet of Vehicles (IoV) over VANETS: Smart and Secure Communication using IoT", provided a detailed description of the Internet of Vehicles (IoV) with current applications, architectures, communication technologies, routing protocols and open issues. The researchers also elaborated on research challenges and the trade-off between security and privacy in the area of IoV. Deore et al., in the paper titled "A New Approach for Navigation and Traffic Signs Indication Using Map Integrated Augmented Reality for Self-Driving Cars", proposed a new approach to supplement the perception technology used in self-driving cars. The proposed approach uses Augmented Reality to create and augment artificial objects for navigational signs and traffic signals based on the vehicle's location. This approach helps navigate the vehicle even if the road infrastructure does not have very good sign indications and markings. The approach was tested locally by creating a local navigational system and a smartphone-based augmented reality app, and it performed better than the conventional method, as the objects were clearer in the frame, which made them easier for the object detector to detect. Bhardwaj et al.
in the paper titled "A Framework to Systematically Analyse the Trustworthiness of Nodes for Securing IoV Interactions" surveyed the literature on IoV and trust and proposed a Hybrid Trust model that separates malicious from trusted nodes to secure vehicle interactions in IoV. To test the model, simulations were conducted with varied threshold values, and the results show that the PDR of a trusted node is 0.63, higher than the PDR of a malicious node at 0.15. On the basis of PDR, the number of available hops and trust dynamics, the malicious nodes are identified and discarded. Saniya Zahoor and Roohie Naaz Mir, in the paper titled "A Parallelization Based Data Management Framework for Pervasive IoT Applications", highlighted recent studies and related information on data management for pervasive IoT applications with limited resources. The paper also proposes a parallelization-based data management framework for resource-constrained pervasive IoT applications. The proposed framework is compared with the sequential approach through simulations and empirical data analysis. The results show an improvement in energy, processing, and storage requirements for the processing of data on the IoT device in the proposed framework as compared to the sequential approach. Patel et al., in the paper titled "Performance Analysis of Video ON-Demand and Live Video Streaming Using Cloud Based Services", presented a review of video analysis over LVS & VoD video applications. The researchers compared different messaging brokers that help deliver each frame in a distributed pipeline, analyzing the impact of two message brokers on video analysis for LVS & VoD using AWS Elemental services. In addition, the researchers analysed the Kafka configuration parameters for reliability in full-service mode.
Saniya Zahoor and Roohie Naaz Mir, in the paper titled "Design and Modeling of Resource-Constrained IoT Based Body Area Networks", presented the design and modeling of a resource-constrained BAN system and discussed various BAN scenarios in the context of resource constraints. The researchers also proposed an Advanced Edge Clustering (AEC) approach to manage resources such as energy, storage, and processing of BAN devices while performing real-time data capture of critical health parameters and detection of abnormal patterns. The AEC approach is compared with the Stable Election Protocol (SEP) through simulations and empirical data analysis. The results show an improvement in energy, processing time and storage requirements for the processing of data on BAN devices in AEC as compared to SEP. Neelam Saleem Khan and Mohammad Ahsan Chishti, in the paper titled "Security Challenges in Fog and IoT, Blockchain Technology and Cell Tree Solutions: A Review", outlined major authentication issues in IoT, mapped their existing solutions and tabulated Fog and IoT security loopholes. Furthermore, this paper presents Blockchain, a decentralized distributed technology, as one of the solutions for authentication issues in IoT. In addition, the researchers discussed the strength of Blockchain technology, work done in this field and its adoption in the COVID-19 fight, and tabulated various challenges in Blockchain technology. The researchers also proposed the Cell Tree architecture as another solution to address some of the security issues in IoT, outlined its advantages over Blockchain technology and sketched future directions to stimulate further attempts in this area. Bhadwal et al., in the paper titled "A Machine Translation System from Hindi to Sanskrit Language Using Rule Based Approach", proposed a rule-based machine translation system to bridge the language barrier between Hindi and Sanskrit by converting any text in Hindi to Sanskrit.
The results are produced in the form of two confusion matrices, wherein a total of 50 random sentences and 100 tokens (Hindi words or phrases) were taken for system evaluation. The semantic evaluation of 100 tokens produces an accuracy of 94%, while the pragmatic analysis of 50 sentences produces an accuracy of around 86%. Hence, the proposed system can be used to understand the whole translation process and can further be employed as a tool for learning as well as teaching. Further, this application can be embedded in local-communication-based assisting Internet of Things (IoT) devices like Alexa or Google Assistant. Anshu Kumar Dwivedi and A. K. Sharma, in the paper titled "NEEF: A Novel Energy Efficient Fuzzy Logic Based Clustering Protocol for Wireless Sensor Network", proposed a deterministic novel energy-efficient fuzzy logic-based clustering protocol (NEEF) which considers primary and secondary factors in the fuzzy logic system while selecting cluster heads. After the selection of cluster heads, non-cluster-head nodes use fuzzy logic for prudent selection of their cluster head for cluster formation. NEEF is simulated and compared with two recent state-of-the-art protocols, namely SCHFTL and DFCR, under two scenarios. Simulation results unveil better performance through load balancing and improvements in terms of stability period, packets forwarded to the base station, average energy and extended lifetime.
43

Wang, Chien Chih, Cheng Ding Chang and Bernard C. Jiang. "Pattern Recognition of Multiscale Entropy Curve for ECG Signal Analysis". Applied Mechanics and Materials 195-196 (August 2012): 603–7. http://dx.doi.org/10.4028/www.scientific.net/amm.195-196.603.

Abstract
Higher complexity of the multiscale entropy (MSE) curve indicates that a physiological system has a better ability to adapt to environmental change. The traditional way to distinguish MSE curves of different complexity is to compare the areas under the MSE curves (AUC) by human judgment, but this becomes difficult when curves have similar AUCs or overlap. This paper proposes combining clustering with an MR control chart, using the group distances as the response for assessing the clustering of different combinations of MSE curves. Experimental analysis of ECG signals shows that the four features considered in this paper provide good recognition when clustering MSE curves.
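To make the quantities in the abstract concrete, the following is a minimal sketch (not the paper's method) of building an MSE curve by coarse-graining plus sample entropy, and of its AUC. The parameter choices m=2 and r = 0.2 × standard deviation follow common MSE practice and are assumptions here, as is the synthetic white-noise signal.

```python
# Sketch: multiscale entropy (MSE) curve and its area under the curve (AUC).
# Coarse-graining and SampEn parameters are common defaults, assumed here.
import math
import random

def coarse_grain(signal, scale):
    """Average non-overlapping windows of length `scale`."""
    n = len(signal) // scale
    return [sum(signal[i * scale:(i + 1) * scale]) / scale for i in range(n)]

def sample_entropy(signal, m=2, r=None):
    """SampEn = -ln(A/B): A, B count template matches of length m+1 and m."""
    if r is None:  # tolerance: 0.2 x standard deviation of this series
        mean = sum(signal) / len(signal)
        r = 0.2 * math.sqrt(sum((x - mean) ** 2 for x in signal) / len(signal))
    def count(mm):
        templates = [signal[i:i + mm] for i in range(len(signal) - mm + 1)]
        hits = 0
        for i in range(len(templates)):
            for j in range(i + 1, len(templates)):
                # Chebyshev distance between templates must stay within r.
                if max(abs(a - b) for a, b in zip(templates[i], templates[j])) <= r:
                    hits += 1
        return hits
    b, a = count(m), count(m + 1)
    return -math.log(a / b) if a > 0 and b > 0 else float("inf")

def mse_curve(signal, max_scale=5):
    return [sample_entropy(coarse_grain(signal, s)) for s in range(1, max_scale + 1)]

random.seed(0)
noise = [random.gauss(0, 1) for _ in range(600)]  # stand-in for an ECG-derived series
curve = mse_curve(noise)
auc = sum(curve)  # unit-spaced scales, so a plain sum serves as a coarse AUC
```

For white noise the curve typically decreases with scale, which is exactly why the paper compares whole curves rather than single-scale entropy values.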
44

Vantas, Konstantinos and Epaminondas Sidiropoulos. "Intra-Storm Pattern Recognition through Fuzzy Clustering". Hydrology 8, no. 2 (March 25, 2021): 57. http://dx.doi.org/10.3390/hydrology8020057.

Abstract
The identification and recognition of temporal rainfall patterns is important and useful not only for climatological studies, but mainly for supporting rainfall–runoff modeling and water resources management. Clustering techniques applied to rainfall data provide meaningful ways of producing concise and inclusive pattern classifications. In this paper, a time series of rainfall data from the Greek National Bank of Hydrological and Meteorological Information is delineated into independent rainstorms and subjected to cluster analysis in order to identify and extract representative patterns. The computational process is a custom-developed, domain-specific algorithm that produces temporal rainfall patterns using common characteristics from the data via fuzzy clustering in which (a) every storm may belong to more than one cluster, allowing for some equivocation in the data, (b) the number of clusters is not assumed known a priori but is determined solely from the data and, finally, (c) intra-storm and seasonal temporal distribution patterns are produced. Traditional classification methods rely on prior empirical knowledge, while the proposed method is fully unsupervised, not presupposing any external elements and giving results superior to the former.
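As a rough illustration of the fuzzy clustering idea described above, here is a generic fuzzy c-means sketch, not the paper's custom algorithm (which also infers the number of clusters from the data). Each point, standing in for a storm profile, receives a membership degree in every cluster; all names and data are illustrative.

```python
# Minimal fuzzy c-means: soft memberships instead of hard cluster labels.
# Generic textbook form, assumed parameters (c clusters, fuzzifier m = 2).
import random

def fuzzy_c_means(points, c=2, fuzzifier=2.0, iters=50, seed=0):
    rng = random.Random(seed)
    # Random row-stochastic membership matrix u[i][k]: point i's degree in cluster k.
    u = [[rng.random() for _ in range(c)] for _ in points]
    u = [[m / sum(row) for m in row] for row in u]
    dim = len(points[0])
    for _ in range(iters):
        # Cluster centers as membership-weighted means.
        centers = []
        for k in range(c):
            w = [u[i][k] ** fuzzifier for i in range(len(points))]
            centers.append([sum(w[i] * points[i][d] for i in range(len(points))) / sum(w)
                            for d in range(dim)])
        # Membership update from relative inverse squared distances.
        for i, p in enumerate(points):
            d2 = [max(sum((p[d] - centers[k][d]) ** 2 for d in range(dim)), 1e-12)
                  for k in range(c)]
            for k in range(c):
                u[i][k] = 1.0 / sum((d2[k] / d2[j]) ** (1.0 / (fuzzifier - 1))
                                    for j in range(c))
    return centers, u

# Two well-separated groups of 2-D points (e.g. normalized storm-shape features).
pts = [(0.1, 0.2), (0.2, 0.1), (0.15, 0.18), (0.9, 0.8), (0.8, 0.9), (0.85, 0.82)]
centers, memberships = fuzzy_c_means(pts, c=2)
```

Each membership row sums to one, so a storm near a cluster boundary is represented by comparable degrees in both clusters rather than forced into a single class.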
45

Ospina-Dávila, Y. M. and Mauricio Orozco-Alzate. "Parsimonious design of pattern recognition systems for slope stability analysis". Earth Science Informatics 13, no. 2 (December 24, 2019): 523–36. http://dx.doi.org/10.1007/s12145-019-00429-5.

46

Sánchez, J. S. "PATTERN RECOGNITION AND IMAGE ANALYSIS IN CYBERNETIC APPLICATIONS". Cybernetics and Systems 35, no. 1 (January 2004): 1–2. http://dx.doi.org/10.1080/716100285.

47

Naveen, V. Jagan, K. Krishna Kishore and P. Rajesh Kumar. "Human Ear Pattern Recognition System". International Journal of Advanced Research in Computer Science and Software Engineering 7, no. 8 (August 30, 2017): 117. http://dx.doi.org/10.23956/ijarcsse.v7i8.35.

Abstract
In the modern world, human recognition systems play an important role in improving security by reducing the chances of evasion. The human ear is used for person identification. In an empirical study of the human ear, 10,000 images were examined to establish the uniqueness of the ear. Ear-based systems are among the few biometric systems that provide stable characteristics over age. In this paper, ear images are taken from the Mathematical Analysis of Images (AMI) ear database, and ear pattern recognition is analyzed based on the expectation-maximization algorithm and the k-means algorithm. Patterns of ears affected by different types of noise are recognized based on principal component analysis (PCA).
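The PCA step mentioned in the abstract can be sketched as follows. This is a generic eigen-decomposition implementation applied to synthetic vectors, not the paper's pipeline; in the paper, the rows would be flattened ear images from the AMI database, and all names and dimensions here are illustrative.

```python
# Sketch of PCA via eigen-decomposition of the covariance matrix, as commonly
# used to project flattened image vectors onto a low-dimensional subspace.
import numpy as np

def pca_fit(X, n_components):
    """Return (mean, components) so that scores = (X - mean) @ components.T."""
    mean = X.mean(axis=0)
    Xc = X - mean
    # Eigenvectors with the largest eigenvalues are the principal components.
    cov = np.cov(Xc, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)       # ascending eigenvalues
    order = np.argsort(eigvals)[::-1][:n_components]
    return mean, eigvecs[:, order].T             # shape (n_components, n_features)

def pca_transform(X, mean, components):
    return (X - mean) @ components.T

rng = np.random.default_rng(0)
# 40 synthetic 16-dim "images" that actually vary along only 2 latent directions.
latent = rng.normal(size=(40, 2))
mixing = rng.normal(size=(2, 16))
X = latent @ mixing + 0.01 * rng.normal(size=(40, 16))
mean, comps = pca_fit(X, n_components=2)
scores = pca_transform(X, mean, comps)
```

For recognition, nearest-neighbour matching is then typically performed in the low-dimensional score space, which also suppresses part of the image noise.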
48

Schlipsing, Marc, Jan Salmen, Marc Tschentscher and Christian Igel. "Adaptive pattern recognition in real-time video-based soccer analysis". Journal of Real-Time Image Processing 13, no. 2 (February 25, 2014): 345–61. http://dx.doi.org/10.1007/s11554-014-0406-1.

49

Chen, Hong Tao, Deng Wan Li and Pan Fu. "Tool Wear Prediction Based on Fuzzy Cluster Analysis". Advanced Materials Research 490-495 (March 2012): 1589–94. http://dx.doi.org/10.4028/www.scientific.net/amr.490-495.1589.

Abstract
There are several stages of tool wear in the turning process. The theory and algorithm of fuzzy cluster analysis (FCA) are applied to the study of the CNC turning tool wear state. We collect force and vibration signals at each stage. Using wavelet filtering and power spectrum methods, changes in typical parameters are detected. We extract signal features for fuzzy clustering. Experimental results show that tool wear prediction in turning is achieved by using this pattern recognition method.
50

Banerjee, Saibal and Azriel Rosenfeld. "Model-based cluster analysis". Pattern Recognition 26, no. 6 (June 1993): 963–74. http://dx.doi.org/10.1016/0031-3203(93)90061-z.
