To see the other types of publications on this topic, follow the link: Gaussian mixture model (GMM).

Journal articles on the topic 'Gaussian mixture model (GMM)'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the top 50 journal articles for your research on the topic 'Gaussian mixture model (GMM).'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles in a wide variety of disciplines and organise your bibliography correctly.

1

Zhang, Lianjun, Chuangmin Liu, and Craig J. Davis. "A mixture model-based approach to the classification of ecological habitats using Forest Inventory and Analysis data." Canadian Journal of Forest Research 34, no. 5 (2004): 1150–56. http://dx.doi.org/10.1139/x04-005.

Full text
Abstract:
A Gaussian mixture model (GMM) is used to classify Forest Inventory and Analysis (FIA) plots into six ecological habitats in the northeastern USA. The GMM approach captures intra-class variation by modeling each habitat class as a mixture of subclasses of Gaussian distributions. The classification is achieved based on the appropriate posterior probability. The GMM classifier outperforms a traditional statistical method (i.e., linear discriminant analysis or LDA), and produces similar overall accuracy rates to a commonly used neural network model (i.e., multi-layer perceptrons or MLP). For the classifications of individual ecological habitats, however, MLP produces better (or same) producers' classification accuracies for five of the six ecological habitats than does GMM. But the GMM's accuracy rates are more consistent (92%–97%) across the six ecological habitats than those of the MLP model (82%–99%). This study shows that GMM offers an attractive alternative for modeling the complex stand structure and relationships between variables in mixed-species forest stands.
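For readers who want to see the idea in practice, here is a minimal sketch of this kind of classifier, assuming scikit-learn; `X_train`, `y_train`, and `X_test` are hypothetical arrays of FIA-style plot features and habitat labels, not data from the paper. One GMM of Gaussian subclasses is fitted per habitat, and a plot is assigned to the class with the largest posterior.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_class_gmms(X, y, n_subclasses=3):
    """One GMM of Gaussian subclasses per habitat class, plus class priors."""
    models, priors = {}, {}
    for c in np.unique(y):
        Xc = X[y == c]
        models[c] = GaussianMixture(n_components=n_subclasses,
                                    covariance_type="full",
                                    random_state=0).fit(Xc)
        priors[c] = len(Xc) / len(X)
    return models, priors

def predict(models, priors, X):
    """Assign each plot to the class with the largest (log) posterior."""
    classes = sorted(models)
    log_post = np.column_stack([np.log(priors[c]) + models[c].score_samples(X)
                                for c in classes])
    return np.array(classes)[np.argmax(log_post, axis=1)]

# Hypothetical usage with plot features X_train/X_test and habitat labels y_train:
# models, priors = fit_class_gmms(X_train, y_train)
# y_hat = predict(models, priors, X_test)
```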
APA, Harvard, Vancouver, ISO, and other styles
2

Jin, Qiwen, Yong Ma, Erting Pan, et al. "Hyperspectral Unmixing with Gaussian Mixture Model and Spatial Group Sparsity." Remote Sensing 11, no. 20 (2019): 2434. http://dx.doi.org/10.3390/rs11202434.

Full text
Abstract:
In recent years, endmember variability has received much attention in the field of hyperspectral unmixing. To address the inaccuracy of the endmember signature, the endmembers are usually assumed to follow a statistical distribution. However, such distribution-based methods use the spectral information alone and do not fully exploit possible local spatial correlation. When pixels lie in an inhomogeneous region, the abundances of neighboring pixels do not share the same prior constraints. Thus, in this paper, to achieve better abundance estimation performance, a method based on the Gaussian mixture model (GMM) and a spatial group sparsity constraint is proposed. To fully exploit the group structure, superpixel segmentation (SS) is used as preprocessing to generate the spatial groups. Then, a GMM is used to model the endmember distribution, and the spatial group sparsity is incorporated as a mixed-norm regularization into the objective function. Finally, under the Bayesian framework, the conditional density function leads to a standard maximum a posteriori (MAP) problem, which can be solved using generalized expectation-maximization (GEM). Experiments on simulated and real hyperspectral data demonstrate that the proposed algorithm has higher unmixing precision than other state-of-the-art methods.
APA, Harvard, Vancouver, ISO, and other styles
3

Nagesh, A. "New Feature Vectors using GFCC for Speaker Identification." International Journal of Emerging Research in Management and Technology 6, no. 8 (2018): 243. http://dx.doi.org/10.23956/ijermt.v6i8.146.

Full text
Abstract:
The feature vectors of a speaker identification (SID) system play a crucial role in its overall performance. There are many feature extraction methods based on MFCC, but the ultimate goal is to maximize the performance of the SID system. The objective of this paper is to derive a new set of feature vectors based on Gammatone Frequency Cepstral Coefficients (GFCC) and a Gaussian mixture model (GMM) for speaker identification. MFCCs are the default feature vectors for speaker recognition, but they are not very robust in the presence of additive noise. GFCC features have shown very good robustness against noise and acoustic change in recent studies. The main idea is to use GFCC-based features with GMM modeling to improve overall speaker identification performance in low signal-to-noise ratio (SNR) conditions.
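For illustration, a bare-bones closed-set speaker identification loop of the sort this abstract outlines, assuming scikit-learn and that GFCC frames are already extracted; `gfcc_by_speaker` and `test_gfcc` are hypothetical inputs from a separate GFCC front-end.

```python
from sklearn.mixture import GaussianMixture

def enroll(gfcc_by_speaker, n_mixtures=32):
    """Train one diagonal-covariance GMM per enrolled speaker on GFCC frames."""
    return {spk: GaussianMixture(n_components=n_mixtures, covariance_type="diag",
                                 random_state=0).fit(frames)
            for spk, frames in gfcc_by_speaker.items()}

def identify(models, test_gfcc):
    """Pick the speaker whose GMM gives the highest mean frame log-likelihood."""
    scores = {spk: gmm.score(test_gfcc) for spk, gmm in models.items()}
    return max(scores, key=scores.get)

# Hypothetical usage, with GFCC frames from a separate front-end:
# models = enroll(gfcc_by_speaker)   # dict: speaker -> (n_frames, n_coeffs) array
# print(identify(models, test_gfcc))
```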
APA, Harvard, Vancouver, ISO, and other styles
4

Wang, Dong, Xiaodong Wang, and Shaohe Lv. "An Overview of End-to-End Automatic Speech Recognition." Symmetry 11, no. 8 (2019): 1018. http://dx.doi.org/10.3390/sym11081018.

Full text
Abstract:
Automatic speech recognition, especially large-vocabulary continuous speech recognition, is an important issue in the field of machine learning. For a long time, the hidden Markov model (HMM)-Gaussian mixture model (GMM) framework was the mainstream approach to speech recognition. Recently, however, the HMM-deep neural network (DNN) model and end-to-end models using deep learning have achieved performance beyond HMM-GMM. Both using deep learning techniques,
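A small sketch of the classic HMM-GMM acoustic model mentioned above, using the third-party hmmlearn package (an assumption; the paper does not prescribe a toolkit), with random placeholder frames standing in for MFCC features.

```python
import numpy as np
from hmmlearn.hmm import GMMHMM   # third-party package, assumed installed

# Random placeholder frames standing in for 13-dimensional MFCC features,
# three "utterances" of 200 frames each.
rng = np.random.default_rng(0)
X = rng.standard_normal((600, 13))
lengths = [200, 200, 200]

# Three hidden states, each emitting from a 4-component diagonal-covariance GMM.
model = GMMHMM(n_components=3, n_mix=4, covariance_type="diag",
               n_iter=20, random_state=0)
model.fit(X, lengths)
print(model.score(X[:200]))        # log-likelihood of the first utterance
```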
APA, Harvard, Vancouver, ISO, and other styles
5

Satyanand, Singh, and Singh Pragya. "High level speaker specific features modeling in automatic speaker recognition system." International Journal of Electrical and Computer Engineering (IJECE) 10, no. 2 (2020): 1859–67. https://doi.org/10.11591/ijece.v10i2.pp1859-1867.

Full text
Abstract:
Spoken words convey several levels of information. At the primary level, speech conveys words or spoken messages, but at the secondary level, it also reveals information about the speaker. This work is based on high-level speaker-specific features and statistical speaker modeling techniques that express the characteristic sound of the human voice. Hidden Markov model (HMM), Gaussian mixture model (GMM), and linear discriminant analysis (LDA) models are used to build automatic speaker recognition (ASR) systems that are computationally inexpensive and can recognize speakers regardless of what is said. The performance of the ASR system is evaluated from clear speech across a wide range of speech quality using the standard TIMIT speech corpus. The ASR efficiency of the HMM-, GMM-, and LDA-based modeling techniques is 98.8%, 99.1%, and 98.6%, and the equal error rate (EER) is 4.5%, 4.4%, and 4.55%, respectively. The EER improvement of the GMM-based ASR system compared with HMM and LDA is 4.25% and 8.51%, respectively.
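Equal error rates like the ones quoted above are typically computed from verification scores roughly as follows; this is a sketch assuming scikit-learn, and the scores and labels are synthetic placeholders, not data from the paper.

```python
import numpy as np
from sklearn.metrics import roc_curve

# Synthetic verification scores: genuine trials score higher than impostor trials.
rng = np.random.default_rng(0)
scores = np.concatenate([rng.normal(2.0, 1.0, 500), rng.normal(0.0, 1.0, 500)])
labels = np.concatenate([np.ones(500), np.zeros(500)])

fpr, tpr, _ = roc_curve(labels, scores)
fnr = 1 - tpr
eer = fpr[np.nanargmin(np.abs(fnr - fpr))]   # operating point where FAR ~ FRR
print(f"EER = {eer:.3f}")
```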
APA, Harvard, Vancouver, ISO, and other styles
6

Wang, Ying. "Solving Multi-Instance Visual Scene Recognition with Classifier Ensemble Based on Unsupervised Clustering." Applied Mechanics and Materials 415 (September 2013): 338–44. http://dx.doi.org/10.4028/www.scientific.net/amm.415.338.

Full text
Abstract:
This paper proposes a new image multi-instance (MI) bag generating method, which models an image with a Gaussian mixture model (GMM). The generated GMM is treated as an MI bag, of which the color and locally stable invariant (SIFT) components are the instances. Agglomerative Information Bottleneck clustering is employed to transform the MIL problem into a single-instance learning problem so that single-instance classifiers can be used for classification. Finally, ensemble learning is involved to further enhance the classifiers' generalization ability. Experimental results demonstrate that the performance of the proposed framework for image recognition is superior on average to several common MI algorithms in a 5-category scene recognition task. Keywords: multi-instance learning; Gaussian mixture model; AIB clustering; image modeling; single-instance bag; ensemble classifier; scene recognition.
APA, Harvard, Vancouver, ISO, and other styles
7

Sun, Qi, Liwen Jiang, and Haitao Xu. "Expectation-Maximization Algorithm of Gaussian Mixture Model for Vehicle-Commodity Matching in Logistics Supply Chain." Complexity 2021 (January 13, 2021): 1–11. http://dx.doi.org/10.1155/2021/9305890.

Full text
Abstract:
A vehicle-commodity matching problem (VCMP) is presented for service providers to reduce the cost of the logistics system. The vehicle classification model is built as a Gaussian mixture model (GMM), and the expectation-maximization (EM) algorithm is designed to solve the parameter estimation of the GMM. A nonlinear mixed-integer programming model is constructed to minimize the total cost of the VCMP. The matching process between vehicle and commodity is realized by GMM-EM as a preprocessing step of the solution. A vehicle-commodity matching platform for the VCMP is designed to reduce and eliminate the information asymmetry between supply and demand so that order allocation can work at the right time and the right place and use the optimal vehicle-commodity matching solution. Furthermore, a numerical experiment on an e-commerce supply chain shows that a hybrid evolutionary algorithm (HEA) is superior to the traditional method, which provides a decision-making reference for e-commerce VCMP.
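To make the GMM parameter-estimation step concrete, here is a compact, self-contained EM loop for a one-dimensional two-component GMM; the data and component count are illustrative (e.g., a scalar vehicle attribute such as load capacity) and are not taken from the paper.

```python
import numpy as np

def em_gmm_1d(x, k=2, n_iter=100):
    """Plain EM for a k-component univariate GMM; returns weights, means, variances."""
    rng = np.random.default_rng(0)
    w = np.full(k, 1.0 / k)
    mu = rng.choice(x, k, replace=False)
    var = np.full(k, x.var())
    for _ in range(n_iter):
        # E-step: responsibilities r[i, j] = P(component j | x_i)
        dens = np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        r = w * dens
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate mixture weights, means and variances
        nk = r.sum(axis=0)
        w = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return w, mu, var

# Two synthetic vehicle groups distinguished by a scalar attribute (illustrative):
x = np.concatenate([np.random.normal(5, 1, 300), np.random.normal(20, 3, 200)])
print(em_gmm_1d(x))
```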
APA, Harvard, Vancouver, ISO, and other styles
8

Bandi, Hari, Dimitris Bertsimas, and Rahul Mazumder. "Learning a Mixture of Gaussians via Mixed-Integer Optimization." INFORMS Journal on Optimization 1, no. 3 (2019): 221–40. http://dx.doi.org/10.1287/ijoo.2018.0009.

Full text
Abstract:
We consider the problem of estimating the parameters of a multivariate Gaussian mixture model (GMM) given access to n samples that are believed to have come from a mixture of multiple subpopulations. State-of-the-art algorithms used to recover these parameters use heuristics to either maximize the log-likelihood of the sample or try to fit first few moments of the GMM to the sample moments. In contrast, we present here a novel mixed-integer optimization (MIO) formulation that optimally recovers the parameters of the GMM by minimizing a discrepancy measure (either the Kolmogorov–Smirnov or the total variation distance) between the empirical distribution function and the distribution function of the GMM whenever the mixture component weights are known. We also present an algorithm for multidimensional data that optimally recovers corresponding means and covariance matrices. We show that the MIO approaches are practically solvable for data sets with n in the tens of thousands in minutes and achieve an average improvement of 60%–70% and 50%–60% on mean absolute percentage error in estimating the means and the covariance matrices, respectively, over the expectation–maximization (EM) algorithm independent of the sample size n. As the separation of the Gaussians decreases and, correspondingly, the problem becomes more difficult, the edge in performance in favor of the MIO methods widens. Finally, we also show that the MIO methods outperform the EM algorithm with an average improvement of 4%–5% on the out-of-sample accuracy for real-world data sets.
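To make the discrepancy measure concrete, here is a sketch of the Kolmogorov-Smirnov distance between the empirical CDF of one-dimensional data and the CDF of a candidate GMM with known weights, assuming SciPy; the mixed-integer search over parameters described in the paper is not reproduced.

```python
import numpy as np
from scipy.stats import norm

def ks_distance(x, weights, means, sds):
    """Kolmogorov-Smirnov distance between data and a 1-D GMM CDF."""
    x = np.sort(np.asarray(x))
    n = len(x)
    cdf = sum(w * norm.cdf(x, m, s) for w, m, s in zip(weights, means, sds))
    d_plus = np.max(np.arange(1, n + 1) / n - cdf)
    d_minus = np.max(cdf - np.arange(0, n) / n)
    return max(d_plus, d_minus)

# Illustrative check with synthetic data and the true parameters (weights known):
x = np.concatenate([np.random.normal(0, 1, 500), np.random.normal(4, 1, 500)])
print(ks_distance(x, weights=[0.5, 0.5], means=[0, 4], sds=[1, 1]))
```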
APA, Harvard, Vancouver, ISO, and other styles
9

Chen, Gang, Binjie Hou, and Tiangang Lei. "A new Monte Carlo sampling method based on Gaussian Mixture Model for imbalanced data classification." Mathematical Biosciences and Engineering 20, no. 10 (2023): 17866–85. http://dx.doi.org/10.3934/mbe.2023794.

Full text
Abstract:
Imbalanced data classification has been a major topic in the machine learning community. Different approaches have been taken to solve the issue in recent years, with much attention given to data-level and algorithm-level techniques. However, existing methods often generate samples in specific regions without considering the complexity of imbalanced distributions. This can lead learning models to overemphasize certain difficult factors in the minority data. In this paper, a Monte Carlo sampling algorithm based on the Gaussian mixture model (MCS-GMM) is proposed. In MCS-GMM, we use a Gaussian mixture model to fit the distribution of the imbalanced data and apply a Monte Carlo algorithm to generate new data. Then, in order to reduce the impact of data overlap, the three-sigma rule is used to divide the data into four types, and the weight of each minority-class instance is computed based on its neighbors and the probability density function. Based on experiments conducted on Knowledge Extraction based on Evolutionary Learning (KEEL) datasets, our method has been shown to be effective and to outperform existing approaches such as the Synthetic Minority Over-sampling Technique (SMOTE).
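A stripped-down sketch of the core resampling idea, assuming scikit-learn: fit a GMM to the minority class and draw synthetic samples from it. The neighbour weighting and three-sigma partitioning of MCS-GMM are omitted, and `X`, `y` in the usage comment are hypothetical.

```python
from sklearn.mixture import GaussianMixture

def gmm_oversample(X_min, n_new, n_components=3):
    """Fit a GMM to the minority class and draw n_new synthetic samples from it."""
    gmm = GaussianMixture(n_components=n_components, random_state=0).fit(X_min)
    X_new, _ = gmm.sample(n_new)
    return X_new

# Hypothetical usage for a binary problem where class 1 is the minority:
# X_syn = gmm_oversample(X[y == 1], n_new=(y == 0).sum() - (y == 1).sum())
```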
APA, Harvard, Vancouver, ISO, and other styles
10

Deng, Lei, and Yong Gao. "Gammachirp Filter Banks Applied in Roust Speaker Recognition Based GMM-UBM Classifier." International Arab Journal of Information Technology 17, no. 2 (2019): 170–77. http://dx.doi.org/10.34028/iajit/17/2/4.

Full text
Abstract:
In this paper, the authors propose an auditory feature extraction algorithm to improve the performance of speaker recognition systems in noisy environments. In this algorithm, the Gammachirp filter bank is adapted to simulate the auditory model of the human cochlea. In addition, the following three techniques are applied: the cube-root compression method, the Relative Spectral Filtering technique (RASTA), and the Cepstral Mean and Variance Normalization algorithm (CMVN). Subsequently, based on the Gaussian Mixture Model-Universal Background Model (GMM-UBM) framework, a simulated experiment was conducted. The experimental results imply that speaker recognition systems with the new auditory feature have better robustness and recognition performance than Mel-Frequency Cepstral Coefficients (MFCC), Relative Spectral-Perceptual Linear Prediction (RASTA-PLP), Cochlear Filter Cepstral Coefficients (CFCC) and Gammatone Frequency Cepstral Coefficients (GFCC).
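A rough GMM-UBM scoring sketch, assuming scikit-learn and pre-extracted feature frames; production systems usually MAP-adapt the UBM to each speaker rather than training speaker models from scratch, so this shows only the scoring skeleton, and the function names and inputs are hypothetical.

```python
from sklearn.mixture import GaussianMixture

def train_ubm(background_frames, n_mixtures=64):
    """Universal background model trained on pooled features from many speakers."""
    return GaussianMixture(n_components=n_mixtures, covariance_type="diag",
                           random_state=0).fit(background_frames)

def llr_score(target_gmm, ubm, test_frames):
    """Average per-frame log-likelihood ratio of the target model vs. the UBM."""
    return target_gmm.score(test_frames) - ubm.score(test_frames)

# A claim is accepted when llr_score(...) exceeds a threshold tuned on
# development data; full systems usually MAP-adapt the UBM per speaker.
```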
APA, Harvard, Vancouver, ISO, and other styles
11

Tang, Lin, Shane Halloran, Jian Qing Shi, Yu Guan, Chunzheng Cao, and Janet Eyre. "Evaluating upper limb function after stroke using the free-living accelerometer data." Statistical Methods in Medical Research 29, no. 11 (2020): 3249–64. http://dx.doi.org/10.1177/0962280220922259.

Full text
Abstract:
Accelerometer devices are becoming efficient tools in clinical studies for automatically measuring the activities of daily living. Such data provide a time series describing the activity level at every second and display a subject's activity pattern throughout a day. However, the analysis of such data is very challenging due to the large number of observations produced each second and the variability among subjects. The purpose of this study is to develop efficient statistical analysis techniques for predicting the recovery level of upper limb function after stroke based on free-living accelerometer data. We propose a Gaussian mixture model (GMM)-based method for clustering and extracting new features to capture the information contained in the raw data. A nonlinear mixed effects model with a Gaussian process prior for the random effects is developed as the predictive model for evaluating the recovery level of upper limb function. Results of applying the method to accelerometer data from patients after stroke are presented.
APA, Harvard, Vancouver, ISO, and other styles
12

Li, Xi, Zhangyong Li, Dewei Yang, Lisha Zhong, Lian Huang, and Jinzhao Lin. "Research on Finger Vein Image Segmentation and Blood Sampling Point Location in Automatic Blood Collection." Sensors 21, no. 1 (2020): 132. http://dx.doi.org/10.3390/s21010132.

Full text
Abstract:
In the automatic fingertip blood sampling process, when the blood sampling point is located in the fingertip venous area, the amount of bleeding increases greatly without squeezing. In order to accurately locate the blood sampling point in the venous area, we propose a new finger vein image segmentation approach based on the Gabor transform and a Gaussian mixture model (GMM). First, the Gabor filter parameters can be set adaptively according to the differential excitation of the image, and the local binary pattern (LBP) is used to fuse the same-scale, multi-orientation Gabor features of the image. Then, finger vein image segmentation is achieved by the Gabor-GMM system and optimized by a max-flow min-cut method based on the relative entropy of the foreground and the background. Finally, the blood sampling point can be localized with corner detection. The experimental results show that the proposed approach performs well in segmenting finger vein images, with an average segmentation accuracy of 91.6%.
APA, Harvard, Vancouver, ISO, and other styles
13

Zhang, Zhenyu, Jian Wang, Zhiyuan Li, Youlong Zhao, Ruisheng Wang, and Ayman Habib. "Optimization Method of Airborne LiDAR Individual Tree Segmentation Based on Gaussian Mixture Model." Remote Sensing 14, no. 23 (2022): 6167. http://dx.doi.org/10.3390/rs14236167.

Full text
Abstract:
Forests are the main part of the terrestrial ecosystem. Airborne LiDAR is fast, comprehensive, penetrating, and contactless and can depict 3D canopy information with high efficiency and accuracy. Therefore, it plays an important role in forest ecological protection, tree species recognition, carbon sink calculation, etc. Accurate recognition of individual trees in forests is a key step for various applications. In practice, however, the accuracy of individual tree segmentation (ITS) is often compromised by under-segmentation due to the diverse species, obstruction and understory trees typical of a high-density multistoried mixed forest area. Therefore, this paper proposes an ITS optimization method based on a Gaussian mixture model for airborne LiDAR data. First, the mean shift (MS) algorithm is used for the initial ITS of the pre-processed airborne LiDAR data. Next, under-segmented samples are extracted by integrated learning, normally segmented samples are classified by morphological approximation, and the approximate distribution uncertainty of the normal samples is described with a covariance matrix. Finally, the class composition among the under-segmented samples is determined, and the under-segmented samples are re-segmented using Gaussian mixture model (GMM) clustering, in light of the optimal covariance matrix of the corresponding categories. Experiments with two datasets, Trento and Qingdao, resulted in ITS recalls of 94% and 96%, accuracies of 82% and 91%, and F-scores of 0.87 and 0.93. Compared with the MS algorithm, our method is more accurate and less likely to under-segment individual trees in many cases. It can provide data support for the management and conservation of high-density multistoried mixed forest areas.
APA, Harvard, Vancouver, ISO, and other styles
14

Guo, Yao, and Hongyan Zhu. "Joint short-time speaker recognition and tracking using sparsity-based source detection." Acta Acustica 7 (2023): 10. http://dx.doi.org/10.1051/aacus/2023004.

Full text
Abstract:
A random finite set-based sequential Monte–Carlo tracking method is proposed to track multiple acoustic sources in indoor scenarios. The proposed method can improve tracking performance by introducing recognized speaker identities from the received signals. At the front-end, the degenerate unmixing estimation technique (DUET) is employed to separate the mixed signals, and the time delay of arrival (TDOA) is measured. In addition, a criterion to select the reliable microphone pair is designed to quickly obtain accurate speaker identities from the mixed signals, and the Gaussian mixture model universal background model (GMM-UBM) is employed to train the speaker model. In the tracking step, the update of the weight for each particle is derived after introducing the recognized speaker identities, which results in better association between the measurements and sources. Simulation results demonstrate that the proposed method can improve the accuracy of the filter states and discriminate the sources close to each other.
APA, Harvard, Vancouver, ISO, and other styles
15

Ren, Hang, and Taotao Hu. "An Adaptive Feature Selection Algorithm for Fuzzy Clustering Image Segmentation Based on Embedded Neighbourhood Information Constraints." Sensors 20, no. 13 (2020): 3722. http://dx.doi.org/10.3390/s20133722.

Full text
Abstract:
This paper addresses the lack of robustness of feature selection algorithms for fuzzy clustering segmentation with the Gaussian mixture model. Assuming that the neighbourhood pixels and the centre pixels obey the same distribution, a Markov method is introduced to construct the prior probability distribution and achieve the membership degree regularisation constraint for clustering sample points. Then, a noise smoothing factor is introduced to optimise the prior probability constraint. Second, a power index is constructed by combining the classification membership degree and the prior probability, since the Kullback–Leibler (KL) divergence of the noise smoothing factor is used to supervise the prior probability; this probability is embedded into Fuzzy Superpixels Fuzzy C-means (FSFCM) as a regular factor. This paper proposes a fuzzy clustering image segmentation algorithm based on an adaptive feature selection Gaussian mixture model with neighbourhood information constraints. To verify the segmentation performance and anti-noise robustness of the improved algorithm, the fuzzy C-means (FCM) clustering algorithm, FSFCM, the spatially variant finite mixture model (SVFMM), EGFMM, the extended Gaussian mixture model (EGMM), the adaptive feature selection robust fuzzy clustering segmentation algorithm (AFSFCM), the fast and robust spatially constrained Gaussian mixture model (GMM) for image segmentation (FRSCGMM), and the improved method are used to segment grey images containing Gaussian noise, salt-and-pepper noise, multiplicative noise and mixed noise. The peak signal-to-noise ratio (PSNR) and the misclassification rate (MCR) are used as the theoretical basis for assessing the segmentation results. The indicators of the improved algorithm proposed in this paper are the best among the compared methods: it yields PSNR increases of 0.1272–12.9803 dB, 1.5501–13.4396 dB, 1.9113–11.2613 dB and 1.0233–10.2804 dB over the other methods, and the MCR decreases by 0.32–37.32%, 5.02–41.05%, 0.3–21.79% and 0.9–30.95% compared with the other algorithms. It is verified that the segmentation results of the improved algorithm have good regional consistency and strong anti-noise robustness, and they meet the needs of noisy image segmentation.
APA, Harvard, Vancouver, ISO, and other styles
16

Arunachalam, Manasha, Siddhaarth Sekar, Annastasia M. Erdmann, V. V. Sajith Variyar, and Ramesh Sivanpillai. "Comparative Analysis of Machine Learning Algorithms and Statistical Techniques for Data Analysis in Crop Growth Monitoring with NDVI." International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLVIII-M-5-2024 (March 12, 2025): 15–20. https://doi.org/10.5194/isprs-archives-xlviii-m-5-2024-15-2025.

Full text
Abstract:
We assessed the potential of machine learning (ML) for mapping crop growth in three flood-irrigated fields. Results generated from the ML algorithms were compared to the output generated by the ISODATA algorithm. Affinity Propagation (AP) identifies the number of clusters by considering all data points as potential exemplars and iteratively refining the set, while the Gaussian Mixture Model (GMM) algorithm treats the data as a mixture of several Gaussian distributions, allowing for flexible cluster shapes. In contrast, ISODATA, a statistical clustering method, requires an analyst to specify the number of output clusters, followed by iterative splitting and merging of clusters based on variance and distance criteria. We acquired Landsat-derived NDVI images for three flood-irrigated fields over a span of four years. These images were collected at the start of the growing season to ensure consistency. Initially, we clustered the pixels in these images for each field using AP and determined the number of clusters. Next, we applied GMM to identify and define the clusters. Finally, we plotted the mean value of all the pixels in each cluster for every year and assigned the clusters to six thematic classes: the first three classes for consistent growth (good, average, or poor) across all four years, and the other three for mixed growth patterns (e.g., good in three years and average in one). Output maps generated from these methods were compared using IoU scores. The ML methods had greater efficiency in terms of replicating the steps for other fields, whereas ISODATA requires analyst intervention and interpretation.
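A toy version of the two-step clustering described above, assuming scikit-learn: Affinity Propagation picks the number of clusters from synthetic per-pixel NDVI vectors, and a GMM with that many components then defines the clusters; the data are placeholders, not Landsat imagery.

```python
import numpy as np
from sklearn.cluster import AffinityPropagation
from sklearn.mixture import GaussianMixture

# Synthetic stand-in for per-pixel NDVI vectors over four seasons: three
# growth levels produce three natural clusters.
rng = np.random.default_rng(0)
ndvi = np.vstack([rng.normal(m, 0.05, size=(100, 4)) for m in (0.3, 0.55, 0.8)])

# Step 1: Affinity Propagation determines the number of clusters.
ap = AffinityPropagation(random_state=0).fit(ndvi)
k = len(ap.cluster_centers_indices_)

# Step 2: a GMM with that many components defines the final clusters.
labels = GaussianMixture(n_components=k, random_state=0).fit_predict(ndvi)
print(k, np.bincount(labels))
```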
APA, Harvard, Vancouver, ISO, and other styles
17

Othman, Khairulnizam, Mohd Norzali Mohd, Muhammad Qusyairi Abdul Rahman, Mohd Hadri Mohamed Nor, Khairulnizam Ngadimon, and Zulkifli Sulaiman. "A Mixed Gaussian Distribution Approach using the Expectation-Maximization Algorithm for Topography Predictive Modelling." WSEAS TRANSACTIONS ON COMPUTERS 24 (April 7, 2025): 29–41. https://doi.org/10.37394/23205.2025.24.4.

Full text
Abstract:
The incidence of sugarcane crop infestations at the migration stage, especially by the top borer, can lower yields substantially, which may translate to revenue losses of over 20% across many parts of the world. Traditional pest surveillance approaches tend to lack the accuracy required for timely intervention. This research introduces a new burden rate concept incorporated within a Gaussian Mixture Model (GMM), framed within a machine learning environment in order to enhance the precision of infestation pattern prediction. Through the utilization of the Expectation-Maximization (EM) algorithm, the model easily receives maximum likelihood estimates automatically, thus efficiently dealing with cluster distributions at low computational costs. A significant extension of this research is the inclusion of wind direction and topography as dynamic predictors. This allows for maximizing the model's potential in determining highly susceptible locations of infestation. The incorporation of remote sensing and drone data increases the precision of parameter estimation, leading to accurate predictive modeling. The EM-based clustering method reaches a high level of accuracy of 97.5%, which is greater compared to conventional pest monitoring methods. The result of this study provides a new analytical instrument for pest outbreak control and forecasting in precision agriculture. The tool provides real-time workforce management, selective pest eradication, and efficient resource management. Furthermore, the new synergy of clustering processes, topographic modeling, and remote sensing used in the study achieves a scalable data-driven approach to sustainable farm management that involves proactive crop loss minimization.
APA, Harvard, Vancouver, ISO, and other styles
18

Zhou, Yuliang, Mingxuan Chen, Guanglong Du, Ping Zhang, and Xin Liu. "Intelligent grasping with natural human-robot interaction." Industrial Robot: An International Journal 45, no. 1 (2018): 44–53. http://dx.doi.org/10.1108/ir-05-2017-0089.

Full text
Abstract:
Purpose The aim of this paper is to propose a grasping method based on intelligent perception for implementing a grasp task under human guidance. Design/methodology/approach First, the authors leverage Kinect to collect environment information, including both image and voice. The target object is located and segmented by gesture recognition and speech analysis and finally grasped through path teaching. To obtain the posture of the human gesture accurately, the authors use the Kalman filtering (KF) algorithm to calibrate the posture, use the Gaussian mixture model (GMM) for human motion modeling, and then use Gaussian mixture regression (GMR) to predict the human motion posture. Findings Because much of the point-cloud information is useless, the authors combined the human's gesture to remove irrelevant objects in the environment as much as possible, which helps to reduce the computation while segmenting and recognizing objects; at the same time, to reduce the computation, the authors used a sampling algorithm based on the voxel grid. Originality/value The authors used the down-sampling algorithm, kd-tree algorithm and viewpoint feature histogram algorithm to remove the impact of unrelated objects and to obtain a better grasp state.
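A sketch of GMM-based motion modeling followed by Gaussian mixture regression (GMR), assuming scikit-learn and SciPy; the time-vs-pose trajectory below is a synthetic stand-in for demonstrated motion, and the helper `gmr_predict` is a hypothetical name, not the paper's implementation.

```python
import numpy as np
from scipy.stats import multivariate_normal
from sklearn.mixture import GaussianMixture

# Fit a joint GMM over [time, pose] samples, then predict E[pose | time] by GMR.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 200)
pose = np.c_[np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)]
pose += 0.05 * rng.standard_normal(pose.shape)
data = np.c_[t, pose]                       # joint samples [x, y1, y2]
dx = 1                                      # number of input dimensions

gmm = GaussianMixture(n_components=5, covariance_type="full",
                      random_state=0).fit(data)

def gmr_predict(x):
    """Conditional mean E[y | x] under the fitted joint GMM."""
    x = np.atleast_1d(x)
    mu, cov, w = gmm.means_, gmm.covariances_, gmm.weights_
    # responsibilities of each component given the input x
    h = np.array([w[k] * multivariate_normal.pdf(x, mu[k, :dx], cov[k, :dx, :dx])
                  for k in range(gmm.n_components)])
    h /= h.sum()
    y = np.zeros(mu.shape[1] - dx)
    for k in range(gmm.n_components):
        gain = cov[k, dx:, :dx] @ np.linalg.inv(cov[k, :dx, :dx])
        y += h[k] * (mu[k, dx:] + gain @ (x - mu[k, :dx]))
    return y

print(gmr_predict(0.25))                    # predicted pose at t = 0.25
```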
APA, Harvard, Vancouver, ISO, and other styles
19

Du, Zhibin, Hui Xie, Pengyu Zhai, et al. "Game-Based Flexible Merging Decision Method for Mixed Traffic of Connected Autonomous Vehicles and Manual Driving Vehicles on Urban Freeways." Applied Sciences 14, no. 16 (2024): 7375. http://dx.doi.org/10.3390/app14167375.

Full text
Abstract:
Connected Autonomous Vehicles (CAVs) have the potential to revolutionize traffic systems by autonomously handling complex maneuvers such as freeway ramp merging. However, the unpredictability of manual-driven vehicles (MDVs) poses a significant challenge. This study introduces a novel decision-making approach that incorporates the uncertainty of MDVs’ driving styles, aiming to enhance merging efficiency and safety. By framing the CAV-MDV interaction as an incomplete information static game, we categorize MDVs’ behaviors using a Gaussian Mixture Model–Support Vector Machine (GMM-SVM) method. The identified driving styles are then integrated into the flexible merging decision process, leveraging the concept of pure-strategy Nash equilibrium to determine optimal merging points and timing. A deep reinforcement learning algorithm is employed to refine CAVs’ control decisions, ensuring efficient right-of-way acquisition. Simulations at both micro and macro levels validate the method’s effectiveness, demonstrating improved merging success rates and overall traffic efficiency without compromising safety. The research contributes to the field by offering a sophisticated merging strategy that respects real-world driving behavior complexity, with potential for practical applications in urban traffic scenarios.
APA, Harvard, Vancouver, ISO, and other styles
20

Cai, Zhi, Jiawei Wang, Tong Li, et al. "A Novel Trajectory Based Prediction Method for Urban Subway Design." ISPRS International Journal of Geo-Information 11, no. 2 (2022): 126. http://dx.doi.org/10.3390/ijgi11020126.

Full text
Abstract:
In recent years, with the development of various types of public transportation, these systems have become more and more closely connected. Among them, subway transportation has become the first choice in major cities. However, the planning of subway stations is very difficult, and there are many factors to consider. Besides, few methods for selecting optimal station locations take other public transport into consideration. In order to study the relationship between different types of public transportation, the authors collected and analyzed the travel data of subway passengers and the trajectory data of taxi passengers. In this paper, a method based on LeaderRank and a Gaussian mixture model (GMM) is proposed for subway station location selection. In this method, the authors build a subway-passenger traffic zone weighted network and a station location prediction model. First, we evaluate the nodes in the network, then use the GPS track data of taxis to predict the locations of new stations in future subway construction, and analyze and discuss the land use characteristics in the predicted areas. Taking the design of a Beijing subway line as an example, the suitability of this method is illustrated.
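A minimal sketch of the location-prediction idea, assuming scikit-learn: fit a GMM to taxi trajectory coordinates so that component means serve as candidate station sites and weights as relative demand; the coordinates here are synthetic placeholders, not Beijing taxi data.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Synthetic stand-in for taxi pick-up/drop-off coordinates (longitude, latitude).
rng = np.random.default_rng(0)
coords = rng.random((5000, 2))

gmm = GaussianMixture(n_components=8, covariance_type="full",
                      random_state=0).fit(coords)
candidate_sites = gmm.means_      # centres of demand density: candidate stations
demand_share = gmm.weights_       # relative weight of each demand centre
print(np.c_[candidate_sites, demand_share])
```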
APA, Harvard, Vancouver, ISO, and other styles
21

Duan, Shaoming, Chuanyi Liu, Peiyi Han, et al. "HT-Fed-GAN: Federated Generative Model for Decentralized Tabular Data Synthesis." Entropy 25, no. 1 (2022): 88. http://dx.doi.org/10.3390/e25010088.

Full text
Abstract:
In this paper, we study the problem of privacy-preserving data synthesis (PPDS) for tabular data in a distributed multi-party environment. In a decentralized setting, for PPDS, federated generative models with differential privacy are used by the existing methods. Unfortunately, the existing models apply only to images or text data and not to tabular data. Unlike images, tabular data usually consist of mixed data types (discrete and continuous attributes) and real-world datasets with highly imbalanced data distributions. Existing methods hardly model such scenarios due to the multimodal distributions in the decentralized continuous columns and highly imbalanced categorical attributes of the clients. To solve these problems, we propose a federated generative model for decentralized tabular data synthesis (HT-Fed-GAN). There are three important parts of HT-Fed-GAN: the federated variational Bayesian Gaussian mixture model (Fed-VB-GMM), which is designed to solve the problem of multimodal distributions; federated conditional one-hot encoding with conditional sampling for global categorical attribute representation and rebalancing; and a privacy consumption-based federated conditional GAN for privacy-preserving decentralized data modeling. The experimental results on five real-world datasets show that HT-Fed-GAN obtains the best trade-off between the data utility and privacy level. For the data utility, the tables generated by HT-Fed-GAN are the most statistically similar to the original tables and the evaluation scores show that HT-Fed-GAN outperforms the state-of-the-art model in terms of machine learning tasks.
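A non-federated, single-column analogue of the VB-GMM step, assuming scikit-learn: a variational Bayesian GMM fitted to a multimodal continuous attribute, which prunes unneeded components automatically; the column values are synthetic and the federated training of HT-Fed-GAN is not reproduced.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

# A bimodal continuous column (e.g., an "amount" attribute) as synthetic data.
rng = np.random.default_rng(0)
col = np.concatenate([rng.normal(30, 5, 1000),
                      rng.normal(70, 3, 500)]).reshape(-1, 1)

# Variational Bayesian GMM: surplus components get near-zero weight, so the
# effective number of modes is inferred rather than fixed in advance.
vbgmm = BayesianGaussianMixture(n_components=10,
                                weight_concentration_prior=1e-2,
                                random_state=0).fit(col)
modes = vbgmm.predict(col)                 # mode assignment per value
print(np.round(vbgmm.weights_, 3))
```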
APA, Harvard, Vancouver, ISO, and other styles
22

Liu, Yue, Chunying Ma, and Zhehao Huang. "Can the digital economy improve green total factor productivity? An empirical study based on Chinese urban data." Mathematical Biosciences and Engineering 20, no. 4 (2023): 6866–93. http://dx.doi.org/10.3934/mbe.2023296.

Full text
Abstract:
With the new generation of technological revolution, the digital economy has progressively become a key driver of global economic development. In this context, how to promote green economic growth and improve green total factor productivity (GTFP) with the help of the digital economy is an important issue that urgently needs empirical research. We adopted the panel data of 278 Chinese prefecture-level cities from 2011 to 2020 to test whether the digital economy improves the GTFP through the Gaussian Mixed Model (GMM) dynamic panel model. The moderating effect model has been used to explore the impact mechanism from the perspectives of industrial structure upgrade and environmental regulation. In addition, a grouping regression was applied to the sample cities to test the heterogeneous impact of the digital economy on the GTFP. Based upon the empirical findings, this work has the following conclusions. First, the digital economy plays a significant role in improving the GTFP. Second, an industrial structure upgrade has a positive moderating effect on the ability of the digital economy to enhance the GTFP. The environmental regulation, in contrast, has a negative moderating effect. Third, the digital economy exerts heterogeneous impacts on the GTFP across regions, but not at the city level.
APA, Harvard, Vancouver, ISO, and other styles
23

Deng, Hui, Zhibin Ou, and Yichuan Deng. "Multi-Angle Fusion-Based Safety Status Analysis of Construction Workers." International Journal of Environmental Research and Public Health 18, no. 22 (2021): 11815. http://dx.doi.org/10.3390/ijerph182211815.

Full text
Abstract:
Hazardous accidents often happen on construction sites and bring fatal consequences, so safety management has long been a dilemma for construction managers. Although computer vision technology has been used on construction sites to identify construction workers and track their movement trajectories for safety management, the detection effect is often affected by the limited coverage of single cameras and by occlusion. A multi-angle fusion method applying the SURF feature algorithm is proposed to coalesce the information processed by an improved GMM (Gaussian mixture model) and HOG + SVM (histogram of oriented gradients and support vector machine), identifying obscured workers and achieving a better detection effect with larger coverage. Workers are tracked in real time, with their movement trajectories estimated using Kalman filters and their safety status analyzed to provide an early warning signal. Experimental studies are conducted to validate the proposed framework for worker detection and trajectory estimation; the results indicate that the framework is able to detect workers and predict their movement trajectories for safety forewarning.
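The GMM component of such pipelines is usually a per-pixel Gaussian mixture background model; OpenCV's MOG2 subtractor is a standard stand-in (an assumption — the paper's improved GMM is not necessarily this implementation), and "site.mp4" is a hypothetical video file.

```python
import cv2

# Per-pixel Gaussian mixture background model via OpenCV's MOG2 subtractor.
cap = cv2.VideoCapture("site.mp4")
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
                                                detectShadows=True)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg_mask = subtractor.apply(frame)      # moving foreground: candidate workers
    # fg_mask would next be passed to the HOG + SVM person classifier
cap.release()
```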
APA, Harvard, Vancouver, ISO, and other styles
24

Gao, Zitian, Danlu Guo, Dongryeol Ryu, and Andrew W. Western. "Enhancing the Accuracy and Temporal Transferability of Irrigated Cropping Field Classification Using Optical Remote Sensing Imagery." Remote Sensing 14, no. 4 (2022): 997. http://dx.doi.org/10.3390/rs14040997.

Full text
Abstract:
Mapping irrigated areas using remotely sensed imagery has been widely applied to support agricultural water management; however, accuracy is often compromised by the in-field heterogeneity of and interannual variability in crop conditions. This paper addresses these key issues. Two classification methods were employed to map irrigated fields using normalized difference vegetation index (NDVI) values derived from Landsat 7 and Landsat 8: a dynamic thresholding method (method one) and a random forest method (method two). To improve the representativeness of field-level NDVI aggregates, which are the key inputs in our methods, a Gaussian mixture model (GMM)-based filtering approach was adopted to remove noncrop pixels (e.g., trees and bare soils) and mixed pixels along the field boundary. To improve the temporal transferability of method one we dynamically determined the threshold value to account for the impact of interannual weather variability based on the dynamic range of NDVI values. In method two an innovative training sample pool was designed for the random forest modeling to enable automatic calibration for each season, which contributes to consistent performance across years. The irrigated field mapping was applied to a major irrigation district in Australia from 2011 to 2018, for summer and winter cropping seasons separately. The results showed that using GMM-based filtering can markedly improve field-level data quality and avoid up to 1/3 of omission errors for irrigated fields. Method two showed superior performance, exhibiting consistent and good accuracy (kappa > 0.9) for both seasons. The classified maps in wet winter seasons should be used with caution, because rainfall alone can largely meet plant water requirements, leaving the contribution of irrigation to the surface spectral signature weak. The approaches introduced are transferable to other areas, can support multiyear irrigated area mapping with high accuracy, and significantly reduced model development effort.
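A sketch of the GMM-based filtering step, assuming scikit-learn and synthetic NDVI values: a two-component GMM separates crop pixels from tree/soil/edge pixels, and only the higher-mean component is kept before the field-level aggregate is computed; the function name and data are illustrative, not from the paper.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def filter_crop_pixels(ndvi_pixels):
    """Keep only pixels assigned to the higher-mean (crop) GMM component."""
    x = np.asarray(ndvi_pixels).reshape(-1, 1)
    gmm = GaussianMixture(n_components=2, random_state=0).fit(x)
    crop_comp = int(np.argmax(gmm.means_.ravel()))
    keep = gmm.predict(x) == crop_comp
    return x[keep].ravel()

ndvi = np.concatenate([np.random.normal(0.75, 0.05, 800),   # crop
                       np.random.normal(0.30, 0.08, 200)])  # soil / non-crop
print(filter_crop_pixels(ndvi).mean())   # field-level NDVI after filtering
```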
APA, Harvard, Vancouver, ISO, and other styles
25

Liu, Lin, Bingbing Wang, Yongfu Li, and Nenglong Hu. "Regular Vehicle Spatial Distribution Estimation Based on Machine Learning." Journal of Electrical and Computer Engineering 2023 (August 30, 2023): 1–11. http://dx.doi.org/10.1155/2023/4954035.

Full text
Abstract:
For the mixed traffic flow, obtaining the distribution of connected vehicles (CVs) and regular vehicles (RVs) is of great significance for road network analysis and cooperative control in intelligent transportation systems (ITSs). However, whether it is based on fixed sensors or based on CVs and traffic mechanism to estimate the spatial distribution of RVs, the implementation complexity and low estimation accuracy are the points that need to be improved. This paper proposes a regular vehicle spatial distribution estimation method using adjacent connected vehicles as mobile sensors. First, to investigate the hidden relationship between the interaction information of adjacent CVs and the spatial distribution of RVs among CVs, the Gaussian mixture model-hidden Markov model (GMM-HMM) is selected as the identification method. Then, three sets of experiments were designed to study the influence of observed features on the identification capability of the model, generalization capability validation, and comparison with other methods, respectively. Finally, the proposed method is verified by the dataset generated by the car-following model. The experimental results show that selecting the relative position and time headway as observed features can effectively reflect the regular vehicle spatial distribution between adjacent CVs. The average accuracy of the proposed method to identify the regular vehicle spatial distribution is over 93.7%, which can provide valuable suggestions for the Internet of Vehicles application.
APA, Harvard, Vancouver, ISO, and other styles
26

Khalifa, Othman O., Muhammad H. Wajdi, Rashid A. Saeed, Aisha H. A. Hashim, Muhammed Z. Ahmed, and Elmustafa Sayed Ali. "Vehicle Detection for Vision-Based Intelligent Transportation Systems Using Convolutional Neural Network Algorithm." Journal of Advanced Transportation 2022 (March 15, 2022): 1–11. http://dx.doi.org/10.1155/2022/9189600.

Full text
Abstract:
Vehicle detection in Intelligent Transportation Systems (ITS) is a key factor ensuring road safety, as it is necessary for the monitoring of vehicle flow, illegal vehicle type detection, incident detection, and vehicle speed estimation. Despite its growing popularity in research, it remains a challenging problem that must be solved. Hardware-based solutions such as radar and LIDAR have been proposed but are too expensive to maintain and produce little valuable information for human operators at traffic monitoring systems. Software-based solutions using traditional algorithms such as the Histogram of Oriented Gradients (HOG) and the Gaussian mixture model (GMM) are computationally slow and not suitable for real-time traffic detection. Therefore, this paper reviews and evaluates different vehicle detection methods. In addition, a method utilizing a Convolutional Neural Network (CNN) is used to detect vehicles from roadway camera outputs, applying video processing techniques to extract the desired information. Specifically, the paper utilizes the YOLOv5s architecture coupled with the k-means algorithm to perform anchor box optimization under different illumination levels. Results from the simulated and evaluated algorithm show that the proposed model achieves a mAP of 97.8 on the daytime dataset and 95.1 on the nighttime dataset.
APA, Harvard, Vancouver, ISO, and other styles
27

Mattana, Sara, Alice Dal Fovo, João Luís Lagarto, et al. "Automated Phasor Segmentation of Fluorescence Lifetime Imaging Data for Discriminating Pigments and Binders Used in Artworks." Molecules 27, no. 5 (2022): 1475. http://dx.doi.org/10.3390/molecules27051475.

Full text
Abstract:
The non-invasive analysis of fluorescence from binders and pigments employed in mixtures in artworks is a major challenge in cultural heritage science due to the broad overlapping emission of different fluorescent species causing difficulties in the data interpretation. To improve the specificity of fluorescence measurements, we went beyond steady-state fluorescence measurements by resolving the fluorescence decay dynamics of the emitting species through time-resolved fluorescence imaging (TRFI). In particular, we acquired the fluorescence decay features of different pigments and binders using a portable and compact fibre-based imaging setup. Fluorescence time-resolved data were analysed using the phasor method followed by a Gaussian mixture model (GMM) to automatically identify the populations of fluorescent species within the fluorescence decay maps. Our results demonstrate that this approach allows distinguishing different binders when mixed with the same pigment as well as discriminating different pigments dispersed in a common binder. The results obtained could establish a framework for the analysis of a broader range of pigments and binders to be then extended to several other materials used in art production. The obtained results, together with the compactness and portability of the instrument, pave the way for future in situ applications of the technology on paintings.
APA, Harvard, Vancouver, ISO, and other styles
28

Jun, Sunghae. "Text Data Analysis Using Generalized Linear Mixed Model and Bayesian Visualization." Axioms 11, no. 12 (2022): 674. http://dx.doi.org/10.3390/axioms11120674.

Full text
Abstract:
Many parts of big data, such as web documents, online posts, papers, patents, and articles, are in text form, so the analysis of text data in the big data domain is an important task. Many methods based on statistics or machine learning algorithms have been studied for text data analysis. Most of them are analytical methods based on the generalized linear model (GLM). In the GLM, text data analysis is performed under the assumption that the error in the given data follows a Gaussian distribution. However, the GLM has shown limitations in the analysis of text data, including data sparseness, because preprocessed text data have a zero-inflated problem. To solve this problem, we propose a text data analysis using the generalized linear mixed model (GLMM) and Bayesian visualization. The objective of our study is therefore to propose the use of the GLMM to overcome the limitations of the conventional GLM in the analysis of text data with a zero-inflated problem. The GLMM uses various probability distributions as well as the Gaussian for error terms and considers the differences between observations by clustering. We also use Bayesian visualization to find meaningful associations between keywords. Lastly, we carried out an analysis of text data retrieved from real domains and provide the analytical results to show the performance and validity of our proposed method.
APA, Harvard, Vancouver, ISO, and other styles
29

Wang, Guangwei, and Xiaomei Chen. "Evaluation of the Online and Offline Mixed Teaching Effect of MOOC Based upon the Deep Neural Network Model." Wireless Communications and Mobile Computing 2022 (March 19, 2022): 1–12. http://dx.doi.org/10.1155/2022/2173005.

Full text
Abstract:
This article discusses the online and offline mixed teaching evaluation of MOOCs based on deep neural networks. Deep neural networks are an important means of solving various problems in many fields. They can evaluate a teacher's teaching attitude, the teaching content of the classroom, the teacher's narrative ability, the teaching methods used, and whether those methods are rigorous, and they can be trained on a large number of student evaluations of a course to produce results. This article first explains the advantages of the neural network model and the reasons for the emergence of MOOCs and their mixing with traditional classrooms. It also explains some deep neural network (DNN) models and algorithms, such as the BP neural network model and algorithm. This model uses backpropagation: when there is an error between the output sample of the neural network and the target sample, the error can be backpropagated to adjust the thresholds and weights so that the error reaches a minimum. The algorithm steps include forward propagation and backpropagation, which are combined with gradient descent to obtain the weight changes of the output layer and the hidden layer. The article also explains the Gaussian model in DNNs: the given training data vectors and the configuration of the GMM are used for expectation-maximization training with an iterative algorithm, and the unsupervised clustering accuracy (ACC) is applied to evaluate its performance. Pictures are used to describe the mixed-mode teaching mode in the MOOC environment, which must consider teaching practice conditions, time, location, curriculum resources, teaching methods and means, etc. It can cultivate students' spatial imagination, engineering consciousness, creative design ability, hand-drawing ability, and logical thinking ability, and it enables teachers to receive fair and just evaluations from students. Finally, this article discusses the parallelization and optimization of GPU-based DNN models, splits the DNN models, and combines different models to calculate weight parameters. It combines model training and data training in parallel to increase processing speed for the same amount of data, increase the batch size, improve accuracy, and reduce training oscillation. It can be concluded that the DNN model greatly improves training performance on the MOOC online and offline mixed course effect dataset: the calculation time is shortened, the convergence speed is accelerated, the accuracy rate is improved, and the acceleration ratio is increased by more than 37.37% year over year, while the accuracy increases by more than 12.34% year over year.
APA, Harvard, Vancouver, ISO, and other styles
30

Jang, Minseok, Hyun-Cheol Jeong, Taegon Kim, and Sung-Kwan Joo. "Load Profile-Based Residential Customer Segmentation for Analyzing Customer Preferred Time-of-Use (TOU) Tariffs." Energies 14, no. 19 (2021): 6130. http://dx.doi.org/10.3390/en14196130.

Full text
Abstract:
Smart meters and dynamic pricing are key factors in implementing a smart grid. Dynamic pricing is one of the demand-side management methods that can shift demand from on-peak to off-peak. Furthermore, dynamic pricing can help utilities reduce the investment cost of a power system by charging different prices at different times according to the system load profile. On the other hand, a dynamic pricing strategy that can satisfy residential customers is required from the customer's perspective. Residential load profiles can be used to understand residential customers' preferences for electricity tariffs. In this study, in order to analyze Korean residential customers' preference for time-of-use (TOU) rates from residential electricity consumption data, a representative load profile for each customer is obtained from the hourly median consumption. In the feature extraction stage, six features that can explain a customer's daily usage patterns are extracted from the representative load profile. Korean residential load profiles are clustered into four groups using a Gaussian mixture model (GMM) with the Bayesian information criterion (BIC), which helps find the optimal number of groups, in the clustering stage. Furthermore, a choice experiment (CE) is performed to identify Korean residential customers' preferences for TOU with selected attributes. A mixed logit model with a Bayesian approach is used to estimate each group's customer preference for the attributes of a time-of-use (TOU) tariff. Finally, a TOU tariff for each group's load profile is recommended using the estimated part-worths.
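A small sketch of GMM clustering with BIC-based selection of the number of groups, assuming scikit-learn; `profiles` is a hypothetical array of the six features extracted per customer, not the Korean smart-meter data.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Hypothetical (n_customers, n_features) matrix of per-customer load-profile features.
profiles = np.random.rand(500, 6)

# Fit GMMs with 1..10 components and keep the one with the lowest BIC.
models = [GaussianMixture(n_components=k, random_state=0).fit(profiles)
          for k in range(1, 11)]
bics = [m.bic(profiles) for m in models]
best = models[int(np.argmin(bics))]

groups = best.predict(profiles)             # cluster membership per customer
print(best.n_components, np.bincount(groups))
```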
APA, Harvard, Vancouver, ISO, and other styles
31

Ma, Wei, Chao Gou, and Yunyun Hou. "Research on Adaptive 1DCNN Network Intrusion Detection Technology Based on BSGM Mixed Sampling." Sensors 23, no. 13 (2023): 6206. http://dx.doi.org/10.3390/s23136206.

Full text
Abstract:
The development of internet technology has brought us benefits, but at the same time, there has been a surge in network attack incidents, posing a serious threat to network security. In the real world, the amount of attack data is much smaller than normal data, leading to a severe class imbalance problem that affects the performance of classifiers. Additionally, when using CNN for detection and classification, manual adjustment of parameters is required, making it difficult to obtain the optimal number of convolutional kernels. Therefore, we propose a hybrid sampling technique called Borderline-SMOTE and Gaussian Mixture Model (GMM), referred to as BSGM, which combines the two approaches. We utilize the Quantum Particle Swarm Optimization (QPSO) algorithm to automatically determine the optimal number of convolutional kernels for each one-dimensional convolutional layer, thereby enhancing the detection rate of minority classes. In our experiments, we conducted binary and multi-class experiments using the KDD99 dataset. We compared our proposed BSGM-QPSO-1DCNN method with ROS-CNN, SMOTE-CNN, RUS-SMOTE-CNN, RUS-SMOTE-RF, and RUS-SMOTE-MLP as benchmark models for intrusion detection. The experimental results show the following: (i) BSGM-QPSO-1DCNN achieves high accuracy rates of 99.93% and 99.94% in binary and multi-class experiments, respectively; (ii) the precision rates for the minority classes R2L and U2R are improved by 68% and 66%, respectively. Our research demonstrates that BSGM-QPSO-1DCNN is an efficient solution for addressing the imbalanced data issue in this field, and it outperforms the five intrusion detection methods used in this study.
APA, Harvard, Vancouver, ISO, and other styles
32

Kozłowski, Edward, Anna Borucka, Marta Cholewa-Wiktor, and Tomasz Jałowiec. "Influence of Selected Geopolitical Factors on Municipal Waste Management." Sustainability 17, no. 1 (2024): 190. https://doi.org/10.3390/su17010190.

Full text
Abstract:
The collection and transportation of municipal solid waste create a significant energy and carbon footprint, resulting in a significant environmental impact. Proper waste management organization is necessary to minimize this impact. This research aims to identify differences and similarities in waste collection sectors, distinguish affiliation clusters for different waste types, and determine the impact of geopolitical factors on waste production in the analyzed region. Therefore, the similarities of waste production in the separated sectors for different waste types were analyzed. Instead of using the Kolmogorov–Smirnov distance between distributions of waste production, the statistics have been calculated based on L1 and L2 norm because they give the scale of differences. The multidimensional scaling method (MDS) and cluster analysis with a Gaussian mixed model (GMM) were used to identify changes in waste production. This technique makes it possible to detect changes between sectors in the analyzed region. Significant differences in cluster membership of sectors by waste type were observed. Geopolitical factors such as the COVID-19 pandemic and the war in Ukraine have caused changes in the sector affiliations of the waste clusters under analysis. The pandemic caused changes in the affiliation of non-segregated waste, plastics, and glass, while no change in waste generation preferences was observed for paper and cardboard waste. The war in Ukraine caused changes in the generation preferences of all waste types in the analyzed region.
APA, Harvard, Vancouver, ISO, and other styles
33

Elking, Dennis M., G. Andrés Cisneros, Jean-Philip Piquemal, Thomas A. Darden, and Lee G. Pedersen. "Gaussian Multipole Model (GMM)." Journal of Chemical Theory and Computation 6, no. 1 (2009): 190–202. http://dx.doi.org/10.1021/ct900348b.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Satyanand, Singh. "High level speaker specific features as an efficiency enhancing parameters in speaker recognition system." International Journal of Electrical and Computer Engineering (IJECE) 9, no. 4 (2019): 2443–50. https://doi.org/10.11591/ijece.v9i4.pp2443-2450.

Full text
Abstract:
In this paper, I present high-level speaker-specific feature extraction considering intonation, linguistic rhythm, linguistic stress, and prosodic features directly from speech signals. I assume that rhythm is related to language units such as syllables and appears as changes in measurable parameters such as the fundamental frequency F0, duration, and energy. In this work, syllable-type features are selected as the basic unit for expressing the prosodic features. The approximate segmentation of continuous speech into syllable units is achieved by automatically locating the vowel starting points. Knowledge of high-level speaker-specific characteristics is used as a reference for extracting the prosodic features of the speech signal. High-level speaker-specific features extracted using this method may be useful in applications such as speaker recognition, where explicit phoneme/syllable boundaries are not readily available. The efficiency of the proposed speaker-specific features for automatic speaker recognition was evaluated on the TIMIT and HTIMIT corpora, with the TIMIT data initially sampled at 16 kHz and downsampled to 8 kHz. In the experiments, the baseline discriminating system and the HMM system are built on the TIMIT corpus with a set of 48 phonemes. The proposed ASR system shows efficiency improvements of 1.99%, 2.10%, 2.16% and 2.19% compared to the traditional ASR system for <10 ms, <20 ms, <30 ms and <40 ms of 16 kHz TIMIT utterances.
APA, Harvard, Vancouver, ISO, and other styles
35

Rusk, Sam, Chris Fernandez, Yoav Nygate, et al. "0710 REM Behavior Disorder Explainability in EEG via Spectral Band Cluster Prevalence." SLEEP 47, Supplement_1 (2024): A303—A304. http://dx.doi.org/10.1093/sleep/zsae067.0710.

Full text
Abstract:
Abstract Introduction Prior work has established substantial overlap in polysomnography features between synucleinopathy-associated RBD and PTSD/TASD-associated RBD (trauma-associated sleep disorders). However, our mechanistic understanding remains limited. To explore RBD endophenotypes, we applied a novel analysis for clustering and categorizing PSG without AI/ML or sleep scoring, Spectral-Band Cluster-Prevalence (SBCP), to examine and compare differences in EEG characteristics between patients with an RBD diagnosis versus clinical controls. Methods Our data source was retrospective EEG/EOG recordings from N=124 PSG participants (age=57.5 [SD=15]) including n=74 RBD-diagnosed patients (defined by PSG findings and patient-reported dream enactment) and n=50 clinical controls (AHI < 15). EEG channels were excluded based on artifacts and normalized to maximum voltage, EOG channels were normalized to in-channel voltage, and the signals were extracted into ten-second segments. Signal features were extracted for each segment: EEG delta (1–4 Hz), theta (4–8 Hz), alpha (8–12 Hz), and beta (12–30 Hz) spectral band-powers and EOG broadband-powers. The EEG band-power features were projected into a 3-dimensional subspace, where optimal parameters for a Gaussian Mixture Model (GMM) were identified to allow for mixed EEG states. The cluster quality measures Silhouette, Davies-Bouldin, and Akaike Information Criterion were evaluated to determine the optimal number of components (i.e., unique EEG states) required by the GMM to maximize the explained variance based on global optima in cluster quality values. Dwell-Fraction was estimated by assigning components to ten-second EEG segments and was used to report between-group differences. Results The GMM global optima identified n=3 components as the optimal number to describe short segments of EEG/EOG, measured by how well the components explain RBD-associated between-group differences, showing the highest cluster quality values observed across all 3 cluster quality measures. Dwell-Fraction (defined as the percentage of total sleep time spent in each component) revealed statistically significant differences associated with RBD (Component-3: RBD>Controls) and clinical controls (Component-1: Controls>RBD) based on Mann-Whitney U and t-test results. ROC-AUCs were calculated for classifying RBD vs. controls based only on the Dwell-Fraction (Component-3: 0.65, Component-1: 0.57, Component-2: 0.52). Relative to Component-1, which best described controls, Component-2 best described RBD. Further, Component-2 showed band-power distributions associated with RBD, including significantly higher theta/alpha band-power in EOG channels, lower delta/beta band-power in EEG frontal channels, and higher delta band-power in EEG central channels. Conclusion Spectral-Band Cluster-Prevalence has potential applications to improve identification of RBD and RBD subtype-specific EEG biomarkers associated with synucleinopathy and PTSD/TASD. Support (if any)
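The band-power-plus-GMM workflow can be sketched under simplifying assumptions: Welch band powers for delta/theta/alpha/beta are computed per ten-second segment, projected to a 3-D subspace, and candidate GMM component counts are screened with AIC and silhouette. The synthetic segments, sampling rate, and parameter ranges below are illustrative only; the SBCP method itself is not reproduced.

```python
import numpy as np
from scipy.signal import welch
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(1)
fs = 128
segments = rng.standard_normal((300, 10 * fs))      # hypothetical ten-second EEG segments

bands = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 12), "beta": (12, 30)}

def band_powers(seg):
    f, psd = welch(seg, fs=fs, nperseg=2 * fs)
    return [psd[(f >= lo) & (f < hi)].sum() for lo, hi in bands.values()]

X = np.log(np.array([band_powers(s) for s in segments]))
X3 = PCA(n_components=3).fit_transform(X)            # 3-D subspace, as in the study

# Screen the number of EEG "states" (GMM components) with AIC and silhouette.
for k in range(2, 6):
    gmm = GaussianMixture(n_components=k, random_state=0).fit(X3)
    labels = gmm.predict(X3)
    sil = silhouette_score(X3, labels) if len(set(labels)) > 1 else float("nan")
    print(k, "AIC:", round(gmm.aic(X3), 1), "silhouette:", round(sil, 3))
```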
APA, Harvard, Vancouver, ISO, and other styles
36

Zhang, Yi, Miaomiao Li, Siwei Wang, et al. "Gaussian Mixture Model Clustering with Incomplete Data." ACM Transactions on Multimedia Computing, Communications, and Applications 17, no. 1s (2021): 1–14. http://dx.doi.org/10.1145/3408318.

Full text
Abstract:
Gaussian mixture model (GMM) clustering has been extensively studied due to its effectiveness and efficiency. Though demonstrating promising performance in various applications, it cannot effectively address absent features among the data, which are not uncommon in practical applications. In this article, different from existing approaches that first impute the absent values and then perform GMM clustering on the imputed data, we propose to integrate the imputation and GMM clustering into a unified learning procedure. Specifically, the missing data are filled in by the result of GMM clustering, and the imputed data are then taken for GMM clustering. These two steps alternately negotiate with each other to achieve the optimum. In this way, the imputed data can best serve the GMM clustering. A two-step alternating algorithm with proven convergence is carefully designed to solve the resultant optimization problem. Extensive experiments have been conducted on eight UCI benchmark datasets, and the results have validated the effectiveness of the proposed algorithm.
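A minimal sketch of the alternating idea, assuming a simple responsibility-weighted re-imputation step (the paper's exact update rule is more elaborate): missing entries are initialized with column means, a GMM is fitted, the missing entries are re-imputed from the current responsibilities and component means, and the two steps iterate.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
X_true = np.vstack([rng.normal(0, 1, (100, 4)), rng.normal(4, 1, (100, 4))])
mask = rng.random(X_true.shape) < 0.1                 # 10% of the entries are missing
X_obs = X_true.copy()
X_obs[mask] = np.nan

# Start from column-mean imputation.
X_imp = X_obs.copy()
col_means = np.nanmean(X_obs, axis=0)
X_imp[mask] = col_means[np.where(mask)[1]]

for _ in range(10):
    gmm = GaussianMixture(n_components=2, random_state=0).fit(X_imp)
    resp = gmm.predict_proba(X_imp)                   # cluster responsibilities
    expected = resp @ gmm.means_                      # responsibility-weighted component means
    X_imp[mask] = expected[mask]                      # re-impute only the missing entries

print("imputation RMSE:", np.sqrt(np.mean((X_imp[mask] - X_true[mask]) ** 2)))
```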
APA, Harvard, Vancouver, ISO, and other styles
37

Alqulaity, Malak, and Po Yang. "Enhanced Conditional GAN for High-Quality Synthetic Tabular Data Generation in Mobile-Based Cardiovascular Healthcare." Sensors 24, no. 23 (2024): 7673. https://doi.org/10.3390/s24237673.

Full text
Abstract:
The generation of synthetic tabular data has emerged as a critical task in various fields, particularly in healthcare, where data privacy concerns limit the availability of real datasets for research and analysis. This paper presents an enhanced Conditional Generative Adversarial Network (GAN) architecture designed for generating high-quality synthetic tabular data, with a focus on cardiovascular disease datasets that encompass mixed data types and complex feature relationships. The proposed architecture employs specialized sub-networks to process continuous and categorical variables separately, leveraging metadata such as Gaussian Mixture Model (GMM) parameters for continuous attributes and embedding layers for categorical features. By integrating these specialized pathways, the generator produces synthetic samples that closely mimic the statistical properties of the real data. Comprehensive experiments were conducted to compare the proposed architecture with two established models: Conditional Tabular GAN (CTGAN) and Tabular Variational AutoEncoder (TVAE). The evaluation utilized metrics such as the Kolmogorov–Smirnov (KS) test for continuous variables, the Jaccard coefficient for categorical variables, and pairwise correlation analyses. Results indicate that the proposed approach attains a mean KS statistic of 0.3900, demonstrating strong overall performance that outperforms CTGAN (0.4803) and is comparable to TVAE (0.3858). Notably, our approach shows the lowest KS statistics for key continuous features, such as total cholesterol (KS = 0.0779), weight (KS = 0.0861), and diastolic blood pressure (KS = 0.0957), indicating its effectiveness in closely replicating real data distributions. Additionally, it achieved a Jaccard coefficient of 1.00 for eight out of eleven categorical variables, effectively preserving categorical distributions. These findings indicate that the proposed architecture captures both distributions and dependencies, providing a robust solution for supporting mobile personalized cardiovascular disease prevention systems.
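The GMM-based handling of continuous attributes can be illustrated with a small sketch of mode-specific normalization (a CTGAN-style convention, used here as an assumed stand-in for the paper's metadata pathway): a GMM is fitted to one continuous column, each value is normalized by the mean and standard deviation of its most likely component, and a one-hot mode indicator is kept alongside.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Hypothetical continuous attribute (e.g. total cholesterol) with two modes.
col = np.concatenate([rng.normal(180, 20, 700), rng.normal(260, 25, 300)]).reshape(-1, 1)

gmm = GaussianMixture(n_components=2, random_state=0).fit(col)
modes = gmm.predict(col)                                  # most likely component per value
means = gmm.means_.ravel()[modes]
stds = np.sqrt(gmm.covariances_.reshape(-1))[modes]

# Normalize each value by its component's statistics (the 4*std scaling is a common convention
# that keeps most values roughly in [-1, 1]); keep a one-hot mode indicator alongside.
normalized = (col.ravel() - means) / (4 * stds)
one_hot = np.eye(gmm.n_components)[modes]

print(np.round(normalized[:5], 3))
print(one_hot[:5])
```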
APA, Harvard, Vancouver, ISO, and other styles
38

Zhao, Mingliang, Fangyi Liu, Wei Sun, and Xin Tao. "The Relationship between Environmental Regulation and Green Total Factor Productivity in China: An Empirical Study Based on the Panel Data of 177 Cities." International Journal of Environmental Research and Public Health 17, no. 15 (2020): 5287. http://dx.doi.org/10.3390/ijerph17155287.

Full text
Abstract:
Promoting the coordinated development of industrialization and the environment is a goal pursued by all of the countries of the world. Strengthening environmental regulation (ER) and improving green total factor productivity (GTFP) are important means to achieving this goal. However, the relationship between ER and GTFP has been debated in the academic circles, which reflects the complexity of this issue. This paper empirically tested the relationship between ER and GTFP in China by using panel data and a systematic Gaussian Mixed Model (GMM) of 177 cities at the prefecture level. The research shows that the relationship between ER and GTFP is complex, which is reflected in the differences and nonlinearity between cities with different monitoring levels and different economic development levels. (1) The relationship between ER and GTFP is linear and non-linear in different urban groups. A positive linear relationship was found in the urban group with high economic development level, while a U-shaped nonlinear relationship was found in other urban groups. (2) There are differences in the inflection point value and the variable mean of ER in different urban groups, which have different promoting effects on GTFP. In key monitoring cities and low economic development level cities, the mean value of ER had not passed the inflection point, and ER was negatively correlated with GTFP. The mean values of ER variables in the whole sample, the non-key monitoring and the middle economic development level cities had all passed the inflection point, which gradually promoted the improvement of GTFP. (3) Among the control variables of the different city groups, science and technology input and the financial development level mainly had positive effects on GTFP, while foreign direct investment (FDI) and fixed asset investment variables mainly had negative effects.
APA, Harvard, Vancouver, ISO, and other styles
39

Singh, Renu, Arvind Singh, and Utpal Bhattacharjee. "A Review on Text-Independent Speaker Verification Techniques in Realistic World." Oriental journal of computer science and technology 9, no. 1 (2016): 36–40. http://dx.doi.org/10.13005/ojcst/901.07.

Full text
Abstract:
This paper presents a review of various speaker verification approaches in the realistic world, and explores a combinational approach between the Gaussian Mixture Model (GMM) and Support Vector Machine (SVM), as well as the Gaussian Mixture Model (GMM) and Universal Background Model (UBM).
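A minimal sketch of GMM-UBM scoring, under the simplifying assumption that the speaker model is trained directly rather than MAP-adapted from the UBM (as a full system would do): the verification score is the average per-frame log-likelihood ratio between the speaker GMM and the universal background model. All features below are synthetic.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Hypothetical MFCC-like frames: background population, target speaker, and two test utterances.
background = rng.normal(0, 1, (5000, 13))
speaker_train = rng.normal(0.5, 1, (800, 13))
test_target = rng.normal(0.5, 1, (300, 13))
test_impostor = rng.normal(0, 1, (300, 13))

ubm = GaussianMixture(n_components=16, covariance_type="diag", random_state=0).fit(background)
# Simplification: the speaker GMM is trained directly; real systems MAP-adapt it from the UBM.
spk = GaussianMixture(n_components=16, covariance_type="diag", random_state=0).fit(speaker_train)

def llr(frames):
    # Average per-frame log-likelihood ratio between the speaker model and the UBM.
    return spk.score(frames) - ubm.score(frames)

print("target LLR:", round(llr(test_target), 3), "| impostor LLR:", round(llr(test_impostor), 3))
```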
APA, Harvard, Vancouver, ISO, and other styles
40

Shi, X., and Q. H. Zhao. "GAUSSIAN MIXTURE MODEL AND RJMCMC BASED RS IMAGE SEGMENTATION." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2/W7 (September 13, 2017): 647–50. http://dx.doi.org/10.5194/isprs-archives-xlii-2-w7-647-2017.

Full text
Abstract:
Image segmentation methods based on the Gaussian Mixture Model (GMM) have two main problems: 1) the number of components is usually fixed, i.e., a fixed number of classes, and 2) GMM is sensitive to image noise. This paper proposed an RS image segmentation method that combines GMM with reversible jump Markov Chain Monte Carlo (RJMCMC). In the proposed algorithm, GMM was designed to model the distribution of pixel intensity in the RS image, the number of components was treated as a random variable, and a prior distribution was built for each parameter. To improve noise resistance, a Gibbs function was used to model the prior distribution of the GMM weight coefficients. According to Bayes' theorem, the posterior distribution was built, and RJMCMC was used to simulate the posterior distribution and estimate its parameters. Finally, an optimal segmentation is obtained for the RS image. Experimental results show that the proposed algorithm can converge to the optimal number of classes and produce ideal segmentation results.
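A full RJMCMC sampler is beyond a short example, so the sketch below shows only the GMM intensity-segmentation part, with the number of components chosen by BIC as a simpler stand-in for the trans-dimensional sampling of the class count. The synthetic image and parameter ranges are hypothetical.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Hypothetical single-band RS image: three land-cover classes plus additive noise.
h, w = 60, 60
truth = rng.integers(0, 3, (h, w))
image = np.array([0.2, 0.5, 0.8])[truth] + rng.normal(0, 0.05, (h, w))
pixels = image.reshape(-1, 1)

# Stand-in for RJMCMC's trans-dimensional moves over the class count: pick K by BIC.
best_k, best_bic, best_model = None, np.inf, None
for k in range(2, 7):
    gmm = GaussianMixture(n_components=k, random_state=0).fit(pixels)
    if gmm.bic(pixels) < best_bic:
        best_k, best_bic, best_model = k, gmm.bic(pixels), gmm

segmentation = best_model.predict(pixels).reshape(h, w)
print("selected number of classes:", best_k)
```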
APA, Harvard, Vancouver, ISO, and other styles
41

Liu, Jialu, Deng Cai, and Xiaofei He. "Gaussian Mixture Model with Local Consistency." Proceedings of the AAAI Conference on Artificial Intelligence 24, no. 1 (2010): 512–17. http://dx.doi.org/10.1609/aaai.v24i1.7659.

Full text
Abstract:
Gaussian Mixture Model (GMM) is one of the most popular data clustering methods, and it can be viewed as a linear combination of different Gaussian components. In GMM, each cluster obeys a Gaussian distribution, and the task of clustering is to group observations into different components by estimating each cluster's own parameters. The Expectation-Maximization algorithm is always involved in such estimation problems. However, many previous studies have shown that naturally occurring data may reside on or close to an underlying submanifold. In this paper, we consider the case where the probability distribution is supported on a submanifold of the ambient space. We take into account the smoothness of the conditional probability distribution along the geodesics of the data manifold. That is, if two observations are close in intrinsic geometry, their distributions over the different Gaussian components are similar. Simply speaking, we introduce a novel method based on manifold structure for data clustering, called Locally Consistent Gaussian Mixture Model (LCGMM). Specifically, we construct a nearest neighbor graph and adopt the Kullback-Leibler divergence as the distance measurement to regularize the objective function of GMM. Experiments on several data sets demonstrate the effectiveness of such regularization.
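The local-consistency term can be sketched on top of a standard fitted GMM: build a k-nearest-neighbor graph as the manifold proxy and sum the symmetric KL divergence between the posterior (component-membership) distributions of neighboring points. This only evaluates the regularizer; the full LCGMM optimization (regularized EM) is not shown, and the dataset and neighborhood size are arbitrary.

```python
import numpy as np
from sklearn.datasets import make_moons
from sklearn.mixture import GaussianMixture
from sklearn.neighbors import NearestNeighbors

X, _ = make_moons(n_samples=300, noise=0.08, random_state=0)

gmm = GaussianMixture(n_components=2, random_state=0).fit(X)
P = np.clip(gmm.predict_proba(X), 1e-12, 1.0)        # posterior distribution over components

# k-nearest-neighbor graph as the manifold proxy.
nn = NearestNeighbors(n_neighbors=6).fit(X)
_, idx = nn.kneighbors(X)

def sym_kl(p, q):
    return 0.5 * (np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))

# Local-consistency penalty: KL divergence between posteriors of neighboring observations.
penalty = sum(sym_kl(P[i], P[j]) for i in range(len(X)) for j in idx[i][1:])
print("local-consistency penalty:", round(penalty, 3))
```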
APA, Harvard, Vancouver, ISO, and other styles
42

Bakare, K. A., and I. E. Torentikaza. "An Improved Semi-Supervised Gaussian Mixture Model (I-SGMM)." Research Briefs on Information and Communication Technology Evolution 9 (November 12, 2023): 147–59. http://dx.doi.org/10.56801/rebicte.v9i.165.

Full text
Abstract:
In the era of data-driven decision-making, the Gaussian Mixture Model (GMM) stands as a cornerstone in statistical modeling, particularly in clustering and density estimation. The Improved GMM presents a robust solution to a fundamental problem in clustering: the determination of the optimal number of clusters. Unlike its predecessor, it does not rely on a predetermined cluster count but employs model selection criteria, such as the Bayesian Information Criterion (BIC) or Akaike Information Criterion (AIC), to automatically identify the most suitable cluster count for the given data. This inherent adaptability is a hallmark of the Improved GMM, making it a versatile tool in a broad spectrum of applications, from market segmentation to image processing. Furthermore, the Improved GMM revolutionizes parameter estimation and model fitting. It leverages advanced optimization techniques, such as the Expectation-Maximization (EM) algorithm or variational inference, to achieve convergence to more favorable local optima. This results in precise and reliable parameter estimates, including cluster means, covariances, and component weights. The Improved GMM is particularly invaluable when dealing with data of varying complexities, non-standard data distributions, and clusters with differing shapes and orientations. It excels at capturing the nuanced relationships within the data, providing a powerful framework for understanding complex systems. One of the key differentiators of the Improved GMM is its accommodation of full covariance matrices for each component. This feature empowers the model to account for intricate interdependencies between variables, which is essential for modeling real-world data effectively. It is capable of handling data that exhibits non-spherical or irregular cluster shapes, a significant limitation of the traditional GMM.
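The core idea of letting the data choose the number of components can be sketched with scikit-learn: fit GMMs with full covariance matrices over a range of component counts and compare BIC and AIC. This is a generic illustration, not the I-SGMM implementation, and the semi-supervised part is not reproduced.

```python
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.mixture import GaussianMixture

X, _ = make_blobs(n_samples=600, centers=4, cluster_std=[1.0, 2.5, 0.5, 1.5], random_state=7)

scores = []
for k in range(1, 9):
    gmm = GaussianMixture(n_components=k, covariance_type="full", n_init=3, random_state=0).fit(X)
    scores.append((k, gmm.bic(X), gmm.aic(X)))

best_by_bic = min(scores, key=lambda s: s[1])[0]
best_by_aic = min(scores, key=lambda s: s[2])[0]
print("components chosen by BIC:", best_by_bic, "| by AIC:", best_by_aic)
```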
APA, Harvard, Vancouver, ISO, and other styles
43

Ding, Ing Jr, Chih Ta Yen, and Che Wei Chang. "Classification of Chinese Popular Songs Using a Fusion Scheme of GMM Model Estimate and Formant Feature Analysis." Applied Mechanics and Materials 479-480 (December 2013): 1006–9. http://dx.doi.org/10.4028/www.scientific.net/amm.479-480.1006.

Full text
Abstract:
In this paper, a fusion scheme that combines Gaussian mixture model (GMM) calculations and formant feature analysis, called GMM-Formant, is proposed for the classification of Chinese popular songs. Generally, automatic classification of popular music can be performed by two main categories of techniques, model-based and feature-based approaches. In model-based classification techniques, GMM is widely used for its simplicity. In feature-based music recognition, the formant parameter is an important acoustic feature for evaluation. The proposed GMM-Formant method makes use of linear interpolation to combine GMM likelihood estimates and formant evaluation results appropriately. GMM-Formant effectively adjusts the likelihood score derived from GMM calculations by referring to a certain degree of formant feature evaluation outcomes. By considering both model-based and feature-based techniques for song classification, GMM-Formant provides a more reliable classification result and therefore maintains satisfactory recognition accuracy. Experimental results obtained from a musical data set of numerous Chinese popular songs show the superiority of the proposed GMM-Formant. Keywords: Song classification; Gaussian mixture model; Formant feature; GMM-Formant.
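The fusion rule itself is simple to sketch: the per-class score is a linear interpolation between the GMM log-likelihood and a formant-based score. The genres, features, formant scores, and interpolation weight below are all hypothetical placeholders.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Hypothetical per-genre training features and a test song's frame-level features.
genres = {"ballad": rng.normal(0, 1, (400, 6)), "rock": rng.normal(1, 1, (400, 6))}
test_frames = rng.normal(0.9, 1, (120, 6))

gmms = {g: GaussianMixture(n_components=4, random_state=0).fit(F) for g, F in genres.items()}

# Stand-in formant scores per class (in practice derived from formant feature analysis).
formant_score = {"ballad": -2.1, "rock": -1.3}

alpha = 0.7   # interpolation weight between model-based and feature-based evidence
fused = {g: alpha * gmms[g].score(test_frames) + (1 - alpha) * formant_score[g] for g in gmms}
print("fused scores:", {g: round(s, 3) for g, s in fused.items()},
      "-> predicted class:", max(fused, key=fused.get))
```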
APA, Harvard, Vancouver, ISO, and other styles
44

Zhang, Yan, Cun Bao Chen, and Li Zhao. "Noise Classification Based on GMM and AANN." Applied Mechanics and Materials 58-60 (June 2011): 1847–53. http://dx.doi.org/10.4028/www.scientific.net/amm.58-60.1847.

Full text
Abstract:
In this paper, the Gaussian mixture model (GMM) is applied as the specific method for noise classification. On this basis, a modified Gaussian mixture model with an embedded auto-associative neural network (AANN) is proposed. It integrates the merits of GMM and AANN. The GMM and the AANN are trained as a whole by means of maximum likelihood (ML). During training, the parameters of the GMM and the AANN are updated alternately. The AANN reshapes the distribution of the data and improves the similarity of the feature data within the same noise distribution type. Experiments show that the GMM with an embedded AANN improves the noise classification accuracy rate over the baseline GMM.
APA, Harvard, Vancouver, ISO, and other styles
45

Ma, Yong, Qiwen Jin, Xiaoguang Mei, et al. "Hyperspectral Unmixing with Gaussian Mixture Model and Low-Rank Representation." Remote Sensing 11, no. 8 (2019): 911. http://dx.doi.org/10.3390/rs11080911.

Full text
Abstract:
The Gaussian mixture model (GMM) has been one of the most representative models for hyperspectral unmixing while considering endmember variability. However, GMM unmixing models only place suitable smoothness and sparsity prior constraints on the abundances and thus do not take into account the possible local spatial correlation. When pixels lie on the boundaries of different materials or in inhomogeneous regions, the abundances of the neighboring pixels do not share those prior constraints. Thus, we propose a novel GMM unmixing method based on superpixel segmentation (SS) and low-rank representation (LRR), called GMM-SS-LRR. We adopt SS on the first principal component of the HSI to obtain homogeneous regions. Moreover, the HSI to be unmixed is partitioned into regions where the statistical properties of the abundance coefficients have an underlying low-rank property. Then, to further exploit the spatial data structure, we use GMM to formulate the unmixing problem under the Bayesian framework and incorporate the low-rank property into the objective function as prior knowledge, using generalized expectation maximization to solve the objective function. Experiments on synthetic datasets and real HSIs demonstrated that the proposed GMM-SS-LRR is efficient compared with other currently popular methods.
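The superpixel preprocessing step can be sketched as follows (the GMM unmixing with the low-rank prior is not reproduced): take the first principal component of a synthetic HSI cube, run SLIC superpixel segmentation on it, and group the pixel spectra by superpixel. The cube is random, and the channel_axis argument assumes scikit-image >= 0.19.

```python
import numpy as np
from sklearn.decomposition import PCA
from skimage.segmentation import slic

rng = np.random.default_rng(0)
# Hypothetical HSI cube: 50 x 50 pixels, 100 spectral bands.
h, w, bands = 50, 50, 100
cube = rng.random((h, w, bands))

# First principal component of the spectra, reshaped to the image grid and scaled to [0, 1].
pc1 = PCA(n_components=1).fit_transform(cube.reshape(-1, bands)).reshape(h, w)
pc1 = (pc1 - pc1.min()) / (pc1.max() - pc1.min())

# SLIC superpixels on PC1 (channel_axis=None marks a single-channel image; scikit-image >= 0.19).
segments = slic(pc1, n_segments=60, compactness=0.1, channel_axis=None)

# Group the pixel spectra by superpixel; each group would then receive its own low-rank prior.
flat = cube.reshape(-1, bands)
groups = {s: flat[(segments == s).ravel()] for s in np.unique(segments)}
print("number of superpixels:", len(groups))
```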
APA, Harvard, Vancouver, ISO, and other styles
46

Bhuvaneswari, M. "Gaussian mixture model: An application to parameter estimation and medical image classification." Journal of Scientific and Innovative Research 5, no. 3 (2016): 100–105. http://dx.doi.org/10.31254/jsir.2016.5308.

Full text
Abstract:
Gaussian mixture model based parameter estimation and classification has recently received great attention in modelling and processing data. The Gaussian Mixture Model (GMM) is a probabilistic model for representing the presence of subpopulations, and it works well with the classification and parameter estimation strategy. In this work, Maximum Likelihood Estimation (MLE) based on Expectation Maximization (EM) is used for the parameter estimation approach, and the estimated parameters are used for training and testing the images for normality and abnormality. The mean and the covariance, calculated as the parameters, are used in the Gaussian Mixture Model (GMM) based training of the classifier. The Support Vector Machine, a discriminative classifier, and the Gaussian Mixture Model, a generative model classifier, are the two most popular techniques. The classification performance of both classifiers used shows better proficiency when compared to the other classifiers. By combining the SVM and GMM, classification can be achieved at a better level, since estimating the parameters through the GMM yields only a small number of features, and hence no feature reduction techniques are needed. In this work, the GMM classifier and the SVM classifier are trained using the parameters and then compared.
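One way to combine the two models, roughly in the spirit described, is to use the EM-estimated GMM parameters of each image's intensity distribution (sorted means, variances, and weights) as a compact feature vector for an SVM. The synthetic "images", component count, and train/test split below are hypothetical.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def synth_image(abnormal):
    # Hypothetical image: "abnormal" images contain an extra bright region.
    img = rng.normal(0.4, 0.1, (32, 32))
    if abnormal:
        img[8:16, 8:16] += rng.normal(0.4, 0.05, (8, 8))
    return img.ravel()

def gmm_params(pixels, k=2):
    # EM-estimated parameters (sorted means, variances, weights) as a compact feature vector.
    g = GaussianMixture(n_components=k, random_state=0).fit(pixels.reshape(-1, 1))
    order = np.argsort(g.means_.ravel())
    return np.concatenate([g.means_.ravel()[order], g.covariances_.ravel()[order], g.weights_[order]])

y = np.array([0] * 60 + [1] * 60)
X = np.array([gmm_params(synth_image(a)) for a in y])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)
svm = SVC(kernel="rbf").fit(X_tr, y_tr)
print("SVM on GMM parameters, accuracy:", accuracy_score(y_te, svm.predict(X_te)))
```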
APA, Harvard, Vancouver, ISO, and other styles
47

Zhu, Haiqin. "Ice-Core Micro-CT Image Segmentation with Dual Stream Spectrum Deconvolution Neural Network and Gaussian Mixture Model." Journal of Electrical Systems 20, no. 3s (2024): 2588–600. http://dx.doi.org/10.52783/jes.3156.

Full text
Abstract:
Polar ice sheets, or ice cores, are among the most well-known natural archives that can provide crucial historical details about our planet's past environment. An important factor in establishing the fundamental characteristics of ice, such as pore close-off, albedo, and melt events, is the ice-core microstructure. To address these complications, Ice-Core Micro-CT Image Segmentation with Dual Stream Spectrum Deconvolution Neural Network and Gaussian Mixture Model (ICMCTS-WSOA-DSSDNN-GMM) is proposed. Initially, micro-scale CT images are collected from the Alfred Wegener Institute ice-core storage as input. The data are then pre-processed: image brightness is enhanced, salt-and-pepper noise is removed, and the outer ring (carbon fiber casing) is cropped so that only ice particles remain in the image, using Federated Neural Collaborative Filtering (FNCF). The pre-processed output is passed to segmentation, where the high-resolution micro-scale CT scans are segmented using a Gaussian Mixture Model (GMM). After that, the segmented images are fed to a Dual Stream Spectrum Deconvolution Neural Network optimized with the Water Strider Optimization Algorithm, which classifies the micro-scale CT images as sintered snow, compacted firn, or bubbly ice. The proposed ICMCTS-WSOA-DSSDNN-GMM method is implemented in Python. The ICMCTS-WSOA-DSSDNN-GMM approach attains 16.24%, 17.90%, and 27.7% higher accuracy, 14.04%, 25.51%, and 19.31% higher precision, and 14.36%, 12.65%, and 14.51% higher recall compared with existing techniques such as Ice-Core Micro-CT Image Segmentation with Deep Learning and Gaussian Mixture Method (ICMCTS-U-net-GMM), Computer-Aided Detection of COVID-19 from CT Images Based on the Gaussian Mixture Method with Kernel Support Vector Machine Classifier (ICMCTS-KNN-GMM), and GMMSeg: Gaussian-mixture-based generative semantic segmentation (ICMCTS-FCN-GMM), respectively.
APA, Harvard, Vancouver, ISO, and other styles
48

Muthahharah, Andi Shahifah, Muhammad Arif Tiro, and Aswi Aswi. "Application of Soft-Clustering Analysis Using Expectation Maximization Algorithms on Gaussian Mixture Model." Jurnal Varian 6, no. 1 (2022): 71–80. http://dx.doi.org/10.30812/varian.v6i1.2142.

Full text
Abstract:
Research on soft clustering has not been explored as much as hard clustering. Soft-clustering algorithms are important in solving complex clustering problems. One of the soft-clustering methods is the Gaussian Mixture Model (GMM). GMM is a clustering method that classifies data points into different clusters based on the Gaussian distribution. This study aims to determine the number of clusters formed by using the GMM method. The data used in this study are synthetic data on water quality indicators obtained from the Kaggle website. The stages of the GMM method are: imputing Not Available (NA) values (if any), checking the data distribution, conducting a normality test, and standardizing the data. The next step is to estimate the parameters with the Expectation-Maximization (EM) algorithm. The best number of clusters is chosen based on the largest value of the Bayesian Information Criterion (BIC). The results showed that the best number of clusters for the synthetic water quality data was 3. Cluster 1 consisted of 1110 observations in the low-quality category, cluster 2 consisted of 499 observations in the medium-quality category, and cluster 3 consisted of 1667 observations in the high-quality or acceptable category. The results of this study suggest that the GMM method can group the data correctly when the variables used are generally normally distributed. This method can be applied to real data, whether the variables are normally distributed or a mixture of Gaussian and non-Gaussian.
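The soft-clustering workflow can be sketched with scikit-learn: standardize the indicators, fit GMMs with EM over a range of cluster counts, select the count by BIC, and inspect the soft membership probabilities. Note that scikit-learn's BIC is minimized, whereas the convention referenced in the abstract (as in R's mclust) maximizes a sign-flipped BIC; the data below are synthetic.

```python
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.preprocessing import StandardScaler
from sklearn.mixture import GaussianMixture

X_raw, _ = make_blobs(n_samples=500, centers=3, n_features=5, random_state=3)
X = StandardScaler().fit_transform(X_raw)            # standardize the indicators

# Fit GMMs with EM for a range of cluster counts and select by (minimized) BIC.
models = {k: GaussianMixture(n_components=k, random_state=0).fit(X) for k in range(1, 7)}
best_k = min(models, key=lambda k: models[k].bic(X))
best = models[best_k]

resp = best.predict_proba(X)                          # soft memberships (responsibilities)
hard = resp.argmax(axis=1)
print("clusters selected by BIC:", best_k)
print("first observation's membership probabilities:", np.round(resp[0], 3))
print("cluster sizes (hard assignment):", np.bincount(hard))
```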
APA, Harvard, Vancouver, ISO, and other styles
49

Gao, Guohua, Jeroen Vink, Fredrik Saaf, and Terence Wells. "Strategies to Enhance the Performance of Gaussian Mixture Model Fitting for Uncertainty Quantification." SPE Journal 27, no. 01 (2021): 329–48. http://dx.doi.org/10.2118/204008-pa.

Full text
Abstract:
Summary When formulating history matching within the Bayesian framework, we may quantify the uncertainty of model parameters and production forecasts using conditional realizations sampled from the posterior probability density function (PDF). It is quite challenging to sample such a posterior PDF. Some methods [e.g., Markov chain Monte Carlo (MCMC)] are very expensive, whereas other methods are cheaper but may generate biased samples. In this paper, we propose an unconstrained Gaussian mixture model (GMM) fitting method to approximate the posterior PDF and investigate new strategies to further enhance its performance. To reduce the central processing unit (CPU) time of handling bound constraints, we reformulate the GMM fitting formulation such that an unconstrained optimization algorithm can be applied to find the optimal solution of unknown GMM parameters. To obtain a sufficiently accurate GMM approximation with the lowest number of Gaussian components, we generate random initial guesses, remove components with very small or very large mixture weights after each GMM fitting iteration, and prevent their reappearance using a dedicated filter. To prevent overfitting, we add a new Gaussian component only if the quality of the GMM approximation on a (large) set of blind-test data sufficiently improves. The unconstrained GMM fitting method with the new strategies proposed in this paper is validated using nonlinear toy problems and then applied to a synthetic history-matching example. It can construct a GMM approximation of the posterior PDF that is comparable to the MCMC method, and it is significantly more efficient than the constrained GMM fitting formulation (e.g., reducing the CPU time by a factor of 800 to 7,300 for problems we tested), which makes it quite attractive for large-scale history-matching problems. NOTE: This paper is also published as part of the 2021 SPE Reservoir Simulation Conference Special Issue.
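The component-growth strategy can be sketched under simplifying assumptions: GMMs of increasing size are fitted to (synthetic) posterior samples, and an extra Gaussian component is accepted only while the mean log-likelihood on held-out "blind-test" samples keeps improving. The unconstrained reformulation, the weight filter, and the reservoir application are not reproduced.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Hypothetical posterior samples standing in for the history-matching posterior.
samples = np.vstack([rng.normal(-2, 0.5, (400, 3)), rng.normal(1, 1.0, (600, 3))])
train, blind = train_test_split(samples, test_size=0.3, random_state=0)

best_ll, best_gmm = -np.inf, None
for k in range(1, 11):
    gmm = GaussianMixture(n_components=k, n_init=5, random_state=0).fit(train)
    ll = gmm.score(blind)                  # mean log-likelihood on blind-test data
    if ll <= best_ll + 1e-3:               # the extra component does not help: stop growing
        break
    best_ll, best_gmm = ll, gmm

print("accepted number of Gaussian components:", best_gmm.n_components)
```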
APA, Harvard, Vancouver, ISO, and other styles
50

Hong, Ruiwei, Qingjun Xing, Yuanyuan Shen, and Yanfei Shen. "Effective Quantization Evaluation Method of Functional Movement Screening with Improved Gaussian Mixture Model." Applied Sciences 13, no. 13 (2023): 7487. http://dx.doi.org/10.3390/app13137487.

Full text
Abstract:
Background: Functional movement screening (FMS) allows for the rapid assessment of an individual’s physical activity level and the timely detection of sports injury risk. However, traditional functional movement screening often requires on-site assessment by experts, which is time-consuming and prone to subjective bias. Therefore, the study of automated functional movement screening has become increasingly important. Methods: In this study, we propose an automated assessment method for FMS based on an improved Gaussian mixture model (GMM). First, oversampling of the minority samples is conducted and the movement features are manually extracted from the FMS dataset collected with two Azure Kinect depth sensors; then, Gaussian mixture models are trained separately on the feature data for the different scores (1 point, 2 points, 3 points); finally, the FMS assessment is conducted using maximum likelihood estimation. Results: The improved GMM has a higher scoring accuracy (improved GMM: 0.8) compared to other models (traditional GMM = 0.38, AdaBoost.M1 = 0.7, Naïve Bayes = 0.75), and the scoring results of the improved GMM have a high level of agreement with the expert scoring (kappa = 0.67). Conclusions: The results show that the proposed method based on the improved Gaussian mixture model can effectively perform the FMS assessment task, and it is potentially feasible to use depth cameras for FMS assessment.
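A minimal sketch of the scoring scheme, assuming simple random oversampling and synthetic features in place of the Kinect-derived ones: one GMM is trained per FMS score, a movement is assigned the score whose GMM gives the highest likelihood, and agreement with (synthetic) expert scores is summarized with Cohen's kappa.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)
# Hypothetical hand-crafted movement features for FMS scores 1, 2 and 3 (imbalanced classes).
X = {1: rng.normal(-1, 1, (30, 8)), 2: rng.normal(0, 1, (80, 8)), 3: rng.normal(1, 1, (160, 8))}

# Simple random oversampling of the minority classes up to the size of the largest class.
n_max = max(len(v) for v in X.values())
X_bal = {s: v[rng.integers(0, len(v), n_max)] for s, v in X.items()}

# One GMM per score; assessment assigns the score whose model gives the highest likelihood.
gmms = {s: GaussianMixture(n_components=2, random_state=0).fit(v) for s, v in X_bal.items()}

def assess(samples):
    ll = np.column_stack([gmms[s].score_samples(samples) for s in (1, 2, 3)])
    return ll.argmax(axis=1) + 1

test = np.vstack([rng.normal(m, 1, (10, 8)) for m in (-1, 0, 1)])
expert = np.repeat([1, 2, 3], 10)
print("kappa vs. expert scores:", round(cohen_kappa_score(expert, assess(test)), 2))
```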
APA, Harvard, Vancouver, ISO, and other styles