
Journal articles on the topic 'ICM algorithm'


Consult the top 50 journal articles for your research on the topic 'ICM algorithm.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and the bibliographic reference to the chosen work will be generated automatically in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Shahida, T. D., M. Othman, and M. K. Abdullah. "FAST ZEROX ALGORITHM FOR ROUTING IN OPTICAL MULTISTAGE INTERCONNECTION NETWORKS." IIUM Engineering Journal 11, no. 1 (2010): 28–39. http://dx.doi.org/10.31436/iiumej.v11i1.51.

Abstract:
Based on the ZeroX algorithm, a fast and efficient crosstalk-free time-domain algorithm called the Fast ZeroX (FastZ_X) algorithm is proposed for solving the optical crosstalk problem in optical Omega multistage interconnection networks. A new pre-routing technique called the inverse Conflict Matrix (iCM) is also introduced to map all possible conflicts identified between the nodes in the network, as an alternative representation of the standard conflict matrix commonly used in previous Zero-based algorithms. It is shown that the new iCM simplifies the original ZeroX algorithm, improving it by reducing the time needed to complete the routing process. Through simulation modeling, the new approach yields the best performance in terms of minimal routing time in comparison to the original ZeroX algorithm and the other algorithms tested in this paper.
2

BOAST, CHARLES W., and PHILIPPE BAVEYE. "ALLEVIATION OF AN INDETERMINACY PROBLEM AFFECTING TWO CLASSICAL ITERATIVE IMAGE THRESHOLDING ALGORITHMS." International Journal of Pattern Recognition and Artificial Intelligence 20, no. 01 (2006): 1–14. http://dx.doi.org/10.1142/s021800140600448x.

Abstract:
Thresholding algorithms are being increasingly used in a wide variety of disciplines to objectively discern patterns and objects in micrographs, still pictures or remotely-sensed images. Our experience has shown that three common thresholding algorithms exhibit indeterminacy, in that different operator inputs may lead to very different pattern characterizations. A grayscale image of a soil profile is used to illustrate this phenomenon in the case of the intermeans (IM), minimum error (ME), and Besag's iterated conditional modes (ICM) algorithms. For the illustrative example, the IM algorithm depends only weakly on the starting point of the iterative process: it converges to only two adjacent threshold values. In contrast, the ME algorithm converges to 14 different threshold values plus a segmentation that identifies the entire image as dye, and one that identifies none of it as dye. The ICM algorithm converges to an even wider variety of final segmentations, depending on its starting point. A noniterative modification of the IM and ME algorithms is proposed, providing a consistent method for choosing from among a set of apparently equally-valid segmentations.
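For readers unfamiliar with the intermeans iteration examined above, a minimal sketch follows (not the authors' code; the sample-based formulation, the convergence tolerance, and the synthetic test data are assumptions). It shows how the threshold is repeatedly reset to the midpoint of the two class means and why the final value can depend on the starting point.

```python
import numpy as np

def intermeans_threshold(image, t0, max_iter=500):
    """Iterative intermeans (IM) thresholding: a generic sketch, not the paper's code.

    Starting from an initial threshold t0, the threshold is repeatedly reset to the
    midpoint of the mean gray levels of the two classes it induces, until it stops
    changing. Different t0 values may converge to different final thresholds, which
    is the indeterminacy discussed in the abstract.
    """
    g = np.asarray(image, dtype=float).ravel()
    t = float(t0)
    for _ in range(max_iter):
        low, high = g[g <= t], g[g > t]
        if low.size == 0 or high.size == 0:      # degenerate split: stop here
            break
        t_new = 0.5 * (low.mean() + high.mean())
        if abs(t_new - t) < 0.5:                 # converged to within half a gray level
            return t_new
        t = t_new
    return t

# Two starting points on a bimodal test image (illustrative data, not the soil image).
rng = np.random.default_rng(0)
img = np.concatenate([rng.normal(60, 10, 5000), rng.normal(170, 25, 5000)])
print(intermeans_threshold(img, t0=30), intermeans_threshold(img, t0=220))
```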
3

Gao, Shan, Cheng Li, and Duyan Bi. "Image enhancement algorithm based on NF-ICM." Chinese Optics Letters 8, no. 5 (2010): 474–77. http://dx.doi.org/10.3788/col20100805.0474.

4

Zhang, Yan Ming, Hong Ling Ye, Yao Ming Li, and Yun Kang Sui. "The Research of Optimization Algorithm of Dynamic Topology Optimization Model of Continuum Structure Based on the ICM Method." Applied Mechanics and Materials 380-384 (August 2013): 1804–7. http://dx.doi.org/10.4028/www.scientific.net/amm.380-384.1804.

Abstract:
In this paper, we focus on the optimal structural design of dynamics for continuum structures and aim at constructing the topological optimization formulation by using the ICM (Independent Continuous Mapping) method, which takes weight as the objective function and the fundamental eigenfrequency as the constraint. The local mode is removed by selecting a suitable filter function. Two algorithms, dual sequential quadratic programming (DSQP) and the globally convergent method of moving asymptotes (GCMMA), are used to solve the mathematical optimization model. Finally, a numerical example is provided to demonstrate the validity and effectiveness of the ICM method and to compare the optimization results of the two algorithms. The results show that both optimization algorithms can solve the mathematical optimization model effectively.
5

Deng, Yaqi, Zhengwang Pei, Wenguo Li, and Dongchu Jiang. "Clutter Suppression Algorithm with Joint Intrinsic Clutter Motion Errors Calibration and Off-Grid Effects Mitigation in Airborne Passive Radars." Applied Sciences 13, no. 9 (2023): 5653. http://dx.doi.org/10.3390/app13095653.

Abstract:
In an airborne passive radar, multipath (MP) clutter, which is caused by MP signals contained in the contaminated reference signal, degrades the space-time adaptive processing (STAP) performance. The MP clutter suppression algorithm before STAP can mitigate the influence of impure reference signals. However, the performances of the existing MP clutter suppression methods deteriorate when the intrinsic clutter motion (ICM) exists because the sparse model of MP clutter is disturbed. To eliminate the impacts of ICM on MP clutter suppression, a joint optimization algorithm is developed for airborne passive radar. Firstly, the sparse model of MP clutter is modified by taking ICM fluctuation into account. Subsequently, the joint optimization function of the ICM fluctuation and MP clutter profile is derived. Finally, based on the local search technique, MP clutter is suppressed with ICM error calibration and off-grid effects mitigation. A range of simulations verify the reliability and superiority of the proposed method.
6

Thomas, Molnar. "CAPSULE NETWORK PERFORMANCE WITH AUTONOMOUS NAVIGATION." International Journal of Artificial Intelligence and Applications (IJAIA) 11, January (2020): 1–15. https://doi.org/10.5281/zenodo.3663522.

Abstract:
Capsule Networks (CapsNets) have been proposed as an alternative to Convolutional Neural Networks (CNNs). This paper showcases how CapsNets are more capable than CNNs for autonomous agent exploration of realistic scenarios. In real-world navigation, rewards external to agents may be rare. In turn, reinforcement learning algorithms can struggle to form meaningful policy functions. This paper’s approach, the Capsules Exploration Module (Caps-EM), pairs a CapsNets architecture with an Advantage Actor Critic algorithm. Other approaches for navigating sparse environments require intrinsic reward generators, such as the Intrinsic Curiosity Module (ICM) and Augmented Curiosity Modules (ACM). Caps-EM uses a more compact architecture without the need for intrinsic rewards. Tested using ViZDoom, Caps-EM uses 44% and 83% fewer trainable network parameters than the ICM and the Depth-Augmented Curiosity Module (D-ACM), respectively, and achieves 1141% and 437% average time improvement over the ICM and D-ACM, respectively, for converging to a policy function across "My Way Home" scenarios.
7

Ye, Hong Ling, Yao Ming Li, Yan Ming Zhang, and Yun Kang Sui. "Structural Topology Optimization with Dynamic Response Based on Independent Continuous Mapping Method." Advanced Materials Research 765-767 (September 2013): 1658–61. http://dx.doi.org/10.4028/www.scientific.net/amr.765-767.1658.

Abstract:
This paper takes weight as the objective, subject to multiple constraints on the response amplitude under harmonic excitation. The ICM method is employed for solving the topology optimization problem, and dual sequential quadratic programming (DSQP) is used to solve the resulting model. A numerical example is presented to demonstrate the validity and effectiveness of the ICM method.
8

Glendinning, R. H. "An evaluation of the icm algorithm for image reconstruction." Journal of Statistical Computation and Simulation 31, no. 3 (1989): 169–85. http://dx.doi.org/10.1080/00949658908811141.

9

Guo, Hui Min, and Ling Chao Zhan. "The TEM Image Segmentation Based on ICM-MRF Algorithm." Journal of Physics: Conference Series 1087 (September 2018): 022017. http://dx.doi.org/10.1088/1742-6596/1087/2/022017.

10

Bleich, Amnon, Antje Linnemann, Benjamin Jaidi, Björn H. Diem, and Tim O. F. Conrad. "Enhancing Electrocardiogram (ECG) Analysis of Implantable Cardiac Monitor Data: An Efficient Pipeline for Multi-Label Classification." Machine Learning and Knowledge Extraction 5, no. 4 (2023): 1539–56. http://dx.doi.org/10.3390/make5040077.

Abstract:
Implantable Cardiac Monitor (ICM) devices represent, as of today, the fastest-growing market for implantable cardiac devices. As such, they are becoming increasingly common in patients for measuring heart electrical activity. ICMs constantly monitor and record a patient’s heart rhythm, and when triggered, send it to a secure server where health care professionals (HCPs) can review it. These devices employ a relatively simplistic rule-based algorithm (due to energy consumption constraints) to raise alerts for abnormal heart rhythms. This algorithm is usually parameterized to an over-sensitive mode in order not to miss a case (resulting in a relatively high false-positive rate), and this, combined with the device’s nature of constantly monitoring the heart rhythm and its growing popularity, results in HCPs having to analyze and diagnose an ever-growing amount of data. In order to reduce the load on the latter, automated methods for ECG analysis are nowadays becoming a great tool to assist HCPs in their analysis. While state-of-the-art algorithms are data-driven rather than rule-based, training data for ICMs often have specific characteristics that make their analysis unique and particularly challenging. This study presents the challenges and solutions in automatically analyzing ICM data and introduces a method for its classification that outperforms existing methods on such data. It carries this out by combining high-frequency noise detection (which often occurs in ICM data) with a semi-supervised learning pipeline that allows for the re-labeling of training episodes, and by using segmentation and dimension-reduction techniques that are robust to morphology variations of the sECG signal (which are typical of ICM data). As a result, it performs better than state-of-the-art techniques on such data, with, e.g., an F1 score of 0.51 vs. 0.38 for our baseline state-of-the-art technique in correctly calling atrial fibrillation in ICM data. As such, it could be used in numerous ways, such as aiding HCPs in the analysis of ECGs originating from ICMs by, e.g., suggesting a rhythm type.
11

PEÑA-MORA, FENIOSKY, SANJEEV VADHAVKAR, and SIVA KUMAR DIRISALA. "Component-based software development for integrated construction management software applications." Artificial Intelligence for Engineering Design, Analysis and Manufacturing 15, no. 2 (2001): 173–87. http://dx.doi.org/10.1017/s0890060401152054.

Abstract:
This paper presents a framework and a prototype for designing Integrated Construction Management (ICM) software applications using reusable components. The framework supports the collaborative development of ICM software applications by a group of ICM application developers from a library of software components. The framework focuses on the use of an explicit software development process to capture and disseminate specialized knowledge that augments the description of the ICM software application components in a library. The importance of preserving and using this knowledge has become apparent with the recent trend of combining the software development process with the software application code. There are three main components in the framework: design patterns, a design rationale model, and intelligent search algorithms. Design patterns have been chosen to represent, record, and reuse the recurring design structures and associated design experience in object-oriented software development. The Design Recommendation and Intent Model (DRIM) was extended in the current research effort to capture the specific implementation of reusable software components. DRIM provides a method by which design rationale from multiple ICM application designers can be partially generated, stored, and later retrieved by a computer system. To address the issues of retrieval, the paper presents a unique representation of a software component and a search mechanism, based on Reggia's set-cover algorithm, that retrieves a set of components which can be combined to provide the required functionality. This paper also details an initial, proof-of-concept prototype based on the framework. By supporting nonobtrusive capture as well as effective access of vital design rationale information regarding the ICM application development process, the framework described in this paper is expected to provide a strong information base for designing ICM software.
12

Mohammadi, Yahya, Davoud Ali Saghi, Ali Reza Shahdadi, Guilherme Jordão de Magalhães Rosa, and Morteza Sattaei Mokhtari. "Inferring phenotypic causal structures among body weight traits via structural equation modeling in Kurdi sheep." Acta Scientiarum. Animal Sciences 42 (June 8, 2020): e48823. http://dx.doi.org/10.4025/actascianimsci.v42i1.48823.

Abstract:
Data collected on 2550 Kurdi lambs, originating from 1505 dams and 149 sires during 1991 to 2015 in the Hossein Abad Kurdi Sheep Breeding Station, located in Shirvan city, North Khorasan province, in the North-eastern area of Iran, were used for inferring causal relationships among the body weights at birth (BW), at weaning (WW), at six-month age (6MW), at nine-month age (9MW) and at yearling age (YW). The inductive causation (IC) algorithm was employed to search for causal structure among these traits. This algorithm was applied to the posterior distribution of the residual (co)variance matrix of a standard multivariate model (SMM). The causal structure detected by the IC algorithm, coupled with biological prior knowledge, provides a temporal recursive causal network among the studied traits. The studied traits were analyzed under three multivariate models including the SMM, a fully recursive multivariate model (FRM) and an IC-based multivariate model (ICM) via a Bayesian approach with 100,000 iterations, a thinning interval of 10 and the first 10,000 iterations as burn-in. The three considered multivariate models (SMM, FRM and ICM) were compared using the deviance information criterion (DIC) and predictive ability measures including the mean square of error (MSE) and Pearson's correlation coefficient between the observed and predicted values (r(y, ŷ)) of records. In general, the structural equation based models (FRM and ICM) performed better than the SMM in terms of lower DIC and MSE and also higher r(y, ŷ). Among the tested models, the ICM had the lowest (36678.551) and the SMM had the highest (36744.107) DIC values. For each of the traits studied, the lowest MSE and the highest r(y, ŷ) were obtained under the ICM. The causal effects of BW on WW, WW on 6MW, 6MW on 9MW and 9MW on YW were statistically significant values of 1.478, 0.737, 0.776 and 0.929 kg, respectively (99% highest posterior density intervals did not include zero).
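For reference, the two predictive-ability measures used to compare the SMM, FRM and ICM models can be computed in a few lines; the sketch below is an illustrative helper (the data values are made up, and the paper's cross-validation and Bayesian machinery are not reproduced).

```python
import numpy as np

def predictive_ability(y_obs, y_pred):
    """Mean squared error and Pearson correlation r(y, y_hat) between observed and
    model-predicted records, the two predictive-ability measures used in the study."""
    y_obs = np.asarray(y_obs, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    mse = float(np.mean((y_obs - y_pred) ** 2))
    r = float(np.corrcoef(y_obs, y_pred)[0, 1])
    return mse, r

# Example with made-up weaning weights (kg) and model predictions.
print(predictive_ability([18.2, 21.5, 19.9, 23.1], [18.0, 22.0, 19.5, 22.4]))
```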
13

Zhao, H. T., X. J. Li, Y. K. Li, J. F. Ge, and X. Y. Xu. "A NOVEL ADAPTIVE REMOTE SENSING PANSHARPENING ALGORITHM BASED ON THE ICM." International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLVIII-3/W2-2022 (October 27, 2022): 97–102. http://dx.doi.org/10.5194/isprs-archives-xlviii-3-w2-2022-97-2022.

Abstract:
In this paper, a novel Intersecting Cortical Model (ICM) based adaptive pansharpening algorithm is proposed to address the spectral distortion and loss of texture detail in remote sensing image fusion. The Shuffled Frog Leaping Algorithm (SFLA) is used in the proposed method to adaptively optimize the ICM model parameters. The fitness function of the SFLA is constructed from the fusion evaluation indices Q4 and SAM, which generates irregular optimal segmentation regions. These regions are then used to adaptively extract the detail information of the panchromatic image. Finally, the sharpened higher-resolution image is obtained from the weighted details and the upsampled multispectral image. Experiments are carried out with the WorldView-2 and GF-2 high-resolution datasets. The experimental results show that the proposed algorithm performs better than existing pansharpening fusion methods in both spectral preservation and spatial detail enhancement, which verifies the effectiveness of the algorithm.
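One of the two fitness-function ingredients named above, the spectral angle mapper (SAM), is simple to state; the sketch below computes the mean spectral angle between a fused and a reference multispectral image (an illustration only: the image shapes are assumptions, and Q4 and the SFLA itself are not reproduced).

```python
import numpy as np

def mean_sam(fused, reference, eps=1e-12):
    """Mean spectral angle (radians) between two (H, W, B) multispectral images:
    for each pixel, the angle between its fused and reference spectral vectors."""
    f = fused.reshape(-1, fused.shape[-1]).astype(float)
    r = reference.reshape(-1, reference.shape[-1]).astype(float)
    dots = (f * r).sum(axis=1)
    norms = np.linalg.norm(f, axis=1) * np.linalg.norm(r, axis=1) + eps
    angles = np.arccos(np.clip(dots / norms, -1.0, 1.0))
    return angles.mean()

# Toy usage on random 4-band images (purely illustrative).
rng = np.random.default_rng(0)
ref = rng.uniform(0.0, 1.0, size=(32, 32, 4))
print(mean_sam(ref + 0.01 * rng.normal(size=ref.shape), ref))
```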
14

Chang, Lin, Hao Zhang, Hua Yang, Tingting Lv, and Ning Tang. "Virtual covariance matrix reconstruction-based adaptive beamforming for small aperture array." PLOS ONE 18, no. 10 (2023): e0293012. http://dx.doi.org/10.1371/journal.pone.0293012.

Abstract:
Recently, many robust adaptive beamforming (RAB) algorithms have been proposed to improve beamforming performance when model mismatches occur. For a uniform linear array, a larger aperture array can obtain higher array gain for beamforming when the inter-sensor spacing is fixed. However, only the small aperture array can be used in the equipment limited by platform installation space, significantly weakening beamforming output performance. This paper proposes two beamforming methods for improving beamforming output in small aperture sensor arrays. The first method employs an integration algorithm that combines angular sector and gradient vector search to reconstruct the interference covariance matrix (ICM). Then, the interference-plus-noise covariance matrix (INCM) is reconstructed combined with the estimated noise power. The INCM and ICM are used to optimize the desired signal steering vector (SV) by solving a quadratically constrained quadratic programming (QCQP) problem. The second method proposes a beamforming algorithm based on a virtual extended array to increase the degree of freedom of the beamformer. First, the virtual conjugated array element is designed based on the structural characteristics of a uniform linear array, and received data at the virtual array element are obtained using a linear prediction method. Then, the extended INCM is reconstructed, and the desired signal SV is optimized using an algorithm similar to the actual array. The simulation results demonstrate the effectiveness of the proposed methods under different conditions.
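For context, beamformers that reconstruct an interference(-plus-noise) covariance matrix, as in the abstract above, typically end with the standard MVDR weight computation. The sketch below illustrates only that final step (it is not the authors' reconstruction or QCQP optimization; the array size, look direction, and identity placeholder for the reconstructed matrix are assumptions).

```python
import numpy as np

def mvdr_weights(R_in, a):
    """Minimum-variance distortionless-response weights from a reconstructed
    interference-plus-noise covariance matrix R_in and a (possibly optimized)
    desired-signal steering vector a:  w = R_in^{-1} a / (a^H R_in^{-1} a)."""
    Ri_a = np.linalg.solve(R_in, a)          # R_in^{-1} a without an explicit inverse
    return Ri_a / (a.conj().T @ Ri_a)

# Toy usage: 8-element uniform linear array, half-wavelength spacing (assumed).
M = 8
theta = np.deg2rad(10.0)                      # assumed look direction
a = np.exp(1j * np.pi * np.arange(M) * np.sin(theta))
R_in = np.eye(M, dtype=complex)               # placeholder for the reconstructed INCM
w = mvdr_weights(R_in, a)
print(np.abs(w.conj().T @ a))                 # distortionless response, ~1
```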
15

Fwu, Jong-Kae, and P. M. Djuric. "Unsupervised vector image segmentation by a tree structure-ICM algorithm." IEEE Transactions on Medical Imaging 15, no. 6 (1996): 871–80. http://dx.doi.org/10.1109/42.544504.

16

Khelifi, Lazhar, and Max Mignotte. "MC-SSM: Nonparametric Semantic Image Segmentation With the ICM Algorithm." IEEE Transactions on Multimedia 21, no. 8 (2019): 1946–59. http://dx.doi.org/10.1109/tmm.2019.2891418.

17

Kittleson, Michelle M., Khalid M. Minhas, Rafael A. Irizarry, et al. "Gene expression analysis of ischemic and nonischemic cardiomyopathy: shared and distinct genes in the development of heart failure." Physiological Genomics 21, no. 3 (2005): 299–307. http://dx.doi.org/10.1152/physiolgenomics.00255.2004.

Abstract:
Cardiomyopathy can be initiated by many factors, but the pathways from unique inciting mechanisms to the common end point of ventricular dilation and reduced cardiac output are unclear. We previously described a microarray-based prediction algorithm differentiating nonischemic (NICM) from ischemic cardiomyopathy (ICM) using nearest shrunken centroids. Accordingly, we tested the hypothesis that NICM and ICM would have both shared and distinct differentially expressed genes relative to normal hearts and compared gene expression of 21 NICM and 10 ICM samples with that of 6 nonfailing (NF) hearts using Affymetrix U133A GeneChips and significance analysis of microarrays. Compared with NF, 257 genes were differentially expressed in NICM and 72 genes in ICM. Only 41 genes were shared between the two comparisons, mainly involved in cell growth and signal transduction. Those uniquely expressed in NICM were frequently involved in metabolism, and those in ICM more often had catalytic activity. Novel genes included angiotensin-converting enzyme-2 (ACE2), which was upregulated in NICM but not ICM, suggesting that ACE2 may offer differential therapeutic efficacy in NICM and ICM. In addition, a tumor necrosis factor receptor was downregulated in both NICM and ICM, demonstrating the different signaling pathways involved in heart failure pathophysiology. These results offer novel insight into unique disease-specific gene expression that exists between end-stage cardiomyopathy of different etiologies. This analysis demonstrates that transcriptome analysis offers insight into pathogenesis-based therapies in heart failure management and complements studies using expression-based profiling to diagnose heart failure of different etiologies.
18

Gong, Mengmeng, Liang Zhang, Chuan Gao, Haiyan Wang, Xingrong Chen, and Xuefeng Zhang. "A Hybrid Adaptive Covariance Inflation Method for EnKF-Based ENSO Prediction." Journal of Climate 38, no. 2 (2025): 627–43. https://doi.org/10.1175/jcli-d-24-0175.1.

Abstract:
The ensemble-based data assimilation method is usually used for the initialization of El Niño–Southern Oscillation (ENSO) prediction. Because of sampling errors caused by a finite ensemble, imperfect physical parameterizations, and other factors, the multiplicative covariance inflation method is commonly employed in the standard ensemble Kalman filter (EnKF) to increase the prior variance and alleviate filter divergence. Given computational resource constraints, utilizing larger ensemble sizes to minimize sampling errors in high-dimensional oceanic or atmospheric models poses a challenge. The authors propose a new hybrid adaptive covariance inflation scheme in small ensembles and apply this method to an intermediate coupled model (ICM) used at the Institute of Oceanology, Chinese Academy of Sciences (IOCAS), named the IOCAS ICM, for ENSO prediction. Hybrid refers to performing both prior and posterior inflation. Results show that the hybrid t-X adaptive inflation scheme performs best within the ICM framework, reducing the analysis errors by 46% for the daily SST anomaly compared to the standard EnKF algorithm using the fixed multiplicative covariance inflation factor. The t-X adaptive algorithm enhances the standard EnKF’s forecasting ability by optimizing the initial forecast field and reducing internal model errors. This method notably improves the prediction skill for the Niño-1+2 SST anomaly, particularly in phase transitions. Regarding SST anomaly prediction, the advantages of the hybrid t-X adaptive method over the standard EnKF scheme mainly occur in the equatorial eastern Pacific and at the southern boundaries of the ICM.
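The fixed multiplicative inflation that the hybrid scheme is compared against can be written in a few lines. The sketch below (ours; the adaptive t-X scheme itself is not reproduced, and the ensemble sizes and inflation factor are illustrative) inflates the ensemble perturbations about the mean by a factor lambda before the EnKF analysis.

```python
import numpy as np

def multiplicative_inflation(ensemble, lam):
    """Inflate ensemble spread about the mean: x_i <- mean + lam * (x_i - mean).

    'ensemble' has shape (n_members, n_state); lam > 1 is the fixed inflation
    factor used by the standard EnKF baseline described in the abstract.
    """
    mean = ensemble.mean(axis=0, keepdims=True)
    return mean + lam * (ensemble - mean)

# Usage: 20 members, 3 state variables, 5% inflation (illustrative numbers).
ens = np.random.default_rng(1).normal(size=(20, 3))
print(np.var(multiplicative_inflation(ens, 1.05), axis=0) / np.var(ens, axis=0))
```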
19

Panda, Susmita, and Pradipta Kumar Nanda. "MRF Model-Based Estimation of Camera Parameters and Detection of Underwater Moving Objects." International Journal of Cognitive Informatics and Natural Intelligence 14, no. 4 (2020): 1–29. http://dx.doi.org/10.4018/ijcini.2020100101.

Abstract:
The detection of underwater objects in a video is a challenging problem, particularly when both the camera and the objects are in motion. In this article, this problem is conceived as an incomplete-data problem and hence formulated in the expectation-maximization (EM) framework. In the E-step, the frame labels are the maximum a posteriori (MAP) estimates, which are obtained using simulated annealing (SA) and the iterated conditional modes (ICM) algorithm. In the M-step, the camera model parameters, both intrinsic and extrinsic, are estimated. For parameter estimation, the features are extracted at coarse and fine scales. In order to continuously detect the object across video frames, the EM algorithm is repeated for each frame. The performance of the proposed scheme has been compared with other algorithms and is found to outperform them.
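As an illustration of the iterated-conditional-modes step used in the E-step above, one ICM sweep updates each pixel label to the value that maximizes its local conditional posterior. The sketch below uses a generic Gaussian-likelihood, 4-neighbour Potts-prior model (an assumption for illustration, not the authors' exact energy).

```python
import numpy as np

def icm_sweep(image, labels, means, sigma, beta):
    """One ICM sweep for MAP labelling under a Gaussian likelihood and a
    4-neighbour Potts prior (generic sketch, not the paper's exact model).

    image:  (H, W) gray levels; labels: (H, W) current integer labels;
    means:  per-class means; sigma: common noise std; beta: smoothness weight.
    """
    H, W = image.shape
    K = len(means)
    for i in range(H):
        for j in range(W):
            best_k, best_e = labels[i, j], np.inf
            for k in range(K):
                # Data term: negative log of the Gaussian likelihood (up to a constant)
                e = (image[i, j] - means[k]) ** 2 / (2 * sigma ** 2)
                # Potts prior: penalty beta for each disagreeing 4-neighbour
                for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < H and 0 <= nj < W and labels[ni, nj] != k:
                        e += beta
                if e < best_e:
                    best_k, best_e = k, e
            labels[i, j] = best_k
    return labels
```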
20

Lin, Zeyang, Jun Lai, Xiliang Chen, Lei Cao, and Jun Wang. "Learning to Utilize Curiosity: A New Approach of Automatic Curriculum Learning for Deep RL." Mathematics 10, no. 14 (2022): 2523. http://dx.doi.org/10.3390/math10142523.

Abstract:
In recent years, reinforcement learning algorithms based on automatic curriculum learning have been increasingly applied to multi-agent system problems. However, in the sparse reward environment, the reinforcement learning agents get almost no feedback from the environment during the whole training process, which leads to a decrease in the convergence speed and learning efficiency of the curriculum reinforcement learning algorithm. Based on the automatic curriculum learning algorithm, this paper proposes a curriculum reinforcement learning method based on the curiosity model (CMCL). The method divides the curriculum sorting criteria into temporal-difference error and curiosity reward, uses the K-fold cross validation method to evaluate the difficulty priority of task samples, uses the Intrinsic Curiosity Module (ICM) to evaluate the curiosity priority of the task samples, and uses the curriculum factor to adjust the learning probability of the task samples. This study compares the CMCL algorithm with other baseline algorithms in cooperative-competitive environments, and the experimental simulation results show that the CMCL method can improve the training performance and robustness of multi-agent deep reinforcement learning algorithms.
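The curiosity priority mentioned above comes from an ICM-style intrinsic reward, i.e. the prediction error of a learned forward model in feature space. The sketch below shows that generic bonus (an assumption for illustration; the CMCL paper's exact priority computation, encoder, and forward model are not reproduced).

```python
import numpy as np

def curiosity_reward(phi_s, phi_s_next, forward_model, eta=0.5):
    """ICM-style intrinsic reward: scaled prediction error of a forward model in
    feature space, r_i = eta/2 * ||f(phi(s), a) - phi(s')||^2.

    Here phi_s is assumed to already encode the action, and 'forward_model' is any
    callable predicting the next-state features (both are assumptions of this sketch).
    """
    pred = forward_model(phi_s)
    return 0.5 * eta * float(np.sum((pred - phi_s_next) ** 2))

# Toy usage with an identity "forward model" (purely illustrative).
phi_s, phi_next = np.ones(4), np.array([1.0, 1.2, 0.8, 1.0])
print(curiosity_reward(phi_s, phi_next, forward_model=lambda z: z))
```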
21

Opoku, Eugene A., Syed Ejaz Ahmed, Yin Song, and Farouk S. Nathoo. "Ant Colony System Optimization for Spatiotemporal Modelling of Combined EEG and MEG Data." Entropy 23, no. 3 (2021): 329. http://dx.doi.org/10.3390/e23030329.

Abstract:
Electroencephalography/Magnetoencephalography (EEG/MEG) source localization involves the estimation of neural activity inside the brain volume that underlies the EEG/MEG measures observed at the sensor array. In this paper, we consider a Bayesian finite spatial mixture model for source reconstruction and implement Ant Colony System (ACS) optimization coupled with Iterated Conditional Modes (ICM) for computing estimates of the neural source activity. Our approach is evaluated using simulation studies and a real data application in which we implement a nonparametric bootstrap for interval estimation. We demonstrate improved performance of the ACS-ICM algorithm as compared to existing methodology for the same spatiotemporal model.
22

Li, Yi Chao, Yi Sheng Zhang, and De Qun Li. "Shrinkage Analysis of Injection-Compression Molding for Transparent Plastic Panel by 3D Simulation." Applied Mechanics and Materials 44-47 (December 2010): 1029–33. http://dx.doi.org/10.4028/www.scientific.net/amm.44-47.1029.

Abstract:
Injection molding, which is used to fabricate transparent plastic panels, offers high efficiency and low cost and is widely used. However, at the end of the injection molding process, the product may be affected by uneven shrinkage, which introduces defects such as warpage into the final part. This can greatly damage its mechanical and optical quality. Injection-compression molding (ICM) can significantly minimize these defects. In the present paper, a 3D model and a coupled calculation method of flow, temperature and pressure are used to simulate the ICM process for an irregular transparent plastic panel. This method not only reconstructs the 3D flow front, temperature and pressure fields of the ICM process in a much more realistic way, but also more fully captures the length of the 3D fiber flow line (FFL) and the variation and homogenization of the shrinkage rate. Compared with the traditional temperature-volume contraction index method, this improved algorithm greatly improves the accuracy with which the through-thickness shrinkage rate of the panel is calculated and its warpage is predicted, which has important practical value for guiding and designing the ICM process.
23

Lee, Yu-Lin, Wei-Cheng Kao, Chih-Sheng Chen, Chi-Huang Ma, Pei-Wen Hsieh, and Chi-Min Lee. "Inverse Analysis for the Convergence-Confinement Method in Tunneling." Mathematics 10, no. 8 (2022): 1223. http://dx.doi.org/10.3390/math10081223.

Abstract:
For the safety of tunnel excavation, the observation of tunnel convergence not only provides a technique for assessing the stability of the surrounding ground, but also provides an estimate of the constitutive parameters of geological materials. This estimation method belongs to an inverse algorithm process called the inverse calculation method (ICM), which utilizes the incremental concept in the convergence-confinement method (CCM) to solve the support-ground interaction of circular tunnel excavation. The method is to determine the mathematical solution of the intersection of the two nonlinear curves, the support confining curve (SCC) and the ground reaction curve (GRC) in the CCM by using Newton’s recursive method and inversely calculating the unknown parameters. To verify the validity of the developed inverse algorithm process, this study compares the results of the ICM with those of the published articles. In addition, the modulus of rock mass and unsupported span are inversely deduced using the values of convergence difference measured in the practical case of railway tunnels.
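To illustrate the core numerical step of the inverse calculation method described above (finding the intersection of the GRC and SCC via Newton's recursive method), the sketch below applies Newton's method to the difference of two illustrative curve expressions. The curve shapes, parameters, and starting point are assumptions, not the paper's equations.

```python
import math

def newton_intersection(grc, scc, u0, tol=1e-8, max_iter=50, h=1e-6):
    """Locate the convergence u* at which the ground reaction curve (GRC) and the
    support confining curve (SCC) intersect, i.e. grc(u) - scc(u) = 0, using
    Newton's method with a forward-difference derivative (generic sketch only)."""
    u = u0
    for _ in range(max_iter):
        f = grc(u) - scc(u)
        df = (grc(u + h) - scc(u + h) - f) / h
        u_new = u - f / df
        if abs(u_new - u) < tol:
            return u_new
        u = u_new
    return u

# Illustrative curve shapes and parameters (assumptions, not the paper's equations):
# a softening GRC and a linear SCC that activates after an unsupported convergence of 2.
grc = lambda u: 1.0 - 0.8 * (1.0 - math.exp(-u / 5.0))   # normalized support pressure
scc = lambda u: 0.05 * max(u - 2.0, 0.0)                 # elastic support response
u_star = newton_intersection(grc, scc, u0=4.0)
print(u_star, grc(u_star))
```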
24

Ramirez Sierra, Michael Alexander, and Thomas R. Sokolowski. "AI-powered simulation-based inference of a genuinely spatial-stochastic gene regulation model of early mouse embryogenesis." PLOS Computational Biology 20, no. 11 (2024): e1012473. http://dx.doi.org/10.1371/journal.pcbi.1012473.

Abstract:
Understanding how multicellular organisms reliably orchestrate cell-fate decisions is a central challenge in developmental biology, particularly in early mammalian development, where tissue-level differentiation arises from seemingly cell-autonomous mechanisms. In this study, we present a multi-scale, spatial-stochastic simulation framework for mouse embryogenesis, focusing on inner cell mass (ICM) differentiation into epiblast (EPI) and primitive endoderm (PRE) at the blastocyst stage. Our framework models key regulatory and tissue-scale interactions in a biophysically realistic fashion, capturing the inherent stochasticity of intracellular gene expression and intercellular signaling, while efficiently simulating these processes by advancing event-driven simulation techniques. Leveraging the power of Simulation-Based Inference (SBI) through the AI-driven Sequential Neural Posterior Estimation (SNPE) algorithm, we conduct a large-scale Bayesian inferential analysis to identify parameter sets that faithfully reproduce experimentally observed features of ICM specification. Our results reveal mechanistic insights into how the combined action of autocrine and paracrine FGF4 signaling coordinates stochastic gene expression at the cellular scale to achieve robust and reproducible ICM patterning at the tissue scale. We further demonstrate that the ICM exhibits a specific time window of sensitivity to exogenous FGF4, enabling lineage proportions to be adjusted based on timing and dosage, thereby extending current experimental findings and providing quantitative predictions for both mutant and wild-type ICM systems. Notably, FGF4 signaling not only ensures correct EPI-PRE lineage proportions but also enhances ICM resilience to perturbations, reducing fate-proportioning errors by 10-20% compared to a purely cell-autonomous system. Additionally, we uncover a surprising role for variability in intracellular initial conditions, showing that high gene-expression heterogeneity can improve both the accuracy and precision of cell-fate proportioning, which remains robust when fewer than 25% of the ICM population experiences perturbed initial conditions. Our work offers a comprehensive, spatial-stochastic description of the biochemical processes driving ICM differentiation and identifies the necessary conditions for its robust unfolding. It also provides a framework for future exploration of similar spatial-stochastic systems in developmental biology.
25

Karimi, Mohsen, Mohammad Pichan, Adib Abrishamifar, and Mehdi Fazeli. "An improved integrated control modeling of a high-power density interleaved non-inverting buck-boost DC-DC converter." World Journal of Engineering 15, no. 6 (2018): 688–99. http://dx.doi.org/10.1108/wje-11-2017-0360.

Abstract:
Purpose: This paper aims to propose a novel integrated control method (ICM) for a high-power-density non-inverting interleaved buck-boost DC-DC converter. To achieve high power conversion with a conventional single-phase DC-DC converter, the inductor value must be increased. Such a converter is not suitable for industrial and high-power applications, as a large inductor value will increase the inductor current ripple. Thus, a two-phase non-inverting interleaved buck-boost DC-DC converter is proposed. Design/methodology/approach: The proposed ICM approach is based on the theory of integrated dynamic modeling of continuous conduction mode (CCM), discontinuous conduction mode and synchronizing parallel operation mode. In addition, it involves an output voltage controller with an inner current loop (inductor current controller) to provide fair balancing between the two stages. To ensure fast transient performance, the proposed digital ICM is implemented on a TMS320F28335 digital signal microprocessor. Findings: The results verify the effectiveness of the proposed ICM algorithm in achieving tight voltage regulation (under 0.01 per cent), very low inductor current ripple (1.96 per cent for boost, 1.1 per cent for buck) and fair input current balance between the two stages (unbalance current less than 0.5 A). Originality/value: The proposed ICM design procedure ensures a fast transient response even under high load variation and addresses the right-half-plane zeros of the CCM. In addition, the proposed method can equally divide the input current between the stages and stabilize the different parallel operation modes under large input voltage variations.
26

Rezaeimozafar, Mostafa, Mohsen Eskandari, Mohammad Hadi Amini, Mohammad Hasan Moradi, and Pierluigi Siano. "A Bi-Layer Multi-Objective Techno-Economical Optimization Model for Optimal Integration of Distributed Energy Resources into Smart/Micro Grids." Energies 13, no. 7 (2020): 1706. http://dx.doi.org/10.3390/en13071706.

Abstract:
The energy management system is executed in microgrids for optimal integration of distributed energy resources (DERs) into the power distribution grids. To this end, various strategies have been more focused on cost reduction, whereas effectively both economic and technical indices/factors have to be considered simultaneously. Therefore, in this paper, a two-layer optimization model is proposed to minimize the operation costs, voltage fluctuations, and power losses of smart microgrids. In the outer-layer, the size and capacity of DERs including renewable energy sources (RES), electric vehicles (EV) charging stations and energy storage systems (ESS), are obtained simultaneously. The inner-layer corresponds to the scheduled operation of EVs and ESSs using an integrated coordination model (ICM). The ICM is a fuzzy interface that has been adopted to address the multi-objectivity of the cost function developed based on hourly demand response, state of charges of EVs and ESS, and electricity price. Demand response is implemented in the ICM to investigate the effect of time-of-use electricity prices on optimal energy management. To solve the optimization problem and load-flow equations, hybrid genetic algorithm (GA)-particle swarm optimization (PSO) and backward-forward sweep algorithms are deployed, respectively. One-day simulation results confirm that the proposed model can reduce the power loss, voltage fluctuations and electricity supply cost by 51%, 40.77%, and 55.21%, respectively, which can considerably improve power system stability and energy efficiency.
27

Zhang, Jian Guang, Yong Xia Li, and Ping Chen. "SAR Image Segmentation Based on Bayesian Network." Advanced Materials Research 756-759 (September 2013): 1835–39. http://dx.doi.org/10.4028/www.scientific.net/amr.756-759.1835.

Abstract:
In this paper, we propose a Bayesian network model. Firstly, the Bayesian network model is introduced, and the Belief Propagation (BP) algorithm is utilized for model estimation. Then the Expectation-Maximization (EM) algorithm is used for parameter estimation of the Bayesian network. Finally, the SAR image is segmented by calculating the maximum a posteriori (MAP) probability of each pixel. Experimental results show that, compared with the Markov Random Field - Intersecting Cortical Model (MRF-ICM), our Bayesian network model gives better results in both segmentation quality and running time.
28

Arampatzis, Marios, Maria Pempetzoglou, and Athanasios Tsadiras. "Two Lot-Sizing Algorithms for Minimizing Inventory Cost and Their Software Implementation." Information 15, no. 3 (2024): 167. http://dx.doi.org/10.3390/info15030167.

Abstract:
Effective inventory management is crucial for businesses to balance minimizing holding costs while optimizing ordering strategies. Monthly or sporadic orders over time may lead to high ordering or holding costs, respectively. In this study, we introduce two novel algorithms designed to optimize ordering replenishment quantities, minimizing total replenishment, and holding costs over a planning horizon for both partially loaded and fully loaded trucks. The novelty of the first algorithm is that it extends the classical Wagner–Whitin approach by incorporating various additional cost elements, stock retention considerations, and warehouse capacity constraints, making it more suitable for real-world problems. The second algorithm presented in this study is a variation of the first algorithm, with its contribution being that it incorporates the requirement of several suppliers to receive order quantities that regard only fully loaded trucks. These two algorithms are implemented in Python, creating the software tool called “Inventory Cost Minimizing tool” (ICM). This tool takes relevant data inputs and outputs optimal order timing and quantities, minimizing total costs. This research offers practical and novel solutions for businesses seeking to streamline their inventory management processes and reduce overall expenses.
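Since the first algorithm extends the classical Wagner–Whitin approach, a minimal sketch of that baseline dynamic program may help (an illustration of the classical procedure only; the paper's additional cost elements, warehouse capacity constraints, and full-truck variant are not included, and the example numbers are made up).

```python
def wagner_whitin(demand, order_cost, hold_cost):
    """Classical Wagner-Whitin lot sizing (baseline only, without the paper's
    extensions): choose in which periods to order so that the total fixed ordering
    plus holding cost over the planning horizon is minimal.

    demand[t]  : demand of period t
    order_cost : fixed cost per order placed
    hold_cost  : cost of holding one unit for one period
    Returns (minimal total cost, list of order periods).
    """
    T = len(demand)
    best = [0.0] * (T + 1)           # best[t] = min cost to cover periods 0..t-1
    choice = [0] * (T + 1)
    for t in range(1, T + 1):
        best[t] = float("inf")
        for j in range(t):           # last order placed in period j covers j..t-1
            holding = sum(hold_cost * (k - j) * demand[k] for k in range(j, t))
            cost = best[j] + order_cost + holding
            if cost < best[t]:
                best[t], choice[t] = cost, j
    # Recover the order periods by backtracking.
    orders, t = [], T
    while t > 0:
        orders.append(choice[t])
        t = choice[t]
    return best[T], sorted(orders)

print(wagner_whitin([20, 50, 10, 80, 40], order_cost=100.0, hold_cost=1.0))
```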
29

Filali, Houcemeddine, and Karim Kalti. "Image segmentation using MRF model optimized by a hybrid ACO-ICM algorithm." Soft Computing 25, no. 15 (2021): 10181–204. http://dx.doi.org/10.1007/s00500-021-05957-1.

30

Kremer, Thomas, Emre Gazyakan, Joachim T. Maurer, et al. "Intra- and Extrathoracic Malignant Tracheoesophageal Fistula—A Differentiated Reconstructive Algorithm." Cancers 13, no. 17 (2021): 4329. http://dx.doi.org/10.3390/cancers13174329.

Abstract:
Background: Tracheoesophageal fistulae (TEF) after oncologic resections and multimodal treatment are life-threatening and surgically challenging. Radiation and prior procedures hamper wound healing and lead to high complication rates. We present an interdisciplinary algorithm for the treatment of TEF derived from the therapy of consecutive patients. Patients and methods: 18 patients (3 females, 15 males) treated for TEF from January 2015 to July 2017 were included. Two patients were treated palliatively, whereas reconstructions were attempted in 16 cases undergoing 24 procedures. Discontinuity resection and secondary gastric pull-up were performed in two patients. Pedicled reconstructions were pectoralis major (n = 2), sternocleidomastoid muscle (n = 2), latissimus dorsi (n = 1) or intercostal muscle (ICM, n = 7) flaps. Free flaps were anterolateral thigh (ALT, n = 4), combined anterolateral thigh/anteromedial thigh (ALT/AMT, n = 1), jejunum (n = 3) or combined ALT–jejunum flaps (n = 2). Results: Regarding all 18 patients, 11 of 16 reconstructive attempts were primarily successful (61%), whereas long-term success after multiple procedures was possible in 83% (n = 15). The 30-day survival was 89%. Derived from the experience, patients were divided into three subgroups (extrathoracic, cervicothoracic, intrathroracic TEF) and a treatment algorithm was developed. Primary reconstructions for extra- and cervicothoracic TEF were pedicled flaps, whereas free flaps were used in recurrent or persistent cases. Pedicled ICM flaps were mostly used for intrathoracic TEF. Conclusion: TEF after multimodal tumor treatment require concerted interdisciplinary efforts for successful reconstruction. We describe a differentiated reconstructive approach including multiple reconstructive techniques from pedicled to chimeric ALT/jejunum flaps. Hereby, successful reconstructions are mostly possible. However, disease and patient-specific morbidity has to be anticipated and requires further interdisciplinary management.
31

Liang, Jingwen, Xiner Huang, and Huibing Wu. "Research on evaluation model based on D & A system." Highlights in Business, Economics and Management 2 (November 6, 2022): 475–82. http://dx.doi.org/10.54097/hbem.v2i.2406.

Abstract:
The evaluation system established by our team for the D&A system is intended to help ICM companies fully utilize the data of their own output to understand their strengths and weaknesses, and to help ICM supply words to obtain higher benefits. We introduced the analytic hierarchy process (AHP) to calculate the weights of the nine tertiary indicators, among which human capacity, technical solutions and data management had the largest weights, 0.5584, 0.5936 and 0.625. Next, we used the entropy weighting method (EWM) to calculate the weights of the secondary and tertiary indicators, respectively. Subsequently, we used the genetic algorithm (GA), combined with the conclusions drawn earlier, to select the optimal solution; the optimization yielded a maximum value of 0.432.
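For readers unfamiliar with the entropy weighting method referenced above, a generic sketch follows (not the team's implementation; the indicator matrix and its scaling are assumptions). Indicators whose values vary more across alternatives carry lower entropy and therefore receive higher weight.

```python
import numpy as np

def entropy_weights(X):
    """Entropy weight method (EWM) for indicator weighting (generic sketch).

    X is an (alternatives x indicators) matrix of non-negative scores; the less
    uniform an indicator's column, the lower its entropy and the higher its weight.
    """
    P = X / X.sum(axis=0, keepdims=True)              # column-wise proportions
    k = 1.0 / np.log(X.shape[0])
    with np.errstate(divide="ignore", invalid="ignore"):
        logs = np.where(P > 0, np.log(P), 0.0)
    e = -k * (P * logs).sum(axis=0)                    # entropy per indicator
    d = 1.0 - e                                        # degree of divergence
    return d / d.sum()

# Toy usage: three alternatives scored on two indicators (illustrative numbers).
print(entropy_weights(np.array([[0.6, 0.2], [0.5, 0.9], [0.55, 0.1]])))
```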
32

He, Yizhou, Hy Trac, and Nickolay Y. Gnedin. "A Hydro-particle-mesh Code for Efficient and Rapid Simulations of the Intracluster Medium." Astrophysical Journal 925, no. 2 (2022): 134. http://dx.doi.org/10.3847/1538-4357/ac3bcb.

Abstract:
We introduce the cosmological HYPER code based on an innovative hydro-particle-mesh (HPM) algorithm for efficient and rapid simulations of gas and dark matter. For the HPM algorithm, we update the approach of Gnedin & Hui to expand the scope of its application from the lower-density intergalactic medium (IGM) to the higher-density intracluster medium (ICM). While the original algorithm tracks only one effective particle species, the updated version separately tracks the gas and dark matter particles, as they do not exactly trace each other on small scales. For the approximate hydrodynamics solver, the pressure term in the gas equations of motion is calculated using robust physical models. In particular, we use a dark matter halo model, ICM pressure profile, and IGM temperature–density relation, all of which can be systematically varied for parameter-space studies. We show that the HYPER simulation results are in good agreement with the halo model expectations for the density, temperature, and pressure radial profiles. Simulated galaxy cluster scaling relations for Sunyaev–Zel’dovich (SZ) and X-ray observables are also in good agreement with mean predictions, with scatter comparable to that found in hydrodynamic simulations. HYPER also produces lightcone catalogs of dark matter halos and full-sky tomographic maps of the lensing convergence, SZ effect, and X-ray emission. These simulation products are useful for testing data analysis pipelines, generating training data for machine learning, understanding selection and systematic effects, and for interpreting astrophysical and cosmological constraints.
33

Deng, Yonghong, and Quanzhu Zhang. "Resonance overvoltage control algorithms in long cable frequency conversion drive based on discrete mathematics." Open Physics 18, no. 1 (2020): 408–18. http://dx.doi.org/10.1515/phys-2020-0120.

Abstract:
In order to solve the problem that the long cable variable voltage and variable frequency (VVVF) system does not adopt an effective capacitor voltage-sharing control method, resulting in poor resonance overvoltage control, the resonance overvoltage control algorithm of the long cable VVVF system based on discrete mathematics is studied. First, the long cable frequency conversion drive system is established. In order to keep the voltage loss within the range of the motor requirements, a frequency converter–cable–motor (ICM) system connection mode is used to maintain system operation. Based on research into the capacitor voltage balance control strategy of a long cable frequency conversion drive system, the discrete mathematical model of the AC side of the ICM system is established using this control strategy. An improved constant active power controller is obtained from this mathematical model, and control of the resonant overvoltage in the long cable frequency conversion drive is realized by using the constant active power controller. The experimental results show that the algorithm can effectively control the resonance overvoltage phenomenon in the long cable frequency control system, with a control accuracy of over 97%. It has good performance and can be applied in practice.
34

Huang, Bo, Jiacheng Xie, and Jiawei Yan. "Inspection Robot Navigation Based on Improved TD3 Algorithm." Sensors 24, no. 8 (2024): 2525. http://dx.doi.org/10.3390/s24082525.

Abstract:
The swift advancements in robotics have rendered navigation an essential task for mobile robots. While map-based navigation methods depend on global environmental maps for decision-making, their efficacy in unfamiliar or dynamic settings falls short. Current deep reinforcement learning navigation strategies can navigate successfully without pre-existing map data, yet they grapple with issues like inefficient training, slow convergence, and infrequent rewards. To tackle these challenges, this study introduces an improved twin-delayed deep deterministic policy gradient algorithm (LP-TD3) for local planning navigation. Initially, the integration of the long short-term memory (LSTM) module with the Prioritized Experience Replay (PER) mechanism into the existing TD3 framework was performed to optimize training and improve the efficiency of experience data utilization. Furthermore, the incorporation of an Intrinsic Curiosity Module (ICM) merges intrinsic with extrinsic rewards to tackle sparse reward problems and enhance exploratory behavior. Experimental evaluations using the ROS and Gazebo simulators demonstrate that the proposed method outperforms the original on various performance metrics.
35

Yu, Zhenhua, Weijia Cui, Yuxi Du, Bin Ba, and Mengjiao Quan. "Null Broadening Robust Adaptive Beamforming Algorithm Based on Power Estimation." Sensors 22, no. 18 (2022): 6984. http://dx.doi.org/10.3390/s22186984.

Abstract:
In order to solve the problem of severely degraded performance in the presence of rapidly moving sources and unstable array platforms, a null-broadening robust adaptive beamforming algorithm based on power estimation is proposed in this paper. First of all, we estimate the interference signal power according to characteristic subspace theory. Then, the correspondence between the signal power and the steering vector (SV) is obtained based on the orthogonality property, and the interference covariance matrix (ICM) is reconstructed. Finally, null broadening is carried out by setting virtual interference sources. The proposed algorithm achieves deeper nulls, lower side lobes and higher tolerance to desired-signal steering vector mismatch while maintaining low complexity. The simulation results show that the algorithm also has strong robustness.
36

Ye, Hong-Ling, Ji-Cheng Li, Bo-Shuai Yuan, Nan Wei, and Yun-Kang Sui. "Acceleration Design for Continuum Topology Optimization by Using Pix2pix Neural Network." International Journal of Applied Mechanics 13, no. 04 (2021): 2150042. http://dx.doi.org/10.1142/s1758825121500423.

Abstract:
Aiming at speeding up the process of topology optimization and generating novel high-quality topology configurations, a deep-learning-based topology optimization design for lightweight structures is investigated. First, the Independent Continuous Mapping (ICM) method is employed to establish the original configurations, including the intermediate configurations and the corresponding final configurations, and the original configurations are padded and merged to form the dataset required by the deep learning network. Then the high-dimensional mapping relationship between the intermediate configuration and the final configuration is created by using the Pix2pix neural network (Pix2pix NN), which transforms topology optimization into an image-to-image translation problem. Finally, an acceleration algorithm based on the pre-trained network is utilized to accelerate the iteration process. Numerical examples show that the coupling method is feasible and efficient in topology optimization design. The method provides a new solution for topology optimization design that shortens the iteration process and broadens the application of the ICM method.
37

Melyanovskaya, Yu L. "Contribution of the intestinal current measurement method to assessment of the efficacy of CFTR modulators in cystic fibrosis." PULMONOLOGIYA 34, no. 2 (2024): 283–88. http://dx.doi.org/10.18093/0869-0189-2024-34-2-283-288.

Abstract:
Cystic fibrosis (CF) is a disease caused by pathogenic variants of the CFTR gene. In the last decade, the treatment algorithm has entered a new era as several drugs have become available that restore the function of the CFTR chloride channel and are called CFTR modulators. The efficacy and safety of targeted drugs in cystic fibrosis need to be further investigated using additional assessment methods. The aim of this study was to investigate the role of intestinal current measurement (ICM) in assessing the efficacy of targeted therapy for cystic fibrosis. Methods. The efficacy of CFTR modulator therapy was evaluated in 15 patients, of which 10 were children and 5 were adults. In addition to the ICM method, patients’ clinical parameters, sweat test, and pulmonary function were also evaluated according to clinical guidelines. Results. Patients with the genotypes 2143delT/7121G>T and G542X/R785X had no restoration of chloride channel function with elexacaftor + tezacaftor + ivacaftor therapy, and patients with the L467F;F508del genotype had none with lumacaftor + ivacaftor therapy. In patients with the F508del/F508del, N1303K/G461E and N1303K/3321delG genotypes, improvements were noted in terms of the restoration of CFTR channel function during elexacaftor + tezacaftor + ivacaftor therapy, and in patients with the F508del/F508del genotype during tezacaftor + ivacaftor and lumacaftor + ivacaftor therapy. Conclusion. Restoring the function of the epithelial chloride channel (CFTR) is the basis for increasing life expectancy in CF. The crucial role of the ICM method in determining the efficacy of CFTR modulators is shown.
38

Lin, Sheng-Chieh, Yuanyuan Su, Fabio Gastaldello, and Nathan Jacobs. "Semisupervised Learning for Detecting Inverse Compton Emission in Galaxy Clusters." Astrophysical Journal 977, no. 2 (2024): 176. https://doi.org/10.3847/1538-4357/ad8888.

Abstract:
Inverse Compton (IC) emission associated with the nonthermal component of the intracluster medium (ICM) has been a long-sought phenomenon in cluster physics. Traditional spectral fitting often suffers from the degeneracy between the two-temperature thermal (2T) spectrum and the one-temperature plus IC power-law (1T+IC) spectrum. We present a semisupervised deep-learning approach to search for IC emission in galaxy clusters. We employ a conditional autoencoder (CAE), which is based on an autoencoder with latent representations trained to constrain the thermal parameters of the ICM. The algorithm is trained and tested using synthetic NuSTAR X-ray spectra with instrumental and astrophysical backgrounds included. The training data set only contains 2T spectra, which is more common than 1T+IC spectra. Anomaly detection is performed on the validation and test data sets consisting of 2T spectra as the normal set and 1T+IC spectra as anomalies. With a threshold anomaly score, chosen based on cross validation, our algorithm is able to identify spectra that contain an IC component in the test data set, with a balanced accuracy (BAcc) of 0.64, which outperforms traditional spectral fitting (BAcc = 0.55) and ordinary autoencoders (BAcc = 0.55). Traditional spectral fitting is better at identifying IC cases among true IC spectra (a better recall), while IC predictions made by CAE have a higher chance of being true IC cases (a better precision), demonstrating that they mutually complement each other.
39

Lalaoui, Lahouaoui, and Abdelhak Djaalab. "Markov random field model and expectation of maximization for images segmentation." Indonesian Journal of Electrical Engineering and Computer Science 29, no. 2 (2023): 772–79. https://doi.org/10.11591/ijeecs.v29.i2.pp772-779.

Full text
Abstract:
Image segmentation is a significant issue in image processing. Among the various models and approaches that have been developed, the Markov random field (MRF) model is one of the most commonly used statistical techniques. In this study, an MRF approach based on a modified expectation-maximization (EMM) model is proposed. Local optimization relies on the modified EM method for parameter estimation and on the iterated conditional modes (ICM) method for finding the solution given a fixed set of these parameters. Selecting the combination strategy requires a comparative study to identify the best result. The effectiveness of the proposed method is demonstrated experimentally: the segmentation algorithm is applied to different types of images and achieves the best criterion values in comparison with the standard EM method and other approaches.
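The ICM step that this paper pairs with EM can be sketched directly: given current class means, variances, and a Potts smoothness weight, each pixel is assigned the label that minimises its local energy, and the sweep repeats until the labelling stops changing. This is a generic iterated conditional modes loop with a Gaussian likelihood, not the authors' modified-EM pipeline; the Potts weight beta, the 4-neighbourhood, and the stopping rule are arbitrary choices.

```python
import numpy as np

def icm_segment(image, means, variances, beta=1.0, n_sweeps=10):
    """Iterated conditional modes for pixel labelling with a Potts prior.

    image            : 2-D array of grey values
    means, variances : per-class Gaussian parameters (e.g. estimated by EM)
    beta             : weight of the Potts smoothness term
    """
    k = len(means)
    h, w = image.shape
    # data term: negative Gaussian log-likelihood per class (up to a constant)
    data = np.stack([(image - m) ** 2 / (2 * v) + 0.5 * np.log(v)
                     for m, v in zip(means, variances)], axis=-1)    # (h, w, k)
    labels = data.argmin(axis=-1)                                    # maximum-likelihood start

    for _ in range(n_sweeps):
        changed = 0
        for i in range(h):
            for j in range(w):
                best, best_e = labels[i, j], np.inf
                for c in range(k):
                    # Potts term: count 4-neighbours carrying a different label
                    disagree = sum(
                        labels[i + di, j + dj] != c
                        for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1))
                        if 0 <= i + di < h and 0 <= j + dj < w)
                    e = data[i, j, c] + beta * disagree
                    if e < best_e:
                        best, best_e = c, e
                if best != labels[i, j]:
                    labels[i, j] = best
                    changed += 1
        if changed == 0:          # converged: no pixel changed in a full sweep
            break
    return labels
```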
APA, Harvard, Vancouver, ISO, and other styles
40

Ouyang, S., K. Fan, H. Wang, and Z. Wang. "CHANGE DETECTION OF REMOTE SENSING IMAGES BY DT-CWT AND MRF." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-1/W1 (May 30, 2017): 3–10. http://dx.doi.org/10.5194/isprs-archives-xlii-1-w1-3-2017.

Full text
Abstract:
To address the significant loss of high-frequency information during noise reduction and the assumption of pixel independence in change detection of multi-scale remote sensing images, an unsupervised algorithm is proposed that combines the Dual-Tree Complex Wavelet Transform (DT-CWT) with a Markov random field (MRF) model. The method first performs a multi-scale decomposition of the difference image with the DT-CWT and extracts the change characteristics in the high-frequency regions using an MRF-based segmentation algorithm. It then estimates the final maximum a posteriori (MAP) solution with an iterated conditional modes (ICM) segmentation algorithm based on fuzzy c-means (FCM), after reconstructing the high-frequency and low-frequency sub-bands of each layer. Finally, the segmentation results of the individual layers are fused with the proposed fusion rule to obtain the mask of the final change detection result. The experimental results show that the proposed method achieves higher precision and strong robustness.
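The FCM step that seeds the ICM labelling can be sketched on its own: fuzzy c-means with two clusters on the absolute difference image yields soft change/no-change memberships whose hard assignment initialises the MRF segmentation. This is textbook FCM on pixel intensities, not the paper's DT-CWT pipeline; the fuzzifier m = 2 and the tolerance are conventional defaults.

```python
import numpy as np

def fcm(values, n_clusters=2, m=2.0, n_iter=100, tol=1e-5, seed=0):
    """Fuzzy c-means on a 1-D array of values (e.g. a flattened difference image)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(values, dtype=float).ravel()[:, None]      # (n, 1)
    u = rng.random((x.shape[0], n_clusters))
    u /= u.sum(axis=1, keepdims=True)                          # random initial memberships
    for _ in range(n_iter):
        um = u ** m
        centers = (um.T @ x) / um.sum(axis=0)[:, None]         # (c, 1) weighted means
        dist = np.abs(x - centers.T) + 1e-12                   # (n, c) distances to centres
        new_u = 1.0 / (dist ** (2.0 / (m - 1.0)))
        new_u /= new_u.sum(axis=1, keepdims=True)              # standard FCM membership update
        if np.max(np.abs(new_u - u)) < tol:
            u = new_u
            break
        u = new_u
    return u, centers.ravel()

# usage: hard change map from the memberships, used to initialise ICM
# diff = np.abs(image_t2 - image_t1)
# u, centers = fcm(diff)
# change_cluster = int(np.argmax(centers))        # cluster with the larger centre = "change"
# init_labels = (u.argmax(axis=1) == change_cluster).reshape(diff.shape)
```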
APA, Harvard, Vancouver, ISO, and other styles
41

Retraint, Florent, Françoise Peyrin, and Jean Marc Dinten. "Three-dimensional regularized binary image reconstruction from three two-dimensional projections using a randomized ICM algorithm." International Journal of Imaging Systems and Technology 9, no. 2-3 (1998): 135–46. http://dx.doi.org/10.1002/(sici)1098-1098(1998)9:2/3<135::aid-ima11>3.0.co;2-w.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Hou, Yi Min, Chun Ting Zhang, Xiao Yan Lai, and Jian Ming Di. "Research on Statistic-Based Image Segmentation Method." Advanced Materials Research 461 (February 2012): 575–78. http://dx.doi.org/10.4028/www.scientific.net/amr.461.575.

Full text
Abstract:
This paper investigates statistics-based image segmentation methods. For multi-class segmentation, K-means segmentation is employed in the first part. The Otsu thresholding method is discussed in the second part. To address image noise, a method based on the Markov random field (MRF) is proposed in the third part, with the ICM optimization algorithm used in the MRF segmentation procedure. In the experimental part, the methods are compared with each other, and the results show that the MRF-based method is more effective at removing noise from the images.
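Of the methods surveyed, Otsu's threshold has the most compact description: choose the grey level that maximises the between-class variance of the resulting foreground/background split. A minimal NumPy version for an 8-bit image is sketched below; it is a generic implementation, not the paper's code.

```python
import numpy as np

def otsu_threshold(image):
    """Return the grey level (0-255) that maximises between-class variance."""
    hist = np.bincount(image.ravel().astype(np.uint8), minlength=256).astype(float)
    p = hist / hist.sum()                      # grey-level probabilities
    omega = np.cumsum(p)                       # class-0 probability up to threshold t
    mu = np.cumsum(p * np.arange(256))         # cumulative mean up to t
    mu_total = mu[-1]
    # between-class variance for every candidate threshold t
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_total * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b = np.nan_to_num(sigma_b)           # undefined at the extremes
    return int(np.argmax(sigma_b))

# usage: binary = image > otsu_threshold(image)
```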
APA, Harvard, Vancouver, ISO, and other styles
43

Duan, Hai Jun, Guang Min Wu, Dan Liu, John D. Mai, and Jian Ming Chen. "Influence of Clique Potential Parameters on Classification Using Bayesian MRF Model for Remote Sensing Image in Dali Erhai Basin." Advanced Materials Research 658 (January 2013): 508–12. http://dx.doi.org/10.4028/www.scientific.net/amr.658.508.

Full text
Abstract:
Image classification of remote sensing data is an important topic and a long-term task in many applications [1]. The Markov random field (MRF) has clear advantages in processing contextual information [2]. Because the Bayesian approach enables the incorporation of a prior model and a likelihood distribution, this paper formulates a Bayesian-MRF classification model within a MAP-ICM framework. It uses a Potts model for the label field and assumes a Gaussian distribution for the observation field. According to the maximum a posteriori (MAP) criterion, each new class label is obtained by minimizing the energy with the Iterated Conditional Modes (ICM) algorithm, and classification is then carried out with the Bayesian-MRF model. Experimental results show that: (1) the clique potential parameter strongly affects classification; when it is set to 0.5, the classification accuracy reaches its maximum and gives the best result for the study area of the Dali Erhai Lake basin using Landsat TM data; (2) the Bayesian MRF model has clear advantages in classifying neighbouring pixels, allowing the Shadow class to be separated from the Water class even though mountain shadows are spectrally very similar to water. In this case study, the best classification accuracy reaches 95.8%. The approach and results provide a useful reference for applications such as land use/cover classification and environmental/ecological monitoring.
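The role of the clique potential parameter is easiest to see in the posterior energy that ICM minimises. With a Gaussian observation model and a Potts label prior, the energy takes the standard MAP-MRF form shown below, written here for reference rather than copied from the paper; β is the clique potential coefficient that the study tunes (0.5 giving the best accuracy).

```latex
U(x \mid y) = \sum_{s}\left[\frac{\left(y_s - \mu_{x_s}\right)^2}{2\sigma_{x_s}^2} + \ln\sigma_{x_s}\right]
            + \beta \sum_{\langle s,t \rangle}\left(1 - \delta(x_s, x_t)\right)
```

ICM revisits each pixel s and replaces its label x_s with the class that minimises the terms involving s. A larger β penalises neighbouring-label disagreement more strongly, which smooths the classification map but risks erasing thin or small classes, so an intermediate value such as 0.5 can give the best accuracy.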
APA, Harvard, Vancouver, ISO, and other styles
44

Ye, Hongling, Ning Chen, Yunkang Sui, and Jun Tie. "Three-Dimensional Dynamic Topology Optimization with Frequency Constraints Using Composite Exponential Function and ICM Method." Mathematical Problems in Engineering 2015 (2015): 1–10. http://dx.doi.org/10.1155/2015/491084.

Full text
Abstract:
The dynamic topology optimization of three-dimensional continuum structures subject to frequency constraints is investigated using the Independent Continuous Mapping (ICM) method. The composite exponential function (CEF) is selected as the filter function that identifies the design variables and implements their transformation from "discrete" to "continuous" and back to "discrete." Explicit formulations of the frequency constraints are derived from the filter functions and a first-order Taylor series expansion, and an improved optimization model is formulated using the CEF and these explicit frequency constraints. A dual sequential quadratic programming (DSQP) algorithm is used to solve the model. The program is developed on the MSC Patran & Nastran platform. Finally, numerical examples demonstrate the validity and applicability of the proposed method.
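The filter function is the heart of the ICM formulation: it maps the continuous topological variable t ∈ (0, 1] to element properties so the design can move between "discrete" and "continuous". One form of the composite exponential function quoted in the ICM literature is f(t) = (e^{t/γ} − 1)/(e^{1/γ} − 1); the exact expression and the value of γ used in this paper should be checked against the original, so treat the sketch below as an assumption.

```python
import numpy as np

def cef(t, gamma=0.5):
    """Composite-exponential filter (assumed form): maps t in (0, 1] to (0, 1]."""
    t = np.asarray(t, dtype=float)
    return (np.exp(t / gamma) - 1.0) / (np.exp(1.0 / gamma) - 1.0)

def cef_inverse(f, gamma=0.5):
    """Recover the topological variable from a filtered value (used when mapping back to 'discrete')."""
    f = np.asarray(f, dtype=float)
    return gamma * np.log(1.0 + f * (np.exp(1.0 / gamma) - 1.0))

# f(1) == 1 and f(t) -> 0 as t -> 0, so thresholding f near 0 and 1
# recovers an (approximately) 0-1 design at the end of the optimization.
```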
APA, Harvard, Vancouver, ISO, and other styles
45

Rosier, Arnaud, Eliot Crespin, Arnaud Lazarus, et al. "B-PO04-037 A NOVEL PROPRIETARY ALGORITHM REDUCES THE FALSE POSITIVE RATE OF MEDTRONIC LNQ11 ICM DEVICES BY 79%." Heart Rhythm 18, no. 8 (2021): S294. http://dx.doi.org/10.1016/j.hrthm.2021.06.733.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Zhu, Sisi, Zaiming Geng, Yingjuan Xie, et al. "New Underwater Image Enhancement Algorithm Based on Improved U-Net." Water 17, no. 6 (2025): 808. https://doi.org/10.3390/w17060808.

Full text
Abstract:
(1) Objective: As light propagates through water, it undergoes significant attenuation and scattering, causing underwater images to experience color distortion and exhibit a bluish or greenish tint. Additionally, suspended particles in the water further degrade image quality. This paper proposes an improved U-Net model for underwater image enhancement to generate high-quality images. (2) Method: Instead of incorporating additional complex modules into enhancement networks, we opted to simplify the classic U-Net architecture. Specifically, we replaced the standard convolutions in U-Net with our self-designed efficient basic block, which integrates a simplified channel attention mechanism. Moreover, we employed Layer Normalization to enhance the capability of training with a small number of samples and used the GELU activation function to achieve additional benefits in image denoising. Furthermore, we introduced the SK fusion module into the network to aggregate feature information, replacing traditional concatenation operations. In the experimental section, we used the "Underwater ImageNet" dataset from "Enhancing Underwater Visual Perception (EUVP)" for training and testing. EUVP, established by Islam et al., is a large-scale dataset comprising paired images (high-quality clear images and low-quality blurry images) as well as unpaired underwater images. (3) Results: We compared our proposed method with several high-performing traditional algorithms and deep learning-based methods. The traditional algorithms include He, UDCP, ICM, and ULAP, while the deep learning-based methods include CycleGAN, UGAN, UGAN-P, and FUnIE-GAN. The results demonstrate that our algorithm exhibits outstanding competitiveness on the Underwater ImageNet dataset. Compared to the currently optimal lightweight model, FUnIE-GAN, our method reduces the number of parameters by 0.969 times and decreases Floating-Point Operations Per Second (FLOPS) by more than half. In terms of image quality, our approach achieves a minimal UCIQE reduction of only 0.008 while improving the NIQE by 0.019 compared to state-of-the-art (SOTA) methods. Finally, extensive ablation experiments validate the feasibility of our designed network. (4) Conclusions: The underwater image enhancement algorithm proposed in this paper significantly reduces model size and accelerates inference speed while maintaining high processing performance, demonstrating strong potential for practical applications.
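The abstract's efficient basic block (standard convolutions replaced, Layer Normalization, GELU, and a simplified channel attention that rescales channels from a global average) can be approximated in a few PyTorch modules. This is a sketch of one plausible structure, not the authors' exact block: the residual connection, the single 3x3 convolution, and the use of GroupNorm(1, C) as a layer norm over (C, H, W) are assumptions.

```python
import torch
import torch.nn as nn

class SimpleChannelAttention(nn.Module):
    """Global average pool -> 1x1 conv -> per-channel rescaling (no sigmoid gating)."""
    def __init__(self, channels):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.proj = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x):
        return x * self.proj(self.pool(x))

class EfficientBlock(nn.Module):
    """Layer norm + 3x3 conv + GELU + simplified channel attention, with a residual path."""
    def __init__(self, channels):
        super().__init__()
        self.norm = nn.GroupNorm(1, channels)      # layer normalization over (C, H, W)
        self.conv = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.act = nn.GELU()
        self.attn = SimpleChannelAttention(channels)

    def forward(self, x):
        return x + self.attn(self.act(self.conv(self.norm(x))))

# quick shape check
block = EfficientBlock(32)
print(block(torch.randn(1, 32, 64, 64)).shape)      # torch.Size([1, 32, 64, 64])
```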
APA, Harvard, Vancouver, ISO, and other styles
47

Karvonen, Juha. "U-net with ResNet-34 backbone for dual-polarized C-band baltic sea-ice SAR segmentation." Annals of Glaciology, November 6, 2024, 1–15. http://dx.doi.org/10.1017/aog.2024.33.

Full text
Abstract:
In this study, the U-net semantic segmentation network with a ResNet-34 backbone (a residual neural network with 34 layers) is applied to C-band sea-ice SAR imagery over the Baltic Sea. Sentinel-1 Extra Wide Swath mode HH/HV-polarized SAR data acquired during the 2018–2019 winter season, together with corresponding segments derived from the daily Baltic Sea ice charts, were used to train the segmentation algorithm. C-band SAR image mosaics of the 2020–2021 winter season were then used to evaluate the segmentation. The major objective was to study the suitability of semantic segmentation of SAR imagery for automated SAR segmentation and to find a potential replacement for the outdated iterated conditional modes (ICM) algorithm currently in operational use. Compared with the daily Baltic Sea ice charts, the operational ICM segmentation and visual interpretation, the results were encouraging from an operational point of view: open water areas were located very well, the oversegmentation produced by ICM was significantly reduced, and the correspondence between the ice chart polygons and the segmentation results was reasonably good. Based on these results, the studied method is a potential candidate to replace the operational ICM SAR segmentation used in the Copernicus Marine Service automated sea-ice products at the Finnish Meteorological Institute.
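The network setup itself (a U-net decoder on a ResNet-34 encoder, with two input channels for the HH and HV polarizations) is straightforward to reproduce; one common way is the segmentation_models_pytorch package, shown below. The paper does not say which implementation it used, and the number of ice-chart classes is a placeholder.

```python
import torch
import segmentation_models_pytorch as smp

N_CLASSES = 7   # placeholder: number of ice-chart segment classes

model = smp.Unet(
    encoder_name="resnet34",        # ResNet-34 backbone
    encoder_weights="imagenet",     # or None to train from scratch
    in_channels=2,                  # HH and HV polarization channels
    classes=N_CLASSES,
)

# one dual-polarized SAR patch, batch of 1 (spatial size must be divisible by 32)
logits = model(torch.randn(1, 2, 512, 512))
print(logits.shape)                 # torch.Size([1, N_CLASSES, 512, 512])
```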
APA, Harvard, Vancouver, ISO, and other styles
48

Kennedy, L. P., R. K. Reddy, K. M. Health, et al. "Accurate respiratory rate determination using a novel insertable cardiac monitor algorithm: implications for diagnostic and monitoring potentials beyond heart rhythm disorders." European Heart Journal 43, Supplement_2 (2022). http://dx.doi.org/10.1093/eurheartj/ehac544.412.

Full text
Abstract:
Background Respiratory rate (RR) is a critical vital sign that is highly relevant in patients with cardiopulmonary disorders. The implantable cardiac monitor (ICM) provides useful data pertaining to heart rhythm, but little is known so far about its potential diagnostic value in the direct measurement of respiratory parameters. Adding respiration information could improve understanding of the overall health status of heart rhythm patients with an ICM. Objective The primary objective of this study was to evaluate the accuracy of the RR detected by an existing implanted ICM as compared with gold-standard polysomnography (PSG) measurement of respiration. Methods This prospective single-center study enrolled 25 patients (17 male, 62.7±12.2 years) with an implanted ICM and suspected sleep-disordered breathing. The ICM was custom configured with research software to collect respiration data (Fig. 1). Simultaneous, time-synchronized PSG and ICM data were evaluated in two-minute epochs episodically during the night. The offline novel prototype RR algorithm was evaluated on episodes collected by the ICM and compared against expert manually adjudicated RR from PSG by two separate investigators. Interobserver agreement was assessed using the intraclass correlation coefficient (ICC), and the performance of the novel algorithm was assessed using Bland-Altman analysis with 95% limits of agreement (LOA). Results A total of 495 epochs were graded by two independent observers, with a good ICC of 0.83 (95% C.I. 0.79–0.86). Epochs free of severe sleep-disordered breathing/apnea (n=363) were included in this analysis, 106 of which contained periods of hypopnea. The development and validation datasets comprised 235 and 128 epochs, respectively. The mean RR was 14.99±3 breaths per minute in the development data and 13.44±2 breaths per minute in the test data. Using Bland-Altman analysis, the bias of the novel prototype algorithm was only −0.13 and +0.32 breaths per minute, with 95% LOA of −2.24 to +1.98 and −2.56 to +3.19 breaths per minute (Fig. 2), in the development and test datasets, respectively. Conclusion The novel prototype algorithm applied to the ICM data provided accurate determination of respiratory rate as compared with gold-standard PSG data. The capability to determine respiratory rate accurately from an existing ICM platform demonstrates the potential to extend the diagnostic power of ICMs beyond heart rhythm abnormalities to a broad range of comorbidities, including breathing disorders and heart failure. Funding Acknowledgement Type of funding sources: None.
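The Bland-Altman summary quoted above (a bias plus 95% limits of agreement) reduces to a few lines of arithmetic on the paired per-epoch rates. The sketch below is generic; the input arrays are placeholders for the ICM-derived and PSG-adjudicated respiratory rates.

```python
import numpy as np

def bland_altman(alg_rr, ref_rr):
    """Bias and 95% limits of agreement between algorithm and reference RR (breaths/min)."""
    diff = np.asarray(alg_rr, dtype=float) - np.asarray(ref_rr, dtype=float)
    bias = diff.mean()
    spread = 1.96 * diff.std(ddof=1)          # 95% limits assume roughly normal differences
    return bias, (bias - spread, bias + spread)

# usage with placeholder per-epoch arrays:
# bias, (lo, hi) = bland_altman(icm_rr_per_epoch, psg_rr_per_epoch)
```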
APA, Harvard, Vancouver, ISO, and other styles
49

Crespin, Eliot, Arnaud Rosier, Issam Ibnouhsein, et al. "Improved Diagnostic Performance of Insertable Cardiac Monitors by an Artificial Intelligence-Based Algorithm." Europace, January 3, 2024. http://dx.doi.org/10.1093/europace/euad375.

Full text
Abstract:
Background The increasing use of insertable cardiac monitors (ICM) produces a high rate of false positive (FP) diagnoses, and their verification results in a high workload for caregivers. Objective We evaluated the performance of an artificial intelligence (AI)-based ILR ECG Analyzer™ (ILR-ECG-A), a machine learning algorithm that reclassifies ICM-transmitted events to minimize the rate of FP diagnoses while preserving device sensitivity. Methods We selected 546 recipients of ICM followed on the Implicity™ monitoring platform. To avoid clusterization, a single episode per ICM abnormal diagnosis (e.g. asystole, bradycardia, atrial tachycardia (AT)/atrial fibrillation (AF), ventricular tachycardia, artifact) was selected per patient and analyzed by the ILR-ECG-A, applying the same diagnoses as the ICM. All episodes were reviewed by an adjudication committee (AC) and the results compared. Results Among 879 episodes classified as abnormal by the ICM, 80 (9.1%) were adjudicated as "artifacts", 283 (32.2%) as FP, and 516 (58.7%) as "abnormal" by the AC. The algorithm reclassified 215 of the 283 FP as normal (76.0%) and confirmed 509 of the 516 episodes as abnormal (98.6%). Seven undiagnosed false negatives were adjudicated as AT or non-specific abnormality. The overall diagnostic specificity was 76.0% and the sensitivity 98.6%. Conclusion The new AI-based ILR-ECG-A lowered the rate of FP ICM diagnoses significantly while retaining a > 98% sensitivity. This will likely considerably alleviate the clinical burden represented by the review of ICM events.
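The headline specificity and sensitivity follow directly from the adjudicated and reclassified labels. A minimal sketch of that bookkeeping is given below; the boolean inputs are placeholders for the adjudication-committee decisions and the algorithm's output.

```python
def sensitivity_specificity(adjudicated_abnormal, algorithm_abnormal):
    """Both inputs are sequences of booleans, one entry per reviewed episode."""
    pairs = list(zip(adjudicated_abnormal, algorithm_abnormal))
    tp = sum(a and p for a, p in pairs)              # true abnormal kept as abnormal
    fn = sum(a and not p for a, p in pairs)          # true abnormal reclassified as normal
    tn = sum((not a) and (not p) for a, p in pairs)  # false positive correctly reclassified
    fp = sum((not a) and p for a, p in pairs)        # false positive left as abnormal
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity
```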
APA, Harvard, Vancouver, ISO, and other styles
50

Liu, Ying. "Autonomous Control Method for Object Grasping of Logistics Sorting Manipulator Considering Changes in Lighting Environment." International Journal of Vehicle Structures and Systems 15, no. 2 (2023). http://dx.doi.org/10.4273/ijvss.15.2.06.

Full text
Abstract:
Autonomous grasping is key to intelligent logistics sorting manipulators. Since current logistics sorting manipulators mostly use visual sensors to identify objects, they are highly vulnerable to changes in the lighting environment. This study therefore considers the influence of complex lighting environments on the autonomous grasping of logistics sorting manipulators and verifies the effectiveness of the SAC-AE-ICM algorithm through simulations and experiments. The experimental results show that the SAC-AE-ICM algorithm converges quickly across many experiments and can achieve global optimization. In automatic grasping, the success rate of the SAC-AE-ICM algorithm reaches 90%, which is 25% higher than that of the method without ICM, with better convergence. The success rate is as high as 77% for unknown or irregular targets. In practical experiments, SAC-AE-ICM grasps effectively under good lighting conditions, while under low-light conditions the probability of grasping unknown targets is about 72.5%. Overall, the success rates for single-object and multi-object grasping are 88% and 85%, respectively.
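If, as the name suggests, the "ICM" in SAC-AE-ICM denotes the Intrinsic Curiosity Module used in reinforcement learning, the core idea is a forward model that predicts the next latent state, with its prediction error added to the environment reward to drive exploration. The sketch below shows only that intrinsic-reward idea in PyTorch; the encoder, layer sizes, and reward scale are placeholders, not the authors' architecture.

```python
import torch
import torch.nn as nn

class ForwardModel(nn.Module):
    """Predicts the next latent state from the current latent state and the action."""
    def __init__(self, latent_dim, action_dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, latent_dim),
        )

    def forward(self, z, a):
        return self.net(torch.cat([z, a], dim=-1))

def intrinsic_reward(forward_model, z, a, z_next, scale=0.1):
    """Curiosity bonus: scaled prediction error of the forward model in latent space."""
    with torch.no_grad():
        pred = forward_model(z, a)
        return scale * 0.5 * (pred - z_next).pow(2).sum(dim=-1)

# usage inside the RL loop (encoder() is a placeholder for the autoencoder's encoder):
# r_total = r_env + intrinsic_reward(fm, encoder(s), a, encoder(s_next))
```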
APA, Harvard, Vancouver, ISO, and other styles
