
Journal articles on the topic 'FD4 algorithm'



Consult the top 50 journal articles for your research on the topic 'FD4 algorithm.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles in a wide variety of disciplines and organise your bibliography correctly.

1

Bassa, C. G., J. W. Romein, B. Veenboer, S. van der Vlugt, and S. J. Wijnholds. "Fourier-domain dedispersion." Astronomy & Astrophysics 657 (January 2022): A46. http://dx.doi.org/10.1051/0004-6361/202142099.

Abstract:
We present and implement the concept of the Fourier-domain dedispersion (FDD) algorithm, a brute-force incoherent dedispersion algorithm. This algorithm corrects the frequency-dependent dispersion delays in the arrival time of radio emission from sources such as radio pulsars and fast radio bursts. Where traditional time-domain dedispersion algorithms correct time delays using time shifts, the FDD algorithm performs these shifts by applying phase rotations to the Fourier-transformed time-series data. Incoherent dedispersion to many trial dispersion measures (DMs) is compute-, memory-bandwidth-, and input-output-intensive, and dedispersion algorithms have been implemented on graphics processing units (GPUs) to achieve high computational performance. However, time-domain dedispersion algorithms have low arithmetic intensity and are therefore often memory-bandwidth-limited. The FDD algorithm avoids this limitation and is compute-limited, providing a path to exploit the potential of current and upcoming generations of GPUs. We implement the FDD algorithm as an extension of the DEDISP time-domain dedispersion software. We compare the performance and energy-to-completion of the FDD implementation using an NVIDIA Titan RTX GPU against both the standard version and an optimized version of DEDISP. The optimized implementation already provides a factor of 1.5 to 2 speedup at only 66% of the energy utilization compared to the original algorithm. We find that the FDD algorithm outperforms the optimized time-domain dedispersion algorithm by another 20% in performance and 5% in energy-to-completion when a large number of DMs (≳512) are required. The FDD algorithm provides additional performance improvements for fast-Fourier-transform-based periodicity surveys of radio pulsars, as the Fourier transform back to the time domain can be omitted. We expect that this computational performance gain will further improve in the future since the Fourier-domain dedispersion algorithm better matches the trends in technological advancements of GPU development.
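
The central idea, replacing per-channel time shifts with linear phase ramps applied to the Fourier-transformed time series, can be illustrated in a few lines of NumPy. This is a minimal sketch of Fourier-domain dedispersion in general, not the paper's optimized GPU implementation; the function name, the value of the dispersion constant, and the phase-sign convention are illustrative assumptions.

```python
import numpy as np

K_DM = 4.148808e3  # dispersion constant, MHz^2 s per (pc cm^-3); assumed value

def fdd_dedisperse(data, freqs_mhz, tsamp, dms):
    """Incoherent dedispersion via Fourier-domain phase rotations.

    data:      (nchan, nsamp) filterbank intensities
    freqs_mhz: (nchan,) channel centre frequencies in MHz
    tsamp:     sampling time in seconds
    dms:       (ndm,) trial dispersion measures in pc cm^-3
    returns:   (ndm, nsamp) dedispersed time series
    """
    nchan, nsamp = data.shape
    spectra = np.fft.rfft(data, axis=1)          # one FFT per channel
    fft_freqs = np.fft.rfftfreq(nsamp, d=tsamp)  # Fourier frequencies in Hz
    f_ref = freqs_mhz.max()                      # align to the highest channel
    out = np.empty((len(dms), nsamp))
    for i, dm in enumerate(dms):
        # dispersion delay of each channel relative to the reference channel
        delays = K_DM * dm * (freqs_mhz**-2 - f_ref**-2)   # seconds
        # a time shift is a linear phase ramp in the Fourier domain
        # (sign depends on the FFT convention; flip if shifts go the wrong way)
        phasors = np.exp(2j * np.pi * fft_freqs[None, :] * delays[:, None])
        out[i] = np.fft.irfft((spectra * phasors).sum(axis=0), n=nsamp)
    return out
```

As the abstract notes, the final inverse FFT can be dropped when the dedispersed series feeds an FFT-based periodicity search, which is where the extra gain for pulsar surveys comes from.
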
2

Liu, Qinghua, Kai Ding, Bingsen Wu, and Quanmin Xie. "Frequency Diverse Array Target Localization Based on IPSO-BP." International Journal of Antennas and Propagation 2020 (August 27, 2020): 1–8. http://dx.doi.org/10.1155/2020/2501731.

Abstract:
For traditional target localization algorithms of the frequency diverse array (FDA), there are problems such as angle and distance coupling in the single-frequency receiving FDA mode, a large amount of calculation, and weak adaptability. This paper introduces a learning- and prediction-based target localization method that applies a BP neural network to the FDA, forming the FDA-IPSO-BP neural network algorithm. An improved particle swarm optimization (IPSO) algorithm with nonlinear weights is developed to optimize the neural network weights and biases, preventing the BP neural network from easily falling into local minima. In addition, the decoupling of angle and distance with a single frequency increment is solved. Simulation experiments show that the proposed algorithm achieves better target localization and convergence speed than the FDA-BP and FDA-MUSIC algorithms.
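
The abstract does not state the exact nonlinear weight schedule, so the sketch below uses a common quadratically decaying inertia weight as a stand-in; applied to a flattened vector of BP-network weights and biases, it conveys the general IPSO idea, not the authors' exact formulation.

```python
import numpy as np

def ipso_minimize(loss, dim, n_particles=30, iters=200,
                  w_max=0.9, w_min=0.4, c1=2.0, c2=2.0, seed=None):
    """PSO with a nonlinear (quadratically decaying) inertia weight.

    loss: callable mapping a flat parameter vector (e.g. BP network
          weights and biases) to a scalar training error.
    """
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1.0, 1.0, (n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_f = np.array([loss(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()
    for t in range(iters):
        # nonlinear inertia: large early (exploration), small late (exploitation)
        w = w_min + (w_max - w_min) * (1.0 - t / iters) ** 2
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        f = np.array([loss(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[pbest_f.argmin()].copy()
    return g, float(pbest_f.min())
```
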
3

Narasimhadhan, A. V., and Kasi Rajgopal. "FDK-Type Algorithms with No Backprojection Weight for Circular and Helical Scan CT." International Journal of Biomedical Imaging 2012 (2012): 1–12. http://dx.doi.org/10.1155/2012/969432.

Abstract:
We develop two Feldkamp-type reconstruction algorithms with no backprojection weight for circular and helical trajectories with planar detector geometry. Advances in solid-state electronic detector technologies lend importance to CT systems with equispaced linear array, planar (flat panel) detectors and to the corresponding algorithms. We derive two exact Hilbert filtered backprojection (FBP) reconstruction algorithms with no backprojection weight for the 2D fan-beam equispaced linear array detector geometry (the complement of the equiangular curved array detector). Based on these algorithms, Feldkamp-type algorithms with no backprojection weight for 3D reconstruction are developed using the standard heuristic extension of the divergent-beam FBP algorithm. The simulation results show that the axial intensity drop in images reconstructed using the FDK algorithms with no backprojection weight on a circular trajectory is similar to that obtained with Hu's and the T-FDK algorithms. Further, we present efficient algorithms to reduce the axial intensity drop encountered in standard FDK reconstructions in circular cone-beam CT. The proposed algorithms consist mainly of two steps: reconstruction of the object using the FDK algorithm with no backprojection weight, and estimation of the missing term. The efficient algorithms are compared with the FDK algorithm, Hu's algorithm, T-FDK, and Zhu et al.'s algorithm in terms of axial intensity drop and noise. Simulation shows that the efficient algorithms give similar performance in axial intensity drop to Zhu et al.'s algorithm, while one of them outperforms Zhu et al.'s algorithm in terms of computational complexity.
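
For orientation, the sketch below shows the filtering stage of a Feldkamp-type reconstruction for one planar-detector cone-beam projection: cosine pre-weighting followed by row-wise ramp filtering. The backprojection stage, which is where the "no backprojection weight" variants differ from standard FDK by omitting the per-voxel distance weight, is deliberately left out, and all parameter names are illustrative.

```python
import numpy as np

def fdk_filter_projection(proj, du, dv, dsd):
    """Pre-weight and ramp-filter one cone-beam projection.

    proj: (nv, nu) projection on a planar detector
    du, dv: detector pixel pitches; dsd: source-to-detector distance
    """
    nv, nu = proj.shape
    u = (np.arange(nu) - (nu - 1) / 2) * du
    v = (np.arange(nv) - (nv - 1) / 2) * dv
    # FDK cosine pre-weight: D / sqrt(D^2 + u^2 + v^2)
    w = dsd / np.sqrt(dsd**2 + u[None, :]**2 + v[:, None]**2)
    weighted = proj * w
    # ramp (|f|) filter along detector rows, zero-padded against wrap-around
    freqs = np.fft.fftfreq(2 * nu, d=du)
    spec = np.fft.fft(weighted, n=2 * nu, axis=1)
    filtered = np.real(np.fft.ifft(spec * np.abs(freqs)[None, :], axis=1))
    return filtered[:, :nu]
```
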
4

Wu, Lei Lei, Nai Ping Cheng, and Xu Guang Liu. "Joint Synchronization of Timing and Frequency Algorithm Based on Training Symbol in SC-FDE Systems." Applied Mechanics and Materials 329 (June 2013): 461–66. http://dx.doi.org/10.4028/www.scientific.net/amm.329.461.

Abstract:
Common training-symbol-based joint timing and frequency synchronization algorithms are analyzed. Training symbols are discussed, and a new algorithm based on a training symbol with a special structure is proposed and applied to SC-FDE (single-carrier frequency-domain equalization) systems. The proposed algorithm absorbs the advantages of the common algorithms, and a comparison between the different algorithms shows that it achieves better performance.
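
The paper's special training-symbol structure is not reproduced here, but the classical metric for a training symbol made of two identical halves (the Schmidl-Cox family that such algorithms build on) is easy to sketch; the names and normalization below are assumptions.

```python
import numpy as np

def timing_and_cfo(rx, L):
    """Joint coarse timing and fractional CFO estimation from a training
    symbol whose two halves of length L are identical."""
    n = len(rx) - 2 * L
    P = np.array([np.sum(np.conj(rx[d:d + L]) * rx[d + L:d + 2 * L])
                  for d in range(n)])           # half-symbol correlation
    R = np.array([np.sum(np.abs(rx[d + L:d + 2 * L]) ** 2)
                  for d in range(n)])           # energy normalization
    M = np.abs(P) ** 2 / np.maximum(R, 1e-12) ** 2
    d_hat = int(np.argmax(M))                   # timing metric peak
    # identical halves differ only by the CFO-induced phase 2*pi*eps*L,
    # so the normalized offset (cycles per sample) follows from angle(P)
    eps_hat = np.angle(P[d_hat]) / (2 * np.pi * L)
    return d_hat, eps_hat
```
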
5

Altun, Bekir Emre, Enes Kaymaz, Mustafa Dursun, and Ugur Guvenc. "Hyper-FDB-INFO Algorithm for Optimal Placement and Sizing of FACTS Devices in Wind Power-Integrated Optimal Power Flow Problem." Energies 17, no. 23 (2024): 6087. https://doi.org/10.3390/en17236087.

Abstract:
In this study, the balance between the exploration and exploitation capabilities of the weighted mean of vectors (INFO) algorithm was first improved using the fitness-distance balance (FDB) method. The FDB-INFO algorithm was then extended with a hyper-heuristic method that creates the initial optimal population using Linear Population Reduction Success-History-based Adaptive Differential Evolution (LSHADE), yielding the novel Hyper-FDB-INFO algorithm. Finally, the developed Hyper-FDB-INFO algorithm was applied to the optimal placement and sizing of FACTS devices in the optimal power flow (OPF) problem incorporating wind energy sources; determining the placement and sizing of the FACTS devices is treated as an additional problem for minimizing the total generation cost and reducing the power losses of the power system. The experimental results showed that the Hyper-FDB-INFO algorithm is a more effective solver than the SHADE-SF, INFO, FDB-INFO, and Hyper-INFO algorithms for the OPF problem integrating wind power and FACTS devices.
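
As a rough illustration of the fitness-distance balance (FDB) idea, the sketch below scores each candidate by combining normalized fitness with normalized distance from the current best solution, using an assumed equal weighting; it is a generic FDB selector, not the authors' exact formulation.

```python
import numpy as np

def fdb_select(population, costs):
    """FDB selection for a minimization problem: favor candidates that
    are both good (low cost) and far from the incumbent best, which
    preserves diversity during the search.

    population: (n, dim) candidate solutions; costs: (n,) objective values
    """
    best = population[np.argmin(costs)]
    dist = np.linalg.norm(population - best, axis=1)
    nf = (costs.max() - costs) / (np.ptp(costs) + 1e-12)  # 1 = best cost
    nd = dist / (dist.max() + 1e-12)                      # 1 = farthest
    return int(np.argmax(0.5 * nf + 0.5 * nd))            # equal weights assumed
```
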
6

Mühlenbein, Heinz, and Thilo Mahnig. "FDA - A Scalable Evolutionary Algorithm for the Optimization of Additively Decomposed Functions." Evolutionary Computation 7, no. 4 (1999): 353–76. http://dx.doi.org/10.1162/evco.1999.7.4.353.

Abstract:
The Factorized Distribution Algorithm (FDA) is an evolutionary algorithm which combines mutation and recombination by using a distribution. The distribution is estimated from a set of selected points. In general, a discrete distribution defined for n binary variables has 2^n parameters. Therefore it is too expensive to compute. For additively decomposed discrete functions (ADFs) there exist algorithms which factor the distribution into conditional and marginal distributions. This factorization is used by FDA. The scaling of FDA is investigated theoretically and numerically. The scaling depends on the ADF structure and the specific assignment of function values. Difficult functions on a chain or a tree structure are solved in about O(n√n) operations. More standard genetic algorithms are not able to optimize these functions. FDA is not restricted to exact factorizations. It also works for approximate factorizations as is shown for a circle and a grid structure. By using results from Bayes networks, FDA is extended to LFDA. LFDA computes an approximate factorization using only the data, not the ADF structure. The scaling of LFDA is compared to the scaling of FDA.
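
A minimal sketch of one FDA generation for a chain-structured ADF follows: the distribution of the selected points is factorized as p(x_1) * prod_i p(x_i | x_(i-1)) and then resampled. Truncation selection and Laplace smoothing are illustrative choices, not details fixed by the paper.

```python
import numpy as np

def fda_chain_step(pop, fitness, trunc=0.5, seed=None):
    """One generation of a Factorized Distribution Algorithm on binary
    strings whose fitness decomposes along a chain (maximization)."""
    rng = np.random.default_rng(seed)
    N, n = pop.shape
    sel = pop[np.argsort(fitness)[-int(N * trunc):]]      # truncation selection
    eps = 1.0                                             # Laplace smoothing
    p1 = (sel[:, 0].sum() + eps) / (len(sel) + 2 * eps)   # p(x_1 = 1)
    cond = np.empty((n - 1, 2))                           # p(x_i = 1 | x_{i-1} = a)
    for i in range(1, n):
        for a in (0, 1):
            mask = sel[:, i - 1] == a
            cond[i - 1, a] = (sel[mask, i].sum() + eps) / (mask.sum() + 2 * eps)
    new = np.empty((N, n), dtype=int)                     # sample the factorization
    new[:, 0] = rng.random(N) < p1
    for i in range(1, n):
        new[:, i] = rng.random(N) < cond[i - 1, new[:, i - 1]]
    return new
```
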
7

Castillo, Oscar, Fevrier Valdez, José Soria, Leticia Amador-Angulo, Patricia Ochoa, and Cinthia Peraza. "Comparative Study in Fuzzy Controller Optimization Using Bee Colony, Differential Evolution, and Harmony Search Algorithms." Algorithms 12, no. 1 (2018): 9. http://dx.doi.org/10.3390/a12010009.

Abstract:
This paper presents a comparison among the bee colony optimization (BCO), differential evolution (DE), and harmony search (HS) algorithms. In addition, for each algorithm, a type-1 fuzzy logic system (T1FLS) for the dynamic modification of the main parameters is presented. The dynamic adjustment of the main parameters of each algorithm through fuzzy systems aims at enhancing the performance of the corresponding algorithms. Each algorithm (modified and original versions) is analyzed and compared based on the optimal design of fuzzy systems for benchmark control problems, especially fuzzy controller design. Simulation results provide evidence that the FDE algorithm outperforms the FBCO and FHS algorithms in the optimization of fuzzy controllers. It is statistically demonstrated that better errors are found when fuzzy systems are implemented to enhance each proposed algorithm.
8

Fan, Yuqi, Sheng Zhang, Yaping Wang, Di Xu, and Qisong Zhang. "An Improved Flow Direction Algorithm for Engineering Optimization Problems." Mathematics 11, no. 9 (2023): 2217. http://dx.doi.org/10.3390/math11092217.

Abstract:
The Flow Direction Algorithm (FDA) has better searching performance than some traditional optimization algorithms. To give the basic FDA more effective searching ability, help it avoid the many local minima of the search space, and enable it to obtain better search results, an improved FDA based on the Lévy flight strategy and a self-renewable method (LSRFDA) is proposed in this paper. Random parameters generated by the Lévy flight strategy increase the algorithm's diversity of feasible solutions within a short calculation time and greatly enhance its operational efficiency. The self-renewable method lets the algorithm quickly obtain a better feasible solution and jump out of local solution spaces. This paper then tests different mathematical test functions, including low-dimensional and high-dimensional functions, and compares the results with those of different algorithms, using iteration plots, box plots, and search paths to show the different behaviors of the LSRFDA. Finally, different engineering optimization problems are solved. The test results show that the proposed algorithm has better searching ability and quicker searching speed than the basic Flow Direction Algorithm.
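
The Lévy flight strategy mentioned above is commonly generated with Mantegna's algorithm; the sketch below shows that step generator, with one plausible (assumed, not the paper's) way of applying it to a candidate solution in a comment.

```python
import numpy as np
from math import gamma, sin, pi

def levy_step(dim, beta=1.5, seed=None):
    """Heavy-tailed Levy-flight step via Mantegna's algorithm: mostly
    small moves with occasional long jumps, which diversifies candidate
    solutions in a short calculation time."""
    rng = np.random.default_rng(seed)
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, dim)
    v = rng.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / beta)

# possible use inside a flow-direction update (illustrative only):
# flow = flow + 0.01 * levy_step(dim) * (flow - best_flow)
```
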
9

Ivanova, V. R., I. Y. Ivanov, and V. V. Novokreshchenov. "Structural and parametric synthesis of anti-average control algorithms for realizing adaptive frequency operating automatics electrotechnical systems." Power engineering: research, equipment, technology 21, no. 4 (2019): 66–76. http://dx.doi.org/10.30724/1998-9903-2019-21-4-66-76.

Abstract:
In the event of accidents in power systems associated with a reduction in frequency, frequency division automatics (FDA) are used to prevent a complete blackout of the power district and to accelerate the elimination of the accident. Existing technical solutions for implementing FDA do not always provide a balance of generated and consumed power under FDA action, since the current circuit-mode situation is not taken into account. The aim of this work is to develop effective algorithms for the functioning of adaptive FDA that allocate the power units of a power plant to a balanced load of an isolated power district, taking the current circuit-mode situation into account. To achieve this goal, we use the method of structural and parametric synthesis of emergency control algorithms, which consists of creating a composite algorithmic model of adaptive FDA. The result of this work is an algorithmic model of an adaptive FDA of a power plant, consisting of algorithms for the input and processing of analog FDA signals, algorithms for the input and processing of discrete FDA signals, algorithms for forming the FDA start-up organs, and algorithms for allocating the power plant to a balanced load of the power district. The proposed algorithm for the functioning of adaptive FDA is universal and automatically allocates the power units of a power plant to a balanced load of an isolated power district, regardless of the type of accident and its cause, the configuration of the electrical network, the values of electricity generation and consumption, and the current circuit-mode situation.
10

Rakers, Margot M., Marieke M. van Buchem, Sergej Kucenko, et al. "Availability of Evidence for Predictive Machine Learning Algorithms in Primary Care." JAMA Network Open 7, no. 9 (2024): e2432990. http://dx.doi.org/10.1001/jamanetworkopen.2024.32990.

Abstract:
Importance: The aging and multimorbid population and health personnel shortages pose a substantial burden on primary health care. While predictive machine learning (ML) algorithms have the potential to address these challenges, concerns include transparency and insufficient reporting of model validation and of the effectiveness of implementation in the clinical workflow.

Objectives: To systematically identify predictive ML algorithms implemented in primary care from peer-reviewed literature and US Food and Drug Administration (FDA) and Conformité Européenne (CE) registration databases, and to ascertain the public availability of evidence, including peer-reviewed literature, gray literature, and technical reports, across the artificial intelligence (AI) life cycle.

Evidence Review: PubMed, Embase, Web of Science, Cochrane Library, Emcare, Academic Search Premier, IEEE Xplore, ACM Digital Library, MathSciNet, AAAI.org (Association for the Advancement of Artificial Intelligence), arXiv, Epistemonikos, PsycINFO, and Google Scholar were searched for studies published between January 2000 and July 2023, with search terms related to AI, primary care, and implementation. The search extended to CE-marked or FDA-approved predictive ML algorithms obtained from relevant registration databases. Three reviewers gathered subsequent evidence involving strategies such as product searches, exploration of references, manufacturer website visits, and direct inquiries to authors and product owners. The extent to which the evidence for each predictive ML algorithm aligned with the Dutch AI predictive algorithm (AIPA) guideline requirements was assessed per AI life cycle phase, producing evidence availability scores.

Findings: The systematic search identified 43 predictive ML algorithms, of which 25 were commercially available and CE-marked or FDA-approved. The predictive ML algorithms spanned multiple clinical domains, but most (27 [63%]) focused on cardiovascular diseases and diabetes. Most (35 [81%]) were published within the past 5 years. The availability of evidence varied across different phases of the predictive ML algorithm life cycle, with evidence being reported the least for phase 1 (preparation) and phase 5 (impact assessment) (19% and 30%, respectively). Twelve (28%) predictive ML algorithms achieved approximately half of their maximum individual evidence availability score. Overall, predictive ML algorithms from peer-reviewed literature showed higher evidence availability compared with those from FDA-approved or CE-marked databases (45% vs 29%).

Conclusions and Relevance: The findings indicate an urgent need to improve the availability of evidence regarding the predictive ML algorithms' quality criteria. Adopting the Dutch AIPA guideline could facilitate transparent and consistent reporting of the quality criteria, which could foster trust among end users and facilitate large-scale implementation.
11

Paçacı, Serdar. "IMPROVEMENT OF BELUGA WHALE OPTIMIZATION ALGORITHM BY DISTANCE BALANCE SELECTION METHOD." Yalvaç Akademi Dergisi 8, no. 1 (2023): 125–44. http://dx.doi.org/10.57120/yalvac.1257808.

Abstract:
In this study, an improved version of the Beluga whale optimization (BWO) algorithm, a meta-heuristic optimization algorithm recently presented in the literature, is developed to provide better solutions. The fitness-distance balance (FDB) selection method was applied to the search processes of the BWO algorithm, which models the swimming, preying, and whale-fall behaviors of beluga whales. CEC2020 benchmark functions were used to test the performance of the BWO algorithm and of the resulting algorithm, named FDBBWO. The algorithms were tested on these functions for 30, 50, and 100 dimensions. Friedman analysis was performed on the test results to determine the performance ranks of the algorithms, and the Wilcoxon rank-sum test was used to analyze whether there were significant differences in the results. The experimental study shows that the FDB-based modification mitigates the premature convergence problem that may arise from the lack of diversity in BWO's search process, reducing the possibility of getting stuck at local optima. In addition, the developed algorithm is compared with three algorithms recently presented in the literature. According to the comparison results, FDBBWO shows superior performance compared with the other meta-heuristic algorithms.
12

Yang, Kaikai, Sheng Hong, Qi Zhu, and Yanheng Ye. "Maximum Likelihood Angle-Range Estimation for Monostatic FDA-MIMO Radar with Extended Range Ambiguity Using Subarrays." International Journal of Antennas and Propagation 2020 (September 8, 2020): 1–10. http://dx.doi.org/10.1155/2020/4601208.

Abstract:
In this paper, we consider joint angle-range estimation in monostatic FDA-MIMO radar. Transmit subarrays are utilized to extend the unambiguous range, and a maximum likelihood estimation (MLE) algorithm is proposed to improve estimation performance. Range ambiguity is a serious problem in monostatic FDA-MIMO radar, since it can reduce the detection range of targets. To extend the unambiguous range, we propose dividing the transmitting array into subarrays. Then, within the unambiguous range, the maximum likelihood (ML) algorithm is used to estimate the angle and range with high accuracy and high resolution. In the ML algorithm, the joint angle-range estimation becomes a high-dimensional search problem and is therefore computationally expensive. To reduce the computational load, the alternating projection ML (AP-ML) algorithm is proposed, which iteratively transforms the high-dimensional search into a series of one-dimensional searches. With the proposed AP-ML algorithm, the angle and range are automatically paired. Simulation results show that transmit subarrays extend the unambiguous range of monostatic FDA-MIMO radar and achieve a lower Cramér-Rao lower bound (CRLB) for range estimation. Moreover, the proposed AP-ML algorithm is superior to traditional estimation algorithms in terms of estimation accuracy and resolution.
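
The key computational idea, trading one joint two-dimensional maximization for iterated one-dimensional scans, can be sketched generically as coordinate ascent over a likelihood surface. The FDA-MIMO likelihood itself is omitted and passed in as a callable; the grids and iteration count are illustrative.

```python
import numpy as np

def ap_search(loglik, angles, ranges, iters=10):
    """Alternating-projection-style maximization of loglik(angle, range):
    scan one parameter on its grid while holding the other fixed, and
    repeat until the pair settles (each scan is a cheap 1-D search)."""
    a = angles[len(angles) // 2]   # arbitrary initialization
    r = ranges[len(ranges) // 2]
    for _ in range(iters):
        a = angles[np.argmax([loglik(x, r) for x in angles])]
        r = ranges[np.argmax([loglik(a, y) for y in ranges])]
    return a, r
```
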
13

Dehghani Darmain, Mahdi, and Amir Hashemi. "A Parametric F4 Algorithm." Iranian Journal of Mathematical Sciences and Informatics 19, no. 1 (2024): 117–33. http://dx.doi.org/10.61186/ijmsi.19.1.117.

14

Guo, Yuehao, Xianpeng Wang, Jinmei Shi, Xiang Lan, and Liangtian Wan. "Tensor-Based Target Parameter Estimation Algorithm for FDA-MIMO Radar with Array Gain-Phase Error." Remote Sensing 14, no. 6 (2022): 1405. http://dx.doi.org/10.3390/rs14061405.

Abstract:
As a new radar system, the FDA-MIMO radar has developed rapidly in recent years, as it has broad prospects in angle-range estimation. Unfortunately, the performance of existing algorithms for FDA-MIMO radar degrades greatly, or even fails, under array gain-phase errors. This paper proposes an innovative solution to the joint angle and range estimation of FDA-MIMO radar under array gain-phase errors, and an estimation algorithm is developed. Moreover, the corresponding Cramér-Rao bound (CRB) is derived to evaluate the algorithm. The parallel factor (PARAFAC) decomposition technique is utilized to calculate the transmit and receive direction matrices. The angle estimates are obtained from the receive direction matrix, and the range estimates from the transmit direction matrix together with the angle estimates. To eliminate the error accumulation effect of the array gain-phase error, the gain error and phase error are obtained separately. In this algorithm, the impact of the gain-phase error on parameter estimation is removed, and so is the error accumulation effect. Therefore, the proposed algorithm provides excellent angle-range and gain-phase error estimation performance. Numerical experiments prove the validity and advantages of the proposed method.
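
A bare-bones alternating-least-squares PARAFAC (CP) decomposition of a 3-way tensor, the kind of factorization used here to recover the transmit and receive direction matrices, can be written as follows. Initialization, column normalization, and convergence checks are simplified; this is a generic real-valued sketch (use conjugate transposes for complex radar data), not the paper's estimator.

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding of a 3-way tensor into a matrix."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def khatri_rao(A, B):
    """Column-wise Kronecker (Khatri-Rao) product."""
    return (A[:, None, :] * B[None, :, :]).reshape(-1, A.shape[1])

def cp_als(T, rank, iters=100, seed=0):
    """Rank-`rank` PARAFAC/CP factor matrices of a 3-way tensor via ALS."""
    rng = np.random.default_rng(seed)
    A = [rng.standard_normal((s, rank)) for s in T.shape]
    for _ in range(iters):
        for n in range(3):
            i, j = [m for m in range(3) if m != n]   # the two other modes
            kr = khatri_rao(A[i], A[j])              # matches unfolding order
            gram = (A[i].T @ A[i]) * (A[j].T @ A[j]) # Hadamard of Gram matrices
            A[n] = unfold(T, n) @ kr @ np.linalg.pinv(gram)
    return A
```
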
15

Ding, Zihang, Junwei Xie, and Jiaang Ge. "Search-Free Angle, Range, and Velocity Estimation for Monostatic FDA-MIMO." International Journal of Antennas and Propagation 2022 (August 24, 2022): 1–11. http://dx.doi.org/10.1155/2022/8363100.

Abstract:
The monostatic frequency diverse array multiple-input multiple-output (FDA-MIMO) radar has attracted much attention recently. However, most research concentrates on the estimation of angle-range parameters with the FDA-MIMO radar, and velocity has not been considered. In this study, we propose a search-free method to estimate these parameters. To overcome the high computational complexity associated with searching estimation algorithms, the parallel factor (PARAFAC) decomposition is introduced to estimate the space-time steering vector. Next, the least-squares method is utilized to solve for the angle, range, and velocity of each target. In addition, the Cramér-Rao bounds (CRBs) of angle, range, and velocity are derived, along with further performance analyses covering the root mean square error and computational complexity. We compare the PARAFAC decomposition algorithm with the estimation of signal parameters via rotational invariance techniques (ESPRIT) algorithm, and our method shows superior performance. Finally, the proposed method is verified by simulations and achieves greater estimation accuracy than existing algorithms.
16

Price, W. "Regulating Black-Box Medicine." Michigan Law Review, no. 116.3 (2017): 421. http://dx.doi.org/10.36644/mlr.116.3.regulating.

Abstract:
Data drive modern medicine. And our tools to analyze those data are growing ever more powerful. As health data are collected in greater and greater amounts, sophisticated algorithms based on those data can drive medical innovation, improve the process of care, and increase efficiency. Those algorithms, however, vary widely in quality. Some are accurate and powerful, while others may be riddled with errors or based on faulty science. When an opaque algorithm recommends an insulin dose to a diabetic patient, how do we know that dose is correct? Patients, providers, and insurers face substantial difficulties in identifying high-quality algorithms; they lack both expertise and proprietary information. How should we ensure that medical algorithms are safe and effective? Medical algorithms need regulatory oversight, but that oversight must be appropriately tailored. Unfortunately, the Food and Drug Administration (FDA) has suggested that it will regulate algorithms under its traditional framework, a relatively rigid system that is likely to stifle innovation and to block the development of more flexible, current algorithms. This Article draws upon ideas from the new governance movement to suggest a different path. FDA should pursue a more adaptive regulatory approach with requirements that developers disclose information underlying their algorithms. Disclosure would allow FDA oversight to be supplemented with evaluation by providers, hospitals, and insurers. This collaborative approach would supplement the agency’s review with ongoing real-world feedback from sophisticated market actors. Medical algorithms have tremendous potential, but ensuring that such potential is developed in high-quality ways demands a careful balancing between public and private oversight, and a role for FDA that mediates—but does not dominate—the rapidly developing industry.
17

Bundschuh, Lena, Jens Buermann, Marieta Toma, et al. "A Tumor Volume Segmentation Algorithm Based on Radiomics Features in FDG-PET in Lung Cancer Patients, Validated Using Surgical Specimens." Diagnostics 14, no. 23 (2024): 2654. http://dx.doi.org/10.3390/diagnostics14232654.

Abstract:
Background: Although the integration of positron emission tomography into radiation therapy treatment planning has become part of clinical routine, the best method for tumor delineation is still a matter of debate. In this study, therefore, we analyzed a novel, radiomics-feature-based algorithm in combination with histopathological workup for patients with non-small-cell lung cancer. Methods: A total of 20 patients with biopsy-proven lung cancer who underwent [18F]fluorodeoxyglucose positron emission/computed tomography (FDG-PET/CT) examination before tumor resection were included. Tumors were segmented in positron emission tomography (PET) data using previously reported algorithms based on three different radiomics features, as well as a threshold-based algorithm. To obtain gold-standard results, lesions were measured after resection. Pathological volumes and maximal diameters were then compared with the results of the segmentation algorithms. Results: A total of 20 lesions were analyzed. For all algorithms, segmented volumes correlated well with pathological volumes. In general, the threshold-based volumes exhibited a tendency to be smaller than the radiomics-based volumes. For all lesions, conventional threshold-based segmentation produced coefficients of variation which corresponded best with pathologically based volumes; however, for lesions larger than 3 ccm, the algorithm based on Local Entropy performed best, with a significantly better coefficient of variation (p = 0.0002) than the threshold-based algorithm. Conclusions: We found that, for small lesions, results obtained using conventional threshold-based segmentation compared well with pathological volumes. For lesions larger than 3 ccm, the novel algorithm based on Local Entropy performed best. These findings confirm the results of our previous phantom studies. This algorithm is therefore worthy of inclusion in future studies for further confirmation and application.
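
To make the Local Entropy feature concrete, the sketch below computes a sliding-window Shannon entropy map and thresholds it. The window size, number of grey-level bins, and the fraction-of-maximum cut-off are hypothetical stand-ins for the calibrated values a validated segmentation algorithm would use.

```python
import numpy as np
from scipy import ndimage

def local_entropy(img, win=5, bins=16):
    """Shannon entropy of the grey-level histogram in a win x win window,
    computed per pixel; heterogeneous (tumour-like) uptake scores high."""
    edges = np.linspace(img.min(), img.max(), bins + 1)[1:-1]
    q = np.digitize(img, edges)                      # quantized grey levels
    # per-pixel histogram: local fraction of each grey level via mean filter
    probs = np.stack([ndimage.uniform_filter((q == g).astype(float), size=win)
                      for g in range(bins)])
    probs = np.clip(probs, 1e-12, 1.0)
    return -(probs * np.log(probs)).sum(axis=0)

def segment_by_entropy(img, win=5, frac=0.5):
    """Illustrative rule: keep pixels whose local entropy exceeds a fixed
    fraction of the image maximum (not the paper's calibrated cut-off)."""
    ent = local_entropy(img, win)
    return ent > frac * ent.max()
```
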
18

Chai, Song, Yubai Li, Jian Wang, and Chang Wu. "A Genetic Algorithm for Task Scheduling on NoC Using FDH Cross Efficiency." Mathematical Problems in Engineering 2013 (2013): 1–16. http://dx.doi.org/10.1155/2013/708495.

Abstract:
A CrosFDH-GA algorithm is proposed for the multicriterion task scheduling problem on NoC-based MPSoCs. First, four common criteria, namely makespan, data routing energy, average link load, and workload balance, are extracted from the task scheduling problem on the NoC and used to construct the DEA (data envelopment analysis) DMU model. Then free disposal hull (FDH) analysis is applied to the problem, and an FDH cross-efficiency formulation is derived for evaluating the relative advantage among schedule solutions. Finally, we introduce the DEA approach into the genetic algorithm and propose the CrosFDH-GA scheduling algorithm to find the most efficient schedule solution for a given scheduling problem. Simulation results show that our FDH cross-efficiency formulation effectively evaluates the performance of schedule solutions, and comparative simulations show that CrosFDH-GA produces more metrics-balanced schedule solutions than other multicriterion algorithms.
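
The FDH part can be stated concretely: unlike convex DEA models, an FDH score compares each decision-making unit (DMU) only against observed peers that dominate its outputs, with no convex combinations. A minimal input-oriented version follows; the cross-efficiency layer used in the paper, where each schedule is also rated from the other schedules' viewpoints, is not shown.

```python
import numpy as np

def fdh_input_efficiency(X, Y):
    """Input-oriented FDH (free disposal hull) efficiency scores.

    X: (n, p) inputs and Y: (n, q) outputs of n DMUs (schedule solutions).
    Returns theta in (0, 1]; theta < 1 means some observed peer delivers
    at least the same outputs from radially smaller inputs.
    """
    n = len(X)
    theta = np.ones(n)
    for k in range(n):
        peers = np.all(Y >= Y[k], axis=1)        # peers dominating k's outputs
        ratios = np.max(X[peers] / X[k], axis=1) # radial input contraction per peer
        theta[k] = ratios.min()                  # k is its own peer, so theta <= 1
    return theta
```
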
19

Rodriguez-Alvarez, Maria Jose, Filomeno Sanchez, Antonio Soriano, Laura Moliner, Sebastian Sanchez, and Jose Benlloch. "QR-Factorization Algorithm for Computed Tomography (CT): Comparison With FDK and Conjugate Gradient (CG) Algorithms." IEEE Transactions on Radiation and Plasma Medical Sciences 2, no. 5 (2018): 459–69. http://dx.doi.org/10.1109/trpms.2018.2843803.

20

Zhang, Li, Jie Tang, Yuxiang Xing, and Jianping Cheng. "Analytic resolution of FDK algorithm." Journal of X-Ray Science and Technology: Clinical Applications of Diagnosis and Therapeutics 14, no. 3 (2006): 151–59. http://dx.doi.org/10.3233/xst-2006-00157.

Abstract:
The resolution of practical CT systems depends on many factors, and the reconstruction algorithm is one of the most important. In this paper, we investigate the resolution of the standard FDK algorithm, present a formula to analytically compute the in-slice point-spread function (PSF) of FDK reconstructions, and validate the analytic PSF by numerical simulations. The experimental results show that this approach predicts the in-slice PSF of FDK reconstructions quite well.
21

Fraraccio, Giancarlo, Adrian Brügger, and Raimondo Betti. "Experimental Studies on Damage Detection in Frame Structures Using Vibration Measurements." Shock and Vibration 17, no. 6 (2010): 697–721. http://dx.doi.org/10.1155/2010/203891.

Abstract:
This paper presents an experimental study of frequency and time domain identification algorithms and discusses their effectiveness in structural health monitoring of frame structures using acceleration input and response data. Three algorithms were considered: 1) a frequency domain decomposition algorithm (FDD), 2) a time domain Observer Kalman IDentification algorithm (OKID), and 3) a subsequent physical parameter identification algorithm (MLK). Through experimental testing of a four-story steel frame model on a uniaxial shake table, the inherent complications of physical instrumentation and testing are explored. Primarily, this study aims to provide a dependable first-order and second-order identification of said test structure in a fully instrumented state. Once the characteristics (i.e. the stiffness matrix) for a benchmark structure have been determined, structural damage can be detected by a change in the identified structural stiffness matrix. This work also analyzes the stability of the identified structural stiffness matrix with respect to fluctuations of input excitation magnitude and frequency content in an experimental setting.
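
For reference, the FDD step itself is compact: estimate the cross-spectral density (CSD) matrix of the measured accelerations, take an SVD at every frequency line, and peak-pick the first singular value; the corresponding singular vectors approximate the mode shapes. The sketch below uses scipy.signal.csd with illustrative parameters.

```python
import numpy as np
from scipy.signal import csd

def fdd_spectrum(acc, fs, nperseg=1024):
    """First singular value and vector of the CSD matrix per frequency.

    acc: (n_channels, n_samples) acceleration records; fs: sample rate.
    """
    n = acc.shape[0]
    f, _ = csd(acc[0], acc[0], fs=fs, nperseg=nperseg)
    G = np.empty((len(f), n, n), dtype=complex)
    for i in range(n):
        for j in range(n):
            _, G[:, i, j] = csd(acc[i], acc[j], fs=fs, nperseg=nperseg)
    s1 = np.empty(len(f))
    shapes = np.empty((len(f), n), dtype=complex)
    for k in range(len(f)):
        U, S, _ = np.linalg.svd(G[k])
        s1[k], shapes[k] = S[0], U[:, 0]
    return f, s1, shapes    # peaks of s1 mark the natural frequencies
```
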
22

Mun, Changmin, Dooyoung Kim, and Sungju Park. "FDR Test Compression Algorithm based on Frequency-ordered." Journal of the Institute of Electronics and Information Engineers 51, no. 5 (2014): 106–13. http://dx.doi.org/10.5573/ieie.2014.51.5.106.

23

Stoy, L., K. del Rosario, A. Plagge, K. Lewis, S. Yasir, and Z. Chen. "Comparison of FDA-Approved vs. Mayo Clinic’s Ki-67 Immunostaining Protocols for Breast Cancer Specimens: A Quantitative Analysis Using Artificial Intelligence (AI) Based Image Analysis Algorithms." American Journal of Clinical Pathology 160, Supplement_1 (2023): S12. http://dx.doi.org/10.1093/ajcp/aqad150.026.

Abstract:
Introduction/Objective: Ki-67, a nuclear protein present during the active phases of the cell cycle, is an important prognostic and predictive marker for many tumor types, including breast cancer. This study compared the results of breast cancer specimens immunohistochemically (IHC) stained with the FDA-approved vs. Mayo Clinic's (MC) validated Ki-67 protocol, using AI-based image analysis algorithms for quantitative examination at the individual-cell level.

Methods: Sixty-two breast cancer specimens were selected with roughly equal representation of Ki-67 percentages in the low (<10%), intermediate (10-20%), and high (>20%) categories. Two adjacent sections of each specimen were stained with the FDA and MC protocols, respectively, and scanned to obtain whole-slide digital images. Two AI algorithms, one with tumor cell identification and one without, were used to quantify positive and negative cells and calculate Ki-67 percentages.

Results: Both algorithms show high concordance between the two staining protocols, with r-squared scores of 0.989 and 0.992 for analysis with and without tumor identification, respectively. AI identified tumor better on MC-stained slides than on FDA-stained slides. The results demonstrate high concordance between the MC and FDA IHC staining protocols.

Conclusion: The data also illustrate the accuracy of using AI image analysis algorithms to compare different IHC protocols at the individual-cell level. The difference in AI tumor identification between slides stained with the two IHC protocols reflects the importance of training an algorithm on a specific staining protocol to achieve optimal tumor recognition.
24

Xia, Yong, Shen Lu, Lingfeng Wen, Stefan Eberl, Michael Fulham, and David Dagan Feng. "Automated Identification of Dementia Using FDG-PET Imaging." BioMed Research International 2014 (2014): 1–8. http://dx.doi.org/10.1155/2014/421743.

Abstract:
Parametric FDG-PET images offer the potential for automated identification of the different dementia syndromes. However, various existing image features and classifiers have limitations in characterizing and differentiating the patterns of this disease. We report a hybrid feature extraction, selection, and classification approach, namely the GA-MKL algorithm, for separating patients with suspected Alzheimer's disease and frontotemporal dementia from normal controls. In this approach, we extracted three groups of features describing the average level, spatial variation, and asymmetry of glucose metabolic rates in 116 cortical volumes. An optimal combination of features capable of classifying dementia cases was identified by a genetic algorithm (GA)-based method. The condition of each FDG-PET study was predicted by applying the selected features to a multikernel learning (MKL) machine, in which the weighting parameter of each kernel function is estimated automatically. We compared our approach to two state-of-the-art dementia identification algorithms on a set of 129 clinical cases and improved the performance in separating the dementia types, achieving an accuracy of 94.62%. There is very good agreement between the proposed automated technique and the diagnosis made by clinicians.
25

Son, Jeongeun, and Yuncheng Du. "Model-Based Stochastic Fault Detection and Diagnosis of Lithium-Ion Batteries." Processes 7, no. 1 (2019): 38. http://dx.doi.org/10.3390/pr7010038.

Abstract:
The Lithium-ion battery (Li-ion) has become the dominant energy storage solution in many applications, such as hybrid electric and electric vehicles, due to its higher energy density and longer life cycle. For these applications, the battery should perform reliably and pose no safety threats. However, the performance of Li-ion batteries can be affected by abnormal thermal behaviors, defined as faults. It is essential to develop a reliable thermal management system to accurately predict and monitor thermal behavior of a Li-ion battery. Using the first-principle models of batteries, this work presents a stochastic fault detection and diagnosis (FDD) algorithm to identify two particular faults in Li-ion battery cells, using easily measured quantities such as temperatures. In addition, models used for FDD are typically derived from the underlying physical phenomena. To make a model tractable and useful, it is common to make simplifications during the development of the model, which may consequently introduce a mismatch between models and battery cells. Further, FDD algorithms can be affected by uncertainty, which may originate from either intrinsic time varying phenomena or model calibration with noisy data. A two-step FDD algorithm is developed in this work to correct a model of Li-ion battery cells and to identify faulty operations in a normal operating condition. An iterative optimization problem is proposed to correct the model by incorporating the errors between the measured quantities and model predictions, which is followed by an optimization-based FDD to provide a probabilistic description of the occurrence of possible faults, while taking the uncertainty into account. The two-step stochastic FDD algorithm is shown to be efficient in terms of the fault detection rate for both individual and simultaneous faults in Li-ion batteries, as compared to Monte Carlo (MC) simulations.
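
The paper's two-step formulation couples an iterative model-correction optimization with a probabilistic fault decision; only the second step is caricatured below as a residual test under parameter uncertainty. Every name and the max-residual statistic are illustrative assumptions, not the paper's API.

```python
import numpy as np

def fault_probability(t_measured, thermal_model, theta_samples, threshold_k):
    """Monte Carlo residual test: propagate uncertain parameters through a
    battery thermal model and report how often the measured temperature
    trace deviates from the prediction by more than threshold_k kelvin.

    thermal_model: callable theta -> predicted trace, same shape as t_measured
    theta_samples: iterable of parameter vectors from the corrected model
    """
    residuals = np.array([np.abs(t_measured - thermal_model(th)).max()
                          for th in theta_samples])
    return float(np.mean(residuals > threshold_k))   # in [0, 1]
```
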
26

Bundschuh, Lena, Vesna Prokic, Matthias Guckenberger, Stephanie Tanadini-Lang, Markus Essler, and Ralph A. Bundschuh. "A Novel Radiomics-Based Tumor Volume Segmentation Algorithm for Lung Tumors in FDG-PET/CT after 3D Motion Correction—A Technical Feasibility and Stability Study." Diagnostics 12, no. 3 (2022): 576. http://dx.doi.org/10.3390/diagnostics12030576.

Abstract:
Positron emission tomography (PET) provides important additional information when applied in radiation therapy treatment planning. However, the optimal way to define tumors in PET images is still undetermined. As radiomics features are gaining more and more importance in PET image interpretation as well, we aimed to use textural features for an optimal differentiation between tumoral tissue and surrounding tissue to segment target lesions, based on three textural parameters found suitable in a previous analysis (Kurtosis, Local Entropy, and Long Zone Emphasis). Intended for use in radiation therapy planning, this algorithm was combined with a previously described motion-correction algorithm and validated on phantom data. In addition, feasibility was shown in five patients. The algorithms provided sufficient results for phantom and patient data. The stability of the results was analyzed in 20 consecutive measurements of phantom data. Results for the textural-feature-based algorithms were slightly worse than those of the threshold-based reference algorithm (mean standard deviation of 1.2% for the reference, compared with 4.2% to 8.6%). However, the Entropy-based algorithm came closest to the real volume of the phantom sphere of 6 ccm, with a mean measured volume of 26.5 ccm, while the threshold-based algorithm found a mean volume of 25.0 ccm. In conclusion, we present a novel, radiomics-based tumor segmentation algorithm for FDG-PET with promising results in phantom studies concerning recovered lesion volume and reasonable stability in consecutive measurements. Segmentation based on Entropy was the most precise in comparison with the sphere volume but showed the worst stability in consecutive measurements. Despite these promising results, further studies with larger patient cohorts and histopathological standards need to be performed for further validation of the presented algorithms and their applicability in clinical routine. In addition, their application to other tumor entities needs to be studied.
27

Safronova, Ksenia, Marina Pavlenko, and Natalya Rusakova. "FC4: AI Implementation in Online SAGE Test." International Psychogeriatrics 36, S1 (2024): 40. http://dx.doi.org/10.1017/s1041610224001261.

Abstract:
Objectives: To evaluate the impact of implementing AI technology in the algorithm for assessing completed tasks of the online SAGE test to identify primary cognitive changes. In order to raise awareness among Russians about dementia and early diagnosis to reduce the risk of occurrence and development of the syndrome, the Nodementia.net project improved the online SAGE test as a convenient self-testing tool for cognitive changes. The visual-constructive and executive-skills tasks in SAGE testing required enhancement of the evaluation algorithm through AI implementation. The AI technology is designed to evaluate human drawings against given criteria with high accuracy and assign scores that correspond to the user's cognitive status.

Methods: While improving the test, project experts explored and compared drawing evaluation services, but none satisfied the criteria. To create a fundamentally new AI model, experts analyzed 10,000 pictures and prepared algorithms to train the experimental AI model. As a result, the project specialists created an AI model that evaluates pictures with 80% accuracy and implemented it in the online test on the Nodementia.net website.

Results: To train the fundamentally new AI model, experts analyzed more than 10,000 different images, which helped to form the evaluation logic, taking into account the shape of the picture, color, line curvature, accuracy of image repetition, and more than 100 other factors. Currently, the AI model correctly evaluates about 80% of images; the next step is 95%. We have improved the mechanism for assessing tasks, reduced biases, and increased the number of users.

Conclusions: To improve the testing algorithms and increase the accuracy of online SAGE test results, we integrated pattern recognition technology based on a self-learning AI model. We used more than 10,000 different images for initial training, based on which the AI generated more than 100 evaluation criteria. The AI is now expanding its library of "knowledge" and thereby honing its assessment skills, becoming an integral part of our unique online test.

Key words: AI implementation; online SAGE test; dementia; Alzheimer's disease
28

Durga Bhavani, Kakirala, and Melkias Ferni Ukrit. "Enhancing fall detection and classification using Jarratt-butterfly optimization algorithm with deep learning." IAES International Journal of Artificial Intelligence (IJ-AI) 14, no. 2 (2025): 1461–70. https://doi.org/10.11591/ijai.v14.i2.pp1461-1470.

Abstract:
Falls pose a significant risk to the health and safety of individuals, especially vulnerable populations such as the elderly and those with specific medical conditions. The repercussions of falls can be severe, leading to injuries, loss of independence, and increased healthcare costs. Consequently, the development of effective fall detection systems is crucial for providing timely assistance and enhancing the overall well-being of affected individuals. Recent advancements in deep learning (DL) have opened new avenues for automating fall detection through the analysis of sensor data and video footage. DL algorithms are especially well suited for this task because they can automatically learn complex features and patterns from raw data, eliminating the need for extensive manual feature engineering. This article introduces a novel approach to fall detection and classification, termed fall detection and classification using the Jarratt-butterfly optimization algorithm with deep learning (FDC-JBOADL). The FDC-JBOADL technique employs a median filtering (MF) method to mitigate noise and utilizes the EfficientNet model for robust feature extraction, capturing both the motion patterns and appearance characteristics of individuals. Furthermore, the classification of fall events is achieved through a long short-term memory (LSTM) classifier, with hyperparameter optimization facilitated by the Jarratt-butterfly optimization algorithm (JBOA). Through a comprehensive series of experiments, the efficacy of the FDC-JBOADL technique is validated, demonstrating superior performance compared with existing methodologies in the domain of fall detection.
29

Ball, Robert, Andrew H. Talal, Oanh Dang, Monica Muñoz, and Marianthi Markatou. "Trust but Verify: Lessons Learned for the Application of AI to Case-Based Clinical Decision-Making From Postmarketing Drug Safety Assessment at the US Food and Drug Administration." Journal of Medical Internet Research 26 (June 6, 2024): e50274. http://dx.doi.org/10.2196/50274.

Abstract:
Adverse drug reactions are a common cause of morbidity in health care. The US Food and Drug Administration (FDA) evaluates individual case safety reports of adverse events (AEs) after submission to the FDA Adverse Event Reporting System as part of its surveillance activities. Over the past decade, the FDA has explored the application of artificial intelligence (AI) to evaluate these reports to improve the efficiency and scientific rigor of the process. However, a gap remains between AI algorithm development and deployment. This viewpoint aims to describe the lessons learned from our experience and research needed to address both general issues in case-based reasoning using AI and specific needs for individual case safety report assessment. Beginning with the recognition that the trustworthiness of the AI algorithm is the main determinant of its acceptance by human experts, we apply the Diffusion of Innovations theory to help explain why certain algorithms for evaluating AEs at the FDA were accepted by safety reviewers and others were not. This analysis reveals that the process by which clinicians decide from case reports whether a drug is likely to cause an AE is not well defined beyond general principles. This makes the development of high performing, transparent, and explainable AI algorithms challenging, leading to a lack of trust by the safety reviewers. Even accounting for the introduction of large language models, the pharmacovigilance community needs an improved understanding of causal inference and of the cognitive framework for determining the causal relationship between a drug and an AE. We describe specific future research directions that underpin facilitating implementation and trust in AI for drug safety applications, including improved methods for measuring and controlling of algorithmic uncertainty, computational reproducibility, and clear articulation of a cognitive framework for causal inference in case-based reasoning.
30

Lv, Ning, Xuefeng Ouyang, and Yujing Qiao. "Adaptive Layering Algorithm for FDM-3D Printing Based on Optimal Volume Error." Micromachines 13, no. 6 (2022): 836. http://dx.doi.org/10.3390/mi13060836.

Abstract:
The characteristics of fused-deposition 3D printing lead to an inevitable step effect on surface contours during forming and manufacturing, which affects molding accuracy. Traditional layering algorithms cannot account for both printing time and molding accuracy. In this paper, an adaptive layering algorithm based on the optimal volume error is proposed. The angle between the normal vector and the layering direction is used for data optimization. The layer thickness is determined by calculating the volume error and, based on the principle of the optimal volume error, unequal-thickness adaptive layering of each printing layer of the model is realized. The experimental results show that the adaptive layering algorithm based on the optimal volume error has a better layering effect, greatly improves forming efficiency and surface forming accuracy, and adapts well to models with complex surfaces.
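
A common way to realize this kind of criterion, shown here only as a cusp-height stand-in for the paper's volume-error computation, is to let the flattest facet crossing the current height govern the next layer: facets nearly perpendicular to the build direction force thin layers, while near-vertical walls permit thick ones.

```python
import numpy as np

def adaptive_layer_thickness(facet_normals, t_min, t_max, err_max):
    """Next layer thickness from the unit normals of the triangles that
    intersect the current slicing height (staircase error below err_max)."""
    nz = np.abs(np.asarray(facet_normals)[:, 2])  # alignment with build axis
    nz = nz[nz > 1e-9]                            # vertical walls add no error
    if nz.size == 0:
        return t_max
    t = err_max / nz.max()                        # cusp height ~ t * |n_z|
    return float(np.clip(t, t_min, t_max))
```
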
31

McAllister, Murdoch K., Ellen K. Pikitch, Andre E. Punt, and Ray Hilborn. "A Bayesian Approach to Stock Assessment and Harvest Decisions Using the Sampling/Importance Resampling Algorithm." Canadian Journal of Fisheries and Aquatic Sciences 51, no. 12 (1994): 2673–87. http://dx.doi.org/10.1139/f94-267.

Abstract:
Scientific advice to fishery managers needs to be expressed in probabilistic terms to convey uncertainty about the consequences of alternative harvesting policies (policy performance indices). In most Bayesian approaches to such advice, relatively few of the model parameters used can be treated as uncertain, and deterministic assumptions about population dynamics are required; this can bias the degree of certainty and estimates of policy performance indices. We reformulate a Bayesian approach that uses the sampling/importance resampling algorithm to improve estimates of policy performance indices; it extends the number of parameters that can be treated as uncertain, does not require deterministic assumptions about population dynamics, and can use any of the types of fishery assessment models and data. Application of the approach to New Zealand's western stock of hoki (Macruronus novaezelandiae) shows that the use of Bayesian prior information for parameters such as the constant of proportionality for acoustic survey abundance indices can enhance management advice by reducing uncertainty in current stock size estimates; it also suggests that assuming historic recruitment is deterministic can create large negative biases (e.g., 26%) in estimates of biological and economic risks of alternative harvesting policies and that a stochastic recruitment assumption can be more appropriate.
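
The sampling/importance resampling algorithm itself is compact enough to sketch directly; the prior sampler and log-likelihood are application-specific callables standing in for the stock-assessment model, and the sample sizes are illustrative.

```python
import numpy as np

def sir_posterior(prior_sampler, log_likelihood, n_draws=100_000,
                  n_resample=5_000, seed=None):
    """Sampling/importance resampling: draw parameters from the prior,
    weight by the likelihood of the data, and resample in proportion to
    the weights to approximate the posterior. Policy risk is then the
    weighted frequency of bad outcomes over the resampled parameters."""
    rng = np.random.default_rng(seed)
    draws = np.array([prior_sampler(rng) for _ in range(n_draws)])
    logw = np.array([log_likelihood(th) for th in draws])
    w = np.exp(logw - logw.max())        # stabilize before normalizing
    w /= w.sum()
    idx = rng.choice(n_draws, size=n_resample, replace=True, p=w)
    return draws[idx]                    # approximate posterior sample
```
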
32

Chen, Geng, Chunyang Wang, Jian Gong, Ming Tan, and Yibin Liu. "Data-Independent Phase-Only Beamforming of FDA-MIMO Radar for Swarm Interference Suppression." Remote Sensing 15, no. 4 (2023): 1159. http://dx.doi.org/10.3390/rs15041159.

Abstract:
This paper proposes two data-independent phase-only beamforming algorithms for frequency diverse array multiple-input multiple-output (FDA-MIMO) radar against swarm interference. The proposed strategy forms a deep null over the interference area to suppress swarm interference by tuning only the phase of the weight vector, which can effectively reduce the hardware cost of the receiver. Specifically, the first algorithm imposes a constant modulus constraint and a sidelobe level constraint and solves for the phase-only weight vector. The second algorithm performs a constant modulus decomposition of the weight vector to obtain two phase-only weight vectors and uses two parallel phase shifters to synthesize one beamforming weight. Both methods obtain phase-only weights that suppress swarm interference. Simulation results demonstrate that our strategy shows superiority in beam shape, output signal-to-interference-plus-noise ratio, and phase-shifter quantization performance, and has potential for use in many applications, such as radar countermeasures and electronic defense.
33

Liu, Qi, Xianpeng Wang, Mengxing Huang, Xiang Lan, and Lu Sun. "DOA and Range Estimation for FDA-MIMO Radar with Sparse Bayesian Learning." Remote Sensing 13, no. 13 (2021): 2553. http://dx.doi.org/10.3390/rs13132553.

Abstract:
Due to grid division, existing target localization algorithms based on sparse signal recovery for the frequency diverse array multiple-input multiple-output (FDA-MIMO) radar not only suffer from high computational complexity but also encounter significant estimation performance degradation caused by off-grid gaps. To tackle these problems, an effective off-grid Sparse Bayesian Learning (SBL) method is proposed in this paper, which enables the calculation of direction of arrival (DOA) and range estimates. First, the angle-dependent component is separated by reconstructing the received data, allowing rough DOA estimates to be extracted immediately with the root-SBL algorithm; these are subsequently used to obtain the paired rough range estimates. Furthermore, a discrete grid is constructed from the rough DOA and range estimates, and a 2D-SBL model is proposed to refine them. Moreover, the expectation-maximization (EM) algorithm is utilized to update the grid points iteratively to further eliminate the errors caused by the off-grid model. Finally, theoretical analyses and numerical simulations illustrate the effectiveness and superiority of the proposed method.
APA, Harvard, Vancouver, ISO, and other styles
35

Payne, W. Vance, Jaehyeok Heo, and Piotr A. Domanski. "A Data-Clustering Technique for Fault Detection and Diagnostics in Field-Assembled Air Conditioners." International Journal of Air-Conditioning and Refrigeration 26, no. 02 (2018): 1850015. http://dx.doi.org/10.1142/s2010132518500153.

Full text
Abstract:
Fault detection and diagnostics (FDD) can be used to monitor the performance of air conditioners (ACs) and heat pumps (HPs), signal any departure from their optimal performance, and provide diagnostic information indicating a possible fault if degradation of performance occurs. For packaged systems fully assembled in a factory, an FDD module can be fully developed for all units of a given model based on laboratory tests of a single unit. For field-assembled systems, laboratory tests of a representative AC or HP installation can lead to the development of a “back-bone” preliminary FDD algorithm; however, in situ adaptation of these algorithms is required because of installation variations in the field. This paper describes a method for adapting a laboratory-based FDD module to field-assembled systems by automatically customizing the in situ FDD fault-free performance correlations. We validated the developed data-clustering technique with a set of nearly 6000 data points to generate fault-free correlations for an HP operating in the cooling mode in our laboratory. The study evaluated several fault-free feature models and indicated that the use of different order correlations during stages of data collection produced better fits to the data.
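
A schematic of the idea, assuming hypothetical fault-free driving conditions and a single performance feature: cluster the operating states, then fit a low-order correlation per cluster; departures from these correlations later flag faults. Everything below (variables, cluster count, model order) is illustrative, not the paper's implementation.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical fault-free training data: driving conditions -> feature.
rng = np.random.default_rng(1)
T_out = rng.uniform(25.0, 40.0, 600)          # outdoor temperature (deg C)
T_in = rng.uniform(20.0, 27.0, 600)           # indoor temperature (deg C)
feature = 5.0 + 0.3 * T_out - 0.2 * T_in + rng.normal(0, 0.1, 600)

X = np.column_stack([T_out, T_in])
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)

# Fit a separate low-order fault-free correlation in each cluster of
# operating states; residuals against these fits serve as fault indicators.
models = {}
for k in np.unique(labels):
    m = labels == k
    # least-squares fit: feature ~ a + b*T_out + c*T_in
    coeffs, *_ = np.linalg.lstsq(
        np.column_stack([np.ones(m.sum()), X[m]]), feature[m], rcond=None)
    models[k] = coeffs
```
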
APA, Harvard, Vancouver, ISO, and other styles
36

Fadhil Shazmir, M., N. Ayuni Safari, M. Azhan Anuar, A. A. Mat Isa, and Zamri A.R. "Operational Modal Analysis on a 3D Scaled Model of a 3-Storey Aluminium Structure." International Journal of Engineering & Technology 7, no. 4.27 (2018): 78. http://dx.doi.org/10.14419/ijet.v7i4.27.22485.

Full text
Abstract:
Obtaining good experimental modal data is essential in modal analysis to ensure accurate extraction of modal parameters. The parameters are compared across extraction methods to ascertain their consistency and validity. This paper demonstrates the extraction of modal parameters using various identification algorithms in Operational Modal Analysis (OMA) on a 3D scaled model of a 3-storey aluminium structure. Algorithms such as Frequency Domain Decomposition (FDD), Enhanced Frequency Domain Decomposition (EFDD) and Stochastic Subspace Identification (SSI) are applied in this study to obtain modal parameters. The model test structure is fabricated from aluminium and assembled using bolts and nuts. Accelerometers were used to collect the responses, and commercial post-processing software was used to obtain the modal parameters. The natural frequencies and mode shapes from the FDD method are then compared with those of the other parametric OMA techniques, EFDD and SSI, using the natural frequencies and the Modal Assurance Criterion (MAC). This comparison justifies the validity of each technique and confirms the accuracy of the measurements taken.
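
The FDD method applied here reduces, in sketch form, to a singular value decomposition of the cross-power spectral density matrix at each frequency line, with peaks of the first singular value spectrum indicating natural frequencies. A minimal Python/scipy version, with all tuning values assumed:

```python
import numpy as np
from scipy.signal import csd, find_peaks

def fdd_first_singular_values(acc, fs, nperseg=1024):
    """Frequency Domain Decomposition sketch.
    acc: (n_channels, n_samples) accelerometer responses, fs: sample rate.
    Builds the cross-power spectral density (CPSD) matrix at each
    frequency line and returns its first singular value spectrum;
    peaks indicate natural frequencies, and the corresponding first
    singular vectors approximate the mode shapes."""
    n_ch = acc.shape[0]
    f, _ = csd(acc[0], acc[0], fs=fs, nperseg=nperseg)
    G = np.empty((len(f), n_ch, n_ch), dtype=complex)
    for i in range(n_ch):
        for j in range(n_ch):
            _, G[:, i, j] = csd(acc[i], acc[j], fs=fs, nperseg=nperseg)
    s1 = np.array([np.linalg.svd(Gf, compute_uv=False)[0] for Gf in G])
    peaks, _ = find_peaks(s1, prominence=0.1 * s1.max())
    return f, s1, f[peaks]
```
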
APA, Harvard, Vancouver, ISO, and other styles
37

Han, Ruigang, Ning Jia, Yunfei Li, Dong Xiao, Shengming Guo, and Li Ma. "Iterative-detection–based time-domain adaptive decision feedback equalization for continuous phase modulation of underwater acoustic communication." Journal of the Acoustical Society of America 157, no. 3 (2025): 1912–25. https://doi.org/10.1121/10.0036145.

Full text
Abstract:
Continuous phase modulation (CPM), which is widely used in aviation telemetry and satellite communications, may help improve the performance of underwater acoustic (UWA) communication systems owing to its high spectral and power efficiency. However, applying conventional frequency-domain equalization (FDE) algorithms to CPM signals over time-varying UWA channels considerably degrades performance. Moreover, time-domain equalization algorithms often rely on excessive approximations for symbol detection, compromising overall reception. This study presents an iterative-detection-based time-domain adaptive decision feedback equalization (ID-TDADFE) algorithm that tracks channel variations through symbol-by-symbol detection. The symbol detection in ID-TDADFE fully considers the inherent coding gain of CPM signals, can be cascaded with an adaptive equalizer, and enhances detection performance by utilizing joint probability estimation. Numerical simulations with minimum-shift keying (MSK) and Gaussian MSK signals demonstrated that ID-TDADFE significantly improved communication performance over a time-varying UWA channel within one or two iterations. In a sea trial for experimental verification, ID-TDADFE reduced bit errors by 45.08% and 51.8% in the first and second iterations, respectively, compared to FDE.
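
The adaptive decision feedback equalizer underlying this kind of receiver can be sketched with a standard complex LMS update; the iterative CPM detection and joint probability estimation that distinguish the paper's method are not reproduced, and the tap counts and step size below are arbitrary.

```python
import numpy as np

def lms_dfe(rx, trained_syms, n_ff=12, n_fb=6, mu=0.01):
    """Minimal LMS decision-feedback equalizer for BPSK-like symbols.
    rx: received samples (one sample per symbol for simplicity).
    Training symbols are used first, then hard decisions
    (decision-directed mode)."""
    wf = np.zeros(n_ff, dtype=complex)       # feedforward taps
    wb = np.zeros(n_fb, dtype=complex)       # feedback taps
    fb = np.zeros(n_fb, dtype=complex)       # past decisions
    out = []
    for n in range(n_ff - 1, len(rx)):
        x = rx[n - n_ff + 1:n + 1][::-1]     # feedforward window
        y = wf @ x - wb @ fb                 # equalizer output
        d = trained_syms[n] if n < len(trained_syms) else np.sign(y.real)
        e = d - y                            # error signal
        wf += mu * e * np.conj(x)            # LMS tap updates
        wb -= mu * e * np.conj(fb)
        fb = np.roll(fb, 1); fb[0] = d       # push newest decision
        out.append(d)
    return np.array(out)
```
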
APA, Harvard, Vancouver, ISO, and other styles
38

Li, Liang, Yuxiang Xing, Zhiqiang Chen, Li Zhang, and Kejun Kang. "A curve-filtered FDK (C-FDK) reconstruction algorithm for circular cone-beam CT." Journal of X-Ray Science and Technology: Clinical Applications of Diagnosis and Therapeutics 19, no. 3 (2011): 355–71. http://dx.doi.org/10.3233/xst-2011-0299.

Full text
Abstract:
Circular cone-beam CT is one of the most popular configurations in both medical and industrial applications, and the FDK algorithm is the most popular reconstruction method for it. However, with increasing cone angle, the cone-beam artifacts associated with the FDK algorithm become more severe because the circular trajectory does not satisfy the data sufficiency condition. Along with an experimental evaluation and verification, this paper proposes a curve-filtered FDK (C-FDK) algorithm. First, cone-parallel projections are rebinned from the native cone-beam geometry in two separate directions; C-FDK then rebins and filters projections along different curves from T-FDK in the central virtual detector plane. Numerical experiments validate the effectiveness of the proposed algorithm by comparison with both FDK and T-FDK reconstruction. Without any extra trajectories supplementing the circular orbit, C-FDK achieves a visible improvement in image quality.
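
The pre-weighting and row-wise ramp filtering shared by all FDK-type methods can be sketched as follows; the rebinned filtering curves that distinguish C-FDK from T-FDK are not reproduced, and the flat-detector geometry and padding choices are our assumptions.

```python
import numpy as np

def fdk_weight_and_filter(proj, du, dv, D):
    """Pre-weighting and row-wise ramp filtering of one cone-beam
    projection, the two steps every FDK-type method shares before
    backprojection. proj: (nv, nu) detector array; du/dv: pixel
    pitch; D: source-to-detector distance (flat detector assumed)."""
    nv, nu = proj.shape
    u = (np.arange(nu) - nu / 2 + 0.5) * du
    v = (np.arange(nv) - nv / 2 + 0.5) * dv
    U, V = np.meshgrid(u, v)
    # Feldkamp cosine pre-weighting.
    weighted = proj * D / np.sqrt(D**2 + U**2 + V**2)
    # Ramp (Ram-Lak) filter applied to each detector row via FFT,
    # with zero-padding to reduce circular-convolution artifacts.
    n_pad = 2 * nu
    freqs = np.fft.fftfreq(n_pad, d=du)
    ramp = np.abs(freqs)
    P = np.fft.fft(weighted, n=n_pad, axis=1)
    filtered = np.real(np.fft.ifft(P * ramp, axis=1))[:, :nu]
    return filtered
```
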
APA, Harvard, Vancouver, ISO, and other styles
39

Chen, Yingjia, Thomas John Semrad, Rosemary Donaldson Cress, and Laurel A. Beckett. "Effect of dementia diagnosis on receipt of postoperative colon cancer chemotherapy." Journal of Clinical Oncology 33, no. 3_suppl (2015): 769. http://dx.doi.org/10.1200/jco.2015.33.3_suppl.769.

Full text
Abstract:
769 Background: Colon cancer and dementia have a high risk of co-occurrence. Prior studies found that patients with dementia have higher mortality than non-demented counterparts, mostly from non-cancer causes. We hypothesized that a dementia diagnosis made using an improved algorithm would be associated with reduced use of postoperative therapy. Methods: In addition to the claims-based algorithm for dementia published by the Centers for Medicare and Medicaid Services, which uses the SEER-Medicare Medicare Provider Analysis and Review, Carrier Claims, Home Health Agencies, and Outpatient files, we developed a medication-based algorithm using the Part D file, based on prescription of any of the five FDA-approved dementia drugs (donepezil, galantamine, memantine, rivastigmine, tacrine). We measured agreement between the two diagnostic algorithms with kappa statistics. Using each algorithm and a final combined algorithm, we used multivariable logistic regression adjusting for demographics and disease characteristics to examine the effect of dementia on the use of postoperative colon cancer chemotherapy. Parallel analyses restricted the population to later-stage (stage III/IV) cancer patients. Results: 46,126 patients diagnosed between 2007 and 2009 were identified; 20% had dementia by either of the algorithms, and 9% of the dementia cases were identified through Part D data. The two algorithms showed moderate agreement (k > 0.49, p = 0.007). After surgery, patients with dementia by the combined algorithm were less likely to receive chemotherapy (OR = 0.641, 95% CI: 0.597-0.688). Those with dementia identified by Part D data were even less likely to receive chemotherapy than those identified by the claims algorithm (OR = 0.617, 95% CI: 0.466-0.816 for medication; OR = 0.767, 95% CI: 0.684-0.860 for claims). A similar pattern was detected when restricting to stage III/IV patients (OR = 0.667, 95% CI: 0.457-0.973). Conclusions: Part D data increase the sensitivity for identifying dementia cases in SEER-Medicare. Patients with dementia are significantly less likely to receive postoperative chemotherapy; thus, reduced postoperative colon cancer therapy among patients with dementia may contribute to higher cancer-related mortality.
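
The medication-based algorithm is simple to sketch: flag any patient with a Part D prescription for one of the five drugs, then measure agreement with the claims-based flag. The records and claims flags below are invented, and `cohen_kappa_score` stands in for the kappa computation.

```python
import pandas as pd
from sklearn.metrics import cohen_kappa_score

# The five FDA-approved dementia drugs named in the abstract.
DEMENTIA_DRUGS = {"donepezil", "galantamine", "memantine",
                  "rivastigmine", "tacrine"}

# Hypothetical Part D prescription records (patient_id, drug name).
rx = pd.DataFrame({
    "patient_id": [1, 1, 2, 3, 4],
    "drug": ["donepezil", "atorvastatin", "memantine",
             "metformin", "rivastigmine"],
})

# Medication-based algorithm: any qualifying prescription -> flag.
med_flag = (rx.assign(hit=rx["drug"].str.lower().isin(DEMENTIA_DRUGS))
              .groupby("patient_id")["hit"].any())

# Hypothetical claims-based flags for the same patients.
claims_flag = pd.Series({1: True, 2: False, 3: False, 4: True})

# Agreement between the two diagnostic algorithms, and the combined flag.
kappa = cohen_kappa_score(med_flag.sort_index(), claims_flag.sort_index())
combined = med_flag | claims_flag
print(f"kappa = {kappa:.2f}")
```
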
APA, Harvard, Vancouver, ISO, and other styles
40

Xu, Chaofan, Xuesong Tian, Shuai Hou, et al. "22‐1: A Novel Gamma Prediction Algorithm for FDC Region of AMOLED Panel Based on CNN Model." SID Symposium Digest of Technical Papers 54, no. 1 (2023): 287–90. http://dx.doi.org/10.1002/sdtp.16548.

Full text
Abstract:
To ensure the uniformity of panel image quality, it is necessary to gamma-correct both the normal and FDC (full display camera) regions of the panel, but the gamma TT (tact time) is particularly long; the FDC region alone takes about 110 seconds, lowering production efficiency. The current FDC correction algorithm uses the gamma value of the previous panel as the initial value, with a large average error of about 60. To reduce the FDC gamma TT, this paper proposes a new CNN-based algorithm that takes the normal-region gamma as input and predicts the FDC gamma. The predicted result is taken as the starting value of the gamma in the FDC region; the starting-value error is small and the FDC gamma TT is greatly shortened. Experimental results show that, compared with the existing algorithm, the accuracy of the initial value for the FDC region is improved tenfold and the FDC gamma TT is reduced by a factor of about 3.5.
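
A toy version of the prediction setup, assuming the gamma curve is a short 1D sequence of values per gray band (`N_BANDS` is invented) and using a deliberately small PyTorch CNN; the paper's network architecture and data pipeline are not given in the abstract, so everything here is illustrative.

```python
import torch
import torch.nn as nn

N_BANDS = 16                                  # gray bands per gamma curve (assumed)

# Tiny 1D CNN: normal-region gamma curve in, FDC-region gamma curve out.
model = nn.Sequential(
    nn.Conv1d(1, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv1d(8, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(8 * N_BANDS, N_BANDS),
)

opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Dummy training pairs (normal gamma -> FDC gamma); real data would come
# from previously corrected panels.
x = torch.randn(256, 1, N_BANDS)
y = torch.randn(256, N_BANDS)

for epoch in range(100):
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()

# At inference, model(normal_gamma) supplies the starting value for the
# FDC correction loop, cutting its iteration count and hence the tact time.
```
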
APA, Harvard, Vancouver, ISO, and other styles
41

Vucetic, Dejan, and Slobodan P. Simonovic. "Evaluation and application of Fuzzy Differential Evolution approach for benchmark optimization and reservoir operation problems." Journal of Hydroinformatics 15, no. 4 (2013): 1456–73. http://dx.doi.org/10.2166/hydro.2013.118.

Full text
Abstract:
The differential evolution (DE) algorithm is a powerful search technique for solving global optimization problems over continuous space. The search initialization for this algorithm is handled stochastically and therefore does not adequately capture vague preliminary knowledge. This paper proposes a novel Fuzzy Differential Evolution (FDE) algorithm, as an alternative approach, where vague information on the search space can be represented and used to deliver a more focused search. The proposed FDE algorithm utilizes (a) fuzzy numbers to represent vague knowledge and (b) random alpha-cut levels for the search initialization. The alpha-cut intervals created during initialization are used for fuzzy-interval-based mutation in successive search iterations. Four benchmark functions are used to demonstrate the performance of the new FDE and its practical value. Additionally, the application of the FDE algorithm is illustrated through a reservoir operation case study. The new algorithm shows faster convergence on most of these functions.
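
The distinguishing step of FDE, the random alpha-cut initialization, is easy to sketch; the triangular fuzzy numbers, population size, and F below are our assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

def alpha_cut(tri, alpha):
    """Alpha-cut interval of a triangular fuzzy number (a, b, c)."""
    a, b, c = tri
    return a + alpha * (b - a), c - alpha * (c - b)

# Vague prior knowledge about each decision variable, expressed as
# triangular fuzzy numbers (hypothetical values).
fuzzy_bounds = [(0.0, 2.0, 5.0), (-3.0, 0.0, 3.0)]

# FDE-style initialization: each individual gets its own random
# alpha-cut and is sampled uniformly inside the resulting interval.
NP, D = 20, len(fuzzy_bounds)
pop = np.empty((NP, D))
for i in range(NP):
    for j, tri in enumerate(fuzzy_bounds):
        lo, hi = alpha_cut(tri, rng.random())
        pop[i, j] = rng.uniform(lo, hi)

# From here the usual DE/rand/1 mutation applies, e.g.:
F = 0.8
r1, r2, r3 = rng.choice(NP, size=3, replace=False)
mutant = pop[r1] + F * (pop[r2] - pop[r3])
```
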
APA, Harvard, Vancouver, ISO, and other styles
42

Kalantar, Bahareh, Naonori Ueda, Vahideh Saeidi, Kourosh Ahmadi, Alfian Abdul Halin, and Farzin Shabani. "Landslide Susceptibility Mapping: Machine and Ensemble Learning Based on Remote Sensing Big Data." Remote Sensing 12, no. 11 (2020): 1737. http://dx.doi.org/10.3390/rs12111737.

Full text
Abstract:
Predicting landslide occurrences can be difficult. However, failure to do so can be catastrophic, causing unwanted tragedies such as property damage, community displacement, and human casualties. Research into landslide susceptibility mapping (LSM) attempts to alleviate such catastrophes through the identification of landslide-prone areas. Computational modelling techniques have been successful in related disaster scenarios, which motivates this work to explore such modelling for LSM. In this research, the potential of supervised machine learning and ensemble learning is investigated. Firstly, the Flexible Discriminant Analysis (FDA) supervised learning algorithm is trained for LSM and compared against other algorithms that have been widely used for the same purpose, namely Generalized Logistic Models (GLM), Boosted Regression Trees (BRT or GBM), and Random Forest (RF). Next, an ensemble model consisting of all four algorithms is implemented to examine possible performance improvements. The dataset used to train and test all the algorithms consists of a landslide inventory map of 227 landslide locations, from which 13 conditioning factors are extracted for use in the models. Experimental evaluations are based on the True Skill Statistic (TSS), the Receiver Operating Characteristic (ROC) curve, and the kappa index. The best TSS (0.6986), ROC (0.904), and kappa (0.6915) were obtained by the ensemble model. FDA on its own seems effective at modelling landslide susceptibility from multiple data sources, with performance comparable to GLM, but it slightly underperforms GBM (BRT) and RF. RF appears the most capable of GBM, GLM, and FDA when dealing with all conditioning factors.
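
A hedged sketch of a TSS-weighted ensemble over the four algorithm families, using scikit-learn stand-ins: scikit-learn ships no Flexible Discriminant Analysis, so `LinearDiscriminantAnalysis` substitutes for FDA, and `GradientBoostingClassifier` for BRT/GBM; the data are synthetic placeholders for the 13 conditioning factors.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split

def tss(y_true, y_pred):
    """True Skill Statistic = sensitivity + specificity - 1."""
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    return tp / (tp + fn) + tn / (tn + fp) - 1

# Synthetic stand-in for 13 conditioning factors at landslide/non-landslide points.
X, y = make_classification(n_samples=454, n_features=13, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "GLM": LogisticRegression(max_iter=1000),
    "GBM": GradientBoostingClassifier(),
    "RF": RandomForestClassifier(),
    "FDA(~LDA)": LinearDiscriminantAnalysis(),   # stand-in for FDA
}

scores, probs = {}, {}
for name, m in models.items():
    m.fit(Xtr, ytr)
    scores[name] = max(tss(yte, m.predict(Xte)), 0.0)
    probs[name] = m.predict_proba(Xte)[:, 1]

# TSS-weighted ensemble of the susceptibility probabilities.
w = np.array(list(scores.values())); w /= w.sum()
ensemble = sum(wi * probs[n] for wi, n in zip(w, models))
```
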
APA, Harvard, Vancouver, ISO, and other styles
43

Gascuel, Didier. "Une méthode simple d'ajustement des clés taille/âge : application aux captures d'albacores (Thunnus albacares) de l'Atlantique Est." Canadian Journal of Fisheries and Aquatic Sciences 51, no. 3 (1994): 723–33. http://dx.doi.org/10.1139/f94-072.

Full text
Abstract:
An adjustment method for age-length keys is presented that incorporates yearly variability in cohort abundances. The method is based on two models (one for growth and the other for the standard deviations of length at age) and involves an iterative algorithm. The algorithm converges rapidly to stable results that yield, under some conditions, maximum likelihood estimates; it is also used to estimate the parameters of the length-standard-deviation model. The method is applied to yellowfin tuna (Thunnus albacares) catches in the eastern Atlantic from 1975 to 1989. Monthly keys, adjusted to catches from the entire fishery, show high annual variability. These keys are used for length-age conversion by gear type and 5-degree geographic squares. Compared with previous estimates based on the "slicing" method, catches by age are noticeably corrected and show greater temporal variability. The results show low sensitivity to the parameter estimates of the length-standard-deviation model. Compared with classical likelihood-based methods for separating mixtures of normal distributions, the adjustment method fits particularly well when growth is known, offers the advantages of simplicity and adaptability, and, importantly, allows the use of particular growth models.
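
When growth and the length standard deviations are treated as known, the iterative adjustment the abstract describes behaves like the classical EM update for mixture proportions. A minimal sketch, with all inputs hypothetical (the paper's full method also models the standard deviations as a function of age):

```python
import numpy as np
from scipy.stats import norm

def fit_age_length_key(lengths, counts, mu, sd, n_iter=200):
    """Iteratively estimate age proportions from a length-frequency
    sample when mean length (mu) and length standard deviation (sd)
    per age are treated as known.
    lengths: (L,) bin midpoints, counts: (L,) fish per bin,
    mu/sd: (A,) per-age parameters. Returns the age proportions and
    the age-length key P(age | length)."""
    n_ages = len(mu)
    p = np.full(n_ages, 1.0 / n_ages)            # initial age proportions
    # Density of each length bin under each age's length distribution.
    f = norm.pdf(lengths[:, None], loc=mu[None, :], scale=sd[None, :])
    for _ in range(n_iter):
        post = f * p                              # unnormalized P(age | length)
        post /= post.sum(axis=1, keepdims=True)
        p = (counts[:, None] * post).sum(axis=0)  # expected fish per age
        p /= p.sum()
    key = f * p
    return p, key / key.sum(axis=1, keepdims=True)
```
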
APA, Harvard, Vancouver, ISO, and other styles
44

Abdollah, M. A. F., R. Scoccia, and M. Aprille. "Data driven fault detection and diagnostics for hydronic and monitoring systems in a residential building." Journal of Physics: Conference Series 2385, no. 1 (2022): 012012. http://dx.doi.org/10.1088/1742-6596/2385/1/012012.

Full text
Abstract:
Buildings are responsible for 40% of global energy use and up to 30% of total CO2 emissions. The drive to reduce the environmental impact of the built environment has been the catalyst for the increasing installation of meters and sensors for energy use and environmental monitoring. This is key to cost-effective Fault Detection and Diagnostics (FDD), which supports enhanced thermal comfort for occupants and reductions in energy use. Most FDD research in buildings has focused on commercial buildings, owing to their higher consumption and saving potential, while limited work has been directed towards residential buildings. This paper investigates the use of two supervised machine learning algorithms, Random Forest and K-nearest neighbours, to detect and diagnose twelve faults in both the monitoring system of the indoor/outdoor conditions and the hydronic circuit of an apartment located in Milan, using minimal features that are easy to access and inexpensive to monitor, to cut both computational and financial costs. The thermal zones are conditioned by an electric air-to-water heat pump connected to fan coils for cooling and a radiant floor for heating. The faults include valve leakage, faulty temperature sensors, and an inadequate recirculation pump flow rate, and were modelled in a detailed Modelica-based model of the apartment. After tuning the hyper-parameters of the algorithms, the Receiver Operating Characteristic curves for each fault were compared to select the optimal algorithm. Random Forest showed the highest accuracy, almost 89% across the twelve faults. Generalization of the trained algorithms across different weather conditions was tested, but the results were not promising.
APA, Harvard, Vancouver, ISO, and other styles
45

Si, Lu, Weizhang Xu, Xinle Yu, and Hang Yin. "An Improved Orthogonal Matching Pursuit Algorithm for CS-Based Channel Estimation." Sensors 23, no. 23 (2023): 9509. http://dx.doi.org/10.3390/s23239509.

Full text
Abstract:
Wireless broadband transmission channels usually have time-domain-sparse properties, and reconstructing these channels with a greedy-search-based orthogonal matching pursuit (OMP) algorithm can effectively improve channel estimation performance while shortening the reference signal. In this research, improved OMP and SOMP algorithms for compressed-sensing (CS)-based channel estimation are proposed for single-carrier frequency-domain equalization (SC-FDE) systems; in contrast to conventional algorithms, they calculate the path gains only after obtaining the path delays and updating the observation matrices. Because the channel path gains are calculated from longer observation vectors, the Cramér-Rao lower bound (CRLB) is lowered, yielding better channel estimation performance and further enhancing the reliability of the communication system. The developed method can also be applied to time-domain-synchronous OFDM (TDS-OFDM) systems and to the improvement of other matching pursuit algorithms.
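
A sketch of the two-stage idea, assuming hypothetical measurement matrices for a short and an extended observation window: standard OMP finds the support (path delays), then the gains are re-estimated by least squares against the longer observation.

```python
import numpy as np

def omp(A, y, k):
    """Standard OMP: greedily pick k atoms (path delays) and
    least-squares fit their coefficients (path gains)."""
    r, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.conj().T @ r))))
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        r = y - A[:, support] @ x_s
    return support, x_s

def improved_omp(A_short, y_short, A_long, y_long, k):
    """Two-stage sketch of the paper's idea: identify the delays on a
    short observation, then recompute the gains by least squares
    against a longer observation (lower CRLB on the gain estimates).
    A_short/A_long and y_short/y_long are hypothetical measurement
    matrices/vectors for the two observation windows."""
    support, _ = omp(A_short, y_short, k)          # path delays
    gains, *_ = np.linalg.lstsq(A_long[:, support], y_long, rcond=None)
    return support, gains
```
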
APA, Harvard, Vancouver, ISO, and other styles
46

Mugo, Robinson, and Sei-Ichi Saitoh. "Ensemble Modelling of Skipjack Tuna (Katsuwonus pelamis) Habitats in the Western North Pacific Using Satellite Remotely Sensed Data; a Comparative Analysis Using Machine-Learning Models." Remote Sensing 12, no. 16 (2020): 2591. http://dx.doi.org/10.3390/rs12162591.

Full text
Abstract:
To examine skipjack tuna's habitat utilization in the western North Pacific (WNP), we used an ensemble modelling approach that applied a fisher-derived presence-only dataset and three satellite remote-sensing predictor variables. The skipjack tuna data were compiled from daily point fishing data into monthly composites and re-gridded to a quarter-degree resolution to match the environmental predictor variables: sea surface temperature (SST), sea surface chlorophyll-a (SSC) and sea surface height anomalies (SSHA), which were also processed at quarter-degree spatial resolution. Using the sdm package operated in RStudio, we constructed habitat models over a 9-month period, from March to November 2004, using 17 algorithms, with a 70:30 split of training and test data and with bootstrapping and 10 runs as parameter settings. Model performance was evaluated using the area under the curve (AUC) of the receiver operating characteristic (ROC), the point biserial correlation coefficient (COR), the true skill statistic (TSS) and Cohen's kappa (k). We analyzed the response curves for each predictor variable per algorithm, the variable importance information and the ROC plots. Ensemble predictions of habitats were weighted with the TSS metric. Performance varied across algorithms: the Support Vector Machines (SVM), Boosted Regression Trees (BRT), Random Forests (RF), Multivariate Adaptive Regression Splines (MARS), Generalized Additive Models (GAM), Classification and Regression Trees (CART), Multi-Layer Perceptron (MLP), Recursive Partitioning and Regression Trees (RPART), and Maximum Entropy (MAXENT) showed consistently higher performance than the others, while the Flexible Discriminant Analysis (FDA), Mixture Discriminant Analysis (MDA), Bioclim (BIOC), Domain (DOM), Maxlike (MAXL), Mahalanobis Distance (MAHA) and Radial Basis Function (RBF) performed worse. We also found inter-algorithm variations in predictor variable responses. We conclude that the multi-algorithm modelling approach enabled us to assess the variability in algorithm performance, providing a data-driven basis for building the ensemble model. Given the inter-algorithm variations observed, the ensemble prediction maps indicated a better habitat utilization map of skipjack tuna than would have been achieved by any single algorithm.
APA, Harvard, Vancouver, ISO, and other styles
47

Shadra, Chhaya, James Lin Chen, Cheryl D. Cho-Phan, Aradhana Ghosh, and Jonathan Hirsch. "An algorithmic approach to deriving line of therapy in a real-world data set for non-small cell lung cancer (NSCLC)." Journal of Clinical Oncology 37, no. 15_suppl (2019): e18099-e18099. http://dx.doi.org/10.1200/jco.2019.37.15_suppl.e18099.

Full text
Abstract:
e18099 Background: Real-world data (RWD) is being used for outcomes research and regulatory submissions. A key variable needed to understand treatment outcomes is line of therapy (LoT). However, LoT is generally not captured in RWD sources such as electronic health records (EHR) or claims data, and is typically derived using manual abstraction. To determine whether an automated approach to LoT derivation is possible, we created an algorithm and applied it to patients (pts) in the Syapse Learning Health Network. Methods: We selected confirmed NSCLC pts from 4 health systems in the RWD set, verifying diagnosis using ICD-9/10/O3 topography and morphology codes. We analyzed the EHR-derived medication list using a regimen-independent algorithm that classified antineoplastic drugs (AD), as defined by ATC L01, into LoT. Within each LoT, we compared the top 80% of AD prescribed (by volume of pts) to the LoT indicated on each drug's FDA label. We then used descriptive statistical summaries to outline the alignment between the automated algorithmic results and the indicated usage within that LoT. Results: In a set of 10,842 NSCLC pts, 106 unique AD were prescribed in the first line as identified by our algorithm, and 13 drugs were prescribed as first line for 80% of the pts. Of those, 9 (69%) are indicated for first line, 3 are not indicated for NSCLC, and 1 is indicated for a subsequent NSCLC line, per FDA labels. 82 unique AD were prescribed in the second line as identified by our algorithm, and 15 drugs were prescribed as second line for 80% of the pts. Of those, 11 (73%) are indicated for treatment/continuation therapy for recurrent, advanced or metastatic disease, 3 are not indicated for NSCLC, and 1 is indicated for first-line NSCLC per FDA labels. 36 unique AD were prescribed in subsequent lines as identified by our algorithm, and 18 drugs were prescribed as subsequent line for 80% of the pts. Of those, 12 (67%) are indicated for treatment of recurrent, advanced or metastatic disease or subsequent systemic therapy, 5 are not indicated for NSCLC, and 1 is indicated for first line per FDA labels. Conclusions: An automated algorithmic approach may be a viable solution for scalably calculating LoT in RWD sets. A deeper analysis using statistical sensitivity and specificity assessment of such algorithms is needed to validate the potential of this approach.
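
A minimal, hypothetical sketch of a regimen-independent LoT derivation, using an invented 28-day grouping window and invented medication orders; the authors' actual rules are not given in the abstract.

```python
import pandas as pd

# Hypothetical EHR-derived antineoplastic medication orders for one patient.
meds = pd.DataFrame({
    "drug": ["carboplatin", "pemetrexed", "docetaxel", "nivolumab"],
    "start": pd.to_datetime(
        ["2017-01-05", "2017-01-05", "2017-06-20", "2018-02-11"]),
}).sort_values("start")

def assign_lines(meds, window_days=28):
    """Regimen-independent line-of-therapy sketch: drugs started within
    `window_days` of the line's first drug belong to the same line; a
    later new drug starts the next line. Real algorithms also handle
    discontinuations, substitutions, and maintenance therapy, which
    are ignored here."""
    lines, line_no, line_start = [], 0, None
    for _, row in meds.iterrows():
        if line_start is None or \
           (row["start"] - line_start).days > window_days:
            line_no += 1
            line_start = row["start"]
        lines.append(line_no)
    return meds.assign(line=lines)

print(assign_lines(meds))
```
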
APA, Harvard, Vancouver, ISO, and other styles
48

Günay Yılmaz, Asuman, and Samoua Alsamoua. "Improved Weighted Chimp Optimization Algorithm Based on Fitness–Distance Balance for Multilevel Thresholding Image Segmentation." Symmetry 17, no. 7 (2025): 1066. https://doi.org/10.3390/sym17071066.

Full text
Abstract:
Multilevel thresholding image segmentation plays a crucial role in various image processing applications. However, achieving optimal segmentation results often poses challenges due to the intricate nature of images. In this study, a novel metaheuristic search algorithm named Weighted Chimp Optimization Algorithm with Fitness–Distance Balance (WChOA-FDB) is developed. The algorithm integrates the concept of Fitness–Distance Balance (FDB) to ensure balanced exploration and exploitation of the solution space, thus enhancing convergence speed and solution quality. Moreover, WChOA-FDB incorporates weighted Chimp Optimization Algorithm techniques to further improve its performance in handling multilevel thresholding challenges. Experimental studies were conducted to test and verify the developed method. The algorithm’s performance was evaluated using 10 benchmark functions (IEEE_CEC_2020) of different types and complexity levels. The search performance of the algorithm was analyzed using the Friedman and Wilcoxon statistical test methods. According to the analysis results, the WChOA-FDB variants consistently outperform the base algorithm across all tested dimensions, with Friedman score improvements ranging from 17.3% (Case-6) to 25.2% (Case-4), indicating that the FDB methodology provides significant optimization enhancement regardless of problem complexity. Additionally, experimental evaluations conducted on color image segmentation tasks demonstrate the effectiveness of the proposed algorithm in achieving accurate and efficient segmentation results. The WChOA-FDB method demonstrates significant improvements in Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index (SSIM), and Feature Similarity Index (FSIM) metrics with average enhancements of 0.121348 dB, 0.012688, and 0.003676, respectively, across different threshold levels (m = 2 to 12), objective functions, and termination criteria.
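
The FDB selection rule itself is compact: score candidates by normalized fitness plus normalized distance to the incumbent best, and select the highest-scoring one. A sketch for minimization, with equal weighting assumed (the paper's exact weighting within WChOA is not reproduced):

```python
import numpy as np

def fdb_select(population, fitness):
    """Fitness-Distance Balance selection (minimization): score each
    candidate by normalized fitness plus normalized Euclidean distance
    to the current best, and return the highest-scoring candidate.
    population: (NP, D) array of solutions, fitness: (NP,) values."""
    fitness = np.asarray(fitness, dtype=float)
    best = population[np.argmin(fitness)]
    dist = np.linalg.norm(population - best, axis=1)
    # Normalize both criteria to [0, 1] (lower fitness is better).
    norm_f = (fitness.max() - fitness) / (np.ptp(fitness) + 1e-12)
    norm_d = dist / (dist.max() + 1e-12)
    score = 0.5 * norm_f + 0.5 * norm_d      # equal weights assumed
    return population[np.argmax(score)]
```

The selected candidate then feeds the optimizer's position-update equations in place of a purely fitness-based choice, which is what balances exploration against exploitation.
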
APA, Harvard, Vancouver, ISO, and other styles
49

Cha, Jihyoung, Sangho Ko, and Soon-Young Park. "Particle-Filter-Based Fault Diagnosis for the Startup Process of an Open-Cycle Liquid-Propellant Rocket Engine." Sensors 24, no. 9 (2024): 2798. http://dx.doi.org/10.3390/s24092798.

Full text
Abstract:
This study introduces a fault diagnosis algorithm based on particle filtering for open-cycle liquid-propellant rocket engines (LPREs). The algorithm is a model-based method for the startup process, which accounts for more than 30% of engine failures. Like the previous fault detection and diagnosis (FDD) algorithm for the startup process, the algorithm in this study is composed of a nonlinear filter to generate residuals, a residual analysis, and a multiple-model (MM) approach to detect and diagnose faults from the residuals. In contrast to the previous study, this study makes use of the modified cumulative sum (CUSUM) algorithm, widely used in change-detection monitoring, and a particle filter (PF), which is theoretically the most accurate nonlinear filter. The algorithm is verified numerically using the CUSUM and MM methods, and the FDD algorithm is then compared with an algorithm from a previous study using a Monte Carlo simulation. Through a comparative analysis of algorithmic performance, this study demonstrates that the PF-based FDD algorithm outperforms the algorithm based on other nonlinear filters.
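
The generic structure, a bootstrap particle filter producing residuals that feed a one-sided CUSUM detector, can be sketched as follows; the toy scalar model, noise scales, drift, and threshold are all illustrative, not the paper's engine model.

```python
import numpy as np

rng = np.random.default_rng(3)

def pf_cusum_fdd(measurements, f, h, n_particles=500,
                 q=0.05, r=0.1, drift=4.0, threshold=30.0):
    """Bootstrap particle filter residual generation feeding a
    one-sided CUSUM change detector. f: state transition, h:
    measurement map; q/r: process and measurement noise scales."""
    x = rng.normal(0.0, 0.1, n_particles)            # initial particles
    g, alarms = 0.0, []
    for k, z in enumerate(measurements):
        x = f(x) + rng.normal(0.0, q, n_particles)   # predict
        resid = z - h(x).mean()                      # innovation residual
        w = np.exp(-0.5 * ((z - h(x)) / r) ** 2)     # likelihood weights
        w /= w.sum()
        x = x[rng.choice(n_particles, n_particles, p=w)]   # resample
        # One-sided CUSUM on the squared normalized residual.
        g = max(0.0, g + (resid / r) ** 2 - drift)
        if g > threshold:
            alarms.append(k)
    return alarms

# Toy scalar system: healthy data, then an injected sensor bias fault.
f = lambda x: 0.9 * x
h = lambda x: x
z = rng.normal(0.0, 0.1, 200)
z[120:] += 0.8                                       # fault at k = 120
print(pf_cusum_fdd(z, f, h))                         # alarms near k = 120
```
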
APA, Harvard, Vancouver, ISO, and other styles
50

Takeda, Akiko, Hiroyuki Mitsugi, and Takafumi Kanamori. "A Unified Classification Model Based on Robust Optimization." Neural Computation 25, no. 3 (2013): 759–804. http://dx.doi.org/10.1162/neco_a_00412.

Full text
Abstract:
A wide variety of machine learning algorithms such as the support vector machine (SVM), minimax probability machine (MPM), and Fisher discriminant analysis (FDA) exist for binary classification. The purpose of this letter is to provide a unified classification model that includes these models through a robust optimization approach. This unified model has several benefits. One is that the extensions and improvements intended for SVMs become applicable to MPM and FDA, and vice versa. For example, we can obtain nonconvex variants of MPM and FDA by mimicking Perez-Cruz, Weston, Hermann, and Schölkopf's (2003) extension from convex ν-SVM to nonconvex Eν-SVM. Another benefit is to provide theoretical results concerning these learning methods at once by dealing with the unified model. We give a statistical interpretation of the unified classification model and prove that the model is a good approximation for the worst-case minimization of an expected loss with respect to the uncertain probability distribution. We also propose a nonconvex optimization algorithm that can be applied to nonconvex variants of existing learning methods and show promising numerical results.
APA, Harvard, Vancouver, ISO, and other styles