
Journal articles on the topic 'Semi Average Methods'


1

Barbalho, Fernando D., Gabriela F. N. da Silva, and Klebber T. M. Formiga. "Average Rainfall Estimation: Methods Performance Comparison in the Brazilian Semi-Arid." Journal of Water Resource and Protection 6, no. 2 (2014): 97–103. http://dx.doi.org/10.4236/jwarp.2014.62014.

2

Machfiroh, Ines Saraswati, Widiya Astuti Alam Sur, and Robby Tri Pangestu. "TREND SEMI AVERAGE AND LEAST SQUARE IN FORECASTING YAMAHA MOTORCYCLE SALES." BAREKENG: Jurnal Ilmu Matematika dan Terapan 16, no. 1 (2022): 343–54. http://dx.doi.org/10.30598/barekengvol16iss1pp341-352.

Abstract:
This study compared the Semi Average and Least Square methods for determining the sales trend of Yamaha motorcycles, in order to identify the better method for predicting motorcycle sales at CV Surya Prima Pelaihari. Mean Absolute Percentage Error (MAPE) was used to measure the accuracy of each method. The Semi Average method yielded a MAPE of 43.96%, while the Least Square method yielded a MAPE of 31.89%. The comparison shows that the Least Square method provides better predictions because of its lower MAPE. Therefore, the Least Square method was used to predict sales at CV Surya Prima Pelaihari, giving more accurate output than the Semi Average method.
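The comparison criterion here, MAPE, is simple to compute. A minimal sketch in Python, using invented sales figures rather than the study's data:

```python
# Minimal sketch of a MAPE comparison between two forecasting methods.
# The sales figures below are illustrative only, not the study's data.

def mape(actual, forecast):
    """Mean Absolute Percentage Error, in percent."""
    return 100.0 * sum(abs((a - f) / a) for a, f in zip(actual, forecast)) / len(actual)

# Hypothetical monthly sales and two competing sets of forecasts
actual       = [120, 135, 128, 150, 160, 155]
semi_average = [110, 125, 140, 138, 150, 170]
least_square = [118, 130, 133, 147, 158, 160]

print(f"Semi Average MAPE: {mape(actual, semi_average):.2f}%")
print(f"Least Square MAPE: {mape(actual, least_square):.2f}%")
# The method with the lower MAPE is preferred for forecasting.
```

As in the study, whichever method produces the lower MAPE on held-out data is the one selected for prediction.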
3

Ni, Nyoman Supuwiningsih. "DISTRIBUTION TREN OF SENIOR HIGH SCHOOL AS ORIGIN SCHOOL OF COLLEGE STUDENT STIKOM BALI BASE GIS." International Journal of Engineering Technologies and Management Research 6, no. 12 (2020): 78–88. https://doi.org/10.5281/zenodo.3604621.

Abstract:
GIS (Geographic Information System) is an information system that can store and integrate spatial and non-spatial data; here it is used for interactive mapping of the school origin of STIKOM Bali students in Denpasar. Until now, the distribution of the high schools / vocational schools / equivalent schools of origin of STIKOM Bali students had never been mapped, either to reveal trends of increase or decrease in the number of students by school of origin over 2013-2018 or to predict future student numbers by school of origin in the city of Denpasar. This study aims to provide information to the management of STIKOM Bali on distribution trends in the interest of prospective students in continuing to tertiary education, especially at STIKOM Bali. The research combines statistics with the concept of GIS. The number of STIKOM Bali students by school of origin in Denpasar City is analysed statistically and predicted for the next 3 years using trend analysis with the semi-average method (Semi Average Method), as material for evaluating the performance of STIKOM Bali management in improving campus promotion. This method builds a trend by grouping the data into two halves, computing the arithmetic average of each half, calculating the difference between the averages, deriving the rate of change per period, and forming the equation for the subsequent trend. The results of these calculations are mapped using GIS concepts, with ArcView as the software that integrates the spatial and non-spatial data.
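The semi-average steps described in this abstract (group the data into two halves, average each half, take the difference, derive the change per period, and form the trend equation) can be sketched as follows; the yearly counts are hypothetical, not the study's data, and an even-length series is assumed:

```python
# Sketch of the semi-average trend method, assuming an even number of
# yearly observations. The enrolment counts are hypothetical.

def semi_average_trend(values):
    """Return (intercept, slope) of a semi-average trend line.

    The series is split into two equal halves; the trend line passes
    through the mean of each half, placed at that half's midpoint period.
    """
    n = len(values)
    half = n // 2
    avg1 = sum(values[:half]) / half          # average of first half
    avg2 = sum(values[half:]) / half          # average of second half
    mid1 = (half - 1) / 2                     # midpoint index of first half
    mid2 = half + (half - 1) / 2              # midpoint index of second half
    slope = (avg2 - avg1) / (mid2 - mid1)     # change per period
    intercept = avg1 - slope * mid1           # trend value at period 0
    return intercept, slope

students = [310, 330, 345, 360, 380, 400]     # 2013-2018, hypothetical
a, b = semi_average_trend(students)
forecast_next = a + b * 6                     # period index 6 = next year
print(round(forecast_next, 1))
```

Forecasts for later years come from evaluating the same line at larger period indices, which is how a 3-year prediction like the study's would be produced.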
4

Wang, Jiaxin, Chris H. Q. Ding, Sibao Chen, Chenggang He, and Bin Luo. "Semi-Supervised Remote Sensing Image Semantic Segmentation via Consistency Regularization and Average Update of Pseudo-Label." Remote Sensing 12, no. 21 (2020): 3603. http://dx.doi.org/10.3390/rs12213603.

Abstract:
Image segmentation has made great progress in recent years, but the annotation required for image segmentation is usually expensive, especially for remote sensing images. To solve this problem, we explore semi-supervised learning methods and appropriately utilize a large amount of unlabeled data to improve the performance of remote sensing image segmentation. This paper proposes a method for remote sensing image segmentation based on semi-supervised learning. We first design a Consistency Regularization (CR) training method for semi-supervised training, then employ the newly learned model for Average Update of Pseudo-label (AUP), and finally combine pseudo labels and strong labels to train the semantic segmentation network. We demonstrate the effectiveness of the proposed method on three remote sensing datasets, achieving better performance without more labeled data. Extensive experiments show that our semi-supervised method can learn latent information from the unlabeled data to improve segmentation performance.
5

Huang, Shuhui, Baohong Zhu, Yongan Zhang, Hongwei Liu, Shuaishuai Wu, and Haofeng Xie. "Microstructure Comparison for AlSn20Cu Antifriction Alloys Prepared by Semi-Continuous Casting, Semi-Solid Die Casting, and Spray Forming." Metals 12, no. 10 (2022): 1552. http://dx.doi.org/10.3390/met12101552.

Abstract:
Antifriction alloys such as AlSn20Cu are key material options for sliding bearings used in machinery. A uniformly distributed, near-equiaxed tin phase is generally considered ideal for an AlSn20Cu antifriction alloy, although these characteristics vary with the fabrication method. In this study, to analyze the variation of the microstructure with the fabrication method, AlSn20Cu alloys are prepared by three methods: semi-continuous casting, semi-solid die casting, and spray forming. Bearing blanks are subsequently prepared from the fabricated alloys using different processes. Morphological information, such as the total area ratio and average particle diameter of the tin phase, is quantitatively characterized. For the tin phase of the AlSn20Cu alloy, the deformation and annealing involved in semi-continuous casting lead to a prolate particle shape. The average particle diameter of the tin phase is 12.6 µm, and the overall distribution state is related to the deformation direction. The tin phase of AlSn20Cu alloys prepared by semi-solid die casting presents both nearly spherical and strip shapes, with an average particle diameter of 9.6 µm. The tin phase of AlSn20Cu alloys prepared by spray forming and blocking hot extrusion presents a nearly equiaxed shape, with an average particle diameter of 6.2 µm. These results indicate that, of the three preparation methods analyzed in this study, semi-solid die casting provides the shortest process flow time, whereas a finer and more uniform tin-phase structure may be obtained using the spray-forming process. The semi-solid die casting method presents the greatest potential for industrial application and is therefore a promising candidate for further optimization.
6

Ahmed, Huda Yahya, and Munaf Yousif Hmood. "Comparison of Some Semi-parametric Methods in Partial Linear Single-Index Model." Journal of Economics and Administrative Sciences 27, no. 130 (2021): 170–84. http://dx.doi.org/10.33095/jeas.v27i130.2207.

Abstract:
The research presents a comparative simulation study of some semi-parametric estimation methods for the partial linear single-index model. Two approaches to estimating this model are considered: a two-stage procedure and MADE. Simulations were used to study the finite-sample performance of the estimation methods under different single-index models, error variances, and sample sizes, with the mean average squared error used as the comparison criterion between methods. The results showed a preference for the two-stage procedure in all the cases considered.
7

Sanqoor, Afra Naqib, Fatma Altamimi, Naeema Aljanahi, et al. "Comparative Study of Two Semi-automated Forensic DNA Extraction Methods." Arab Journal of Forensic Sciences and Forensic Medicine 5, no. 2 (2023): 180–90. http://dx.doi.org/10.26735/yskr7711.

Abstract:
Automation in forensic DNA analysis is crucial for analysts to reduce time, improve results, and decrease the risk of contamination. With the variety of commercially available automated DNA extraction systems comes the need for end-users to be informed of what these systems provide and what they might lack. Thus, this study aimed to evaluate the efficiency of two semi-automated DNA extraction systems used for forensic DNA analysis, the Automate Express™ and the Hamilton Microlab STAR™ system, across four parameters: reproducibility, stability, sensitivity, and contamination. Overall, the results indicated that both semi-automated systems performed similarly in providing robust and reproducible DNA results while maintaining good capability to overcome PCR inhibition with low risk of contamination. The two semi-automated systems showed higher DNA recovery than organic extraction using phenol-chloroform, by 22% for semen and 7% for blood samples. In addition, three sample types (blood, saliva, and semen) were tested to compare the two systems (total samples n=100). Overall, the data showed that the average DNA recovery for the Hamilton was higher than that of the Automate Express™ for the blood and semen sample types, indicating better performance of the Hamilton Microlab STAR™ in terms of recovery and sensitivity.
8

Wu, Lang, Dazhi Zhang, Boying Wu, and Xiong Meng. "Fifth-Order Mapped Semi-Lagrangian Weighted Essentially Nonoscillatory Methods Near Certain Smooth Extrema." Journal of Applied Mathematics 2014 (2014): 1–14. http://dx.doi.org/10.1155/2014/127624.

Abstract:
Fifth-order mapped semi-Lagrangian weighted essentially nonoscillatory (WENO) methods at certain smooth extrema are developed in this study. The schemes contain the mapped semi-Lagrangian finite volume (M-SL-FV) WENO 5 method and the mapped compact semi-Lagrangian finite difference (M-C-SL-FD) WENO 5 method. The weights in the more common scheme lose accuracy at certain smooth extrema. We introduce mapped weighting to handle the problem. In general, a cell average is applied to construct the M-SL-FV WENO 5 reconstruction, and the M-C-SL-FD WENO 5 interpolation scheme is proposed based on an interpolation approach. An accuracy test and numerical examples are used to demonstrate that the two schemes reduce the loss of accuracy and improve the ability to capture discontinuities.
9

Moraci, N., M. C. Mandaglio, and D. Ielo. "Analysis of the internal stability of granular soils using different methods." Canadian Geotechnical Journal 51, no. 9 (2014): 1063–72. http://dx.doi.org/10.1139/cgj-2014-0006.

Abstract:
The knowledge of the internal stability of granular soils is a key factor in the design of granular or geotextile filters. To evaluate the internal stability of granular soils, different semi-empirical methods are generally used. Nevertheless, these methods, applied to the same soil, can lead to different internal stability evaluations. In this paper, to evaluate the reliability of the semi-empirical methods available in the literature, the internal stability of different granular soils, reconstituted by the authors and by other researchers, has been studied by means of theoretical and experimental approaches. In particular, the theoretical analysis of the internal stability was performed using the Simulfiltr method, developed recently by the authors, while the experimental evaluation of the internal stability was carried out by means of long-term filtration tests. The comparison of the internal stability analyses performed by means of semi-empirical, theoretical, and experimental methods showed that the semi-empirical methods are not always reliable. Therefore, on the basis of these results, a new chart, in terms of the minimum slope Smin (%) of the grain-size distribution and of the average value of the finer percentage F, has been proposed to evaluate the internal stability of granular soils.
10

Chumachenko, Olena, and Kirill Riazanovskiy. "Semi-supervised Segmentation of Medical Images." Electronics and Control Systems 3, no. 81 (2024): 22–29. http://dx.doi.org/10.18372/1990-5548.81.18986.

Abstract:
This article is devoted to the development of a method (algorithm) for medical image segmentation based on semi-supervised learning. Semi-supervised learning methods are shown to have significant potential for improving medical image segmentation through effective use of unlabeled data. However, challenges remain in adapting these methods to the specific characteristics of medical images, such as high variability, class imbalance, and the presence of noise and artifacts. To overcome these difficulties, it is proposed to integrate several approaches (consistency regularization, pseudo-labeling, and an average teacher model) into a single framework. To increase the robustness and generalizability of the model across different imaging modalities, we include domain-specific data augmentations tailored to the unique characteristics and challenges of each modality. Large-scale experiments on magnetic resonance imaging, computed tomography, and optical coherence tomography datasets demonstrate that the proposed framework significantly outperforms fully supervised and individual semi-supervised learning methods, especially in scenarios with little labeled data.
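The "average teacher model" mentioned above usually refers to a mean-teacher scheme in which the teacher's weights track an exponential moving average (EMA) of the student's weights. A minimal sketch, with plain Python lists standing in for network parameters and an arbitrarily chosen decay:

```python
# Minimal sketch of the mean-teacher ("average teacher") update: after
# each optimizer step, the teacher's parameters move toward the
# student's via an exponential moving average (EMA). Plain lists stand
# in for real network weights; alpha=0.99 is an arbitrary choice.

def ema_update(teacher, student, alpha=0.99):
    """In-place EMA update: teacher <- alpha*teacher + (1-alpha)*student."""
    for i, (t, s) in enumerate(zip(teacher, student)):
        teacher[i] = alpha * t + (1 - alpha) * s

teacher = [0.0, 0.0]
student = [1.0, 2.0]
for _ in range(100):            # simulate 100 training steps
    ema_update(teacher, student)
# The teacher drifts smoothly toward the student, which is what gives
# consistency and pseudo-label targets their stability.
print(teacher)
```

In a real framework the teacher produces the pseudo-labels and consistency targets, while only the student receives gradient updates.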
11

Matthew, Saphia A. L., Refaya Rezwan, Yvonne Perrie, and F. Philipp Seib. "Volumetric Scalability of Microfluidic and Semi-Batch Silk Nanoprecipitation Methods." Molecules 27, no. 7 (2022): 2368. http://dx.doi.org/10.3390/molecules27072368.

Abstract:
Silk fibroin nanoprecipitation by organic desolvation in semi-batch and microfluidic formats provides promising bottom-up routes for manufacturing narrow polydispersity, spherical silk nanoparticles. The translation of silk nanoparticle production to pilot, clinical, and industrial scales can be aided through insight into the property drifts incited by nanoprecipitation scale-up and the identification of critical process parameters to maintain throughout scaling. Here, we report the reproducibility of silk nanoprecipitation on volumetric scale-up in low-shear, semi-batch systems and estimate the reproducibility of chip parallelization for volumetric scale-up in a high shear, staggered herringbone micromixer. We showed that silk precursor feeds processed in an unstirred semi-batch system (mixing time > 120 s) displayed significant changes in the nanoparticle physicochemical and crystalline properties following a 12-fold increase in volumetric scale between 1.8 and 21.9 mL while the physicochemical properties stayed constant following a further 6-fold increase in scale to 138 mL. The nanoparticle physicochemical properties showed greater reproducibility after a 6-fold volumetric scale-up when using lower mixing times of greater similarity (8.4 s and 29.4 s) with active stirring at 400 rpm, indicating that the bulk mixing time and average shear rate should be maintained during volumetric scale-up. Conversely, microfluidic manufacture showed high between-batch repeatability and between-chip reproducibility across four participants and microfluidic chips, thereby strengthening chip parallelization as a production strategy for silk nanoparticles at pilot, clinical, and industrial scales.
12

Muhammad Farhan, Renata De La Rosa Manik, Hana Raihanatul Jannah, and Lya Hulliyyatus Suadaa. "Comparison of Naive Bayes, K-Nearest Neighbor, and Support Vector Machine Classification Methods in Semi-Supervised Learning for Sentiment Analysis of Kereta Cepat Jakarta Bandung (KCJB)." Proceedings of The International Conference on Data Science and Official Statistics 2023, no. 1 (2023): 109–20. http://dx.doi.org/10.34123/icdsos.v2023i1.332.

Abstract:
Transportation technology has developed very rapidly in the 21st century; one example is the high-speed train. Currently, the Indonesian government is implementing the construction of the Kereta Cepat Jakarta-Bandung (KCJB) high-speed rail project in collaboration with China. The construction of this project has attracted various comments and opinions from the public on Twitter and other social media. This research aims to compare the Naïve Bayes, K-Nearest Neighbor (K-NN), and Support Vector Machine (SVM) classification methods for classifying sentiment in tweets about the high-speed train obtained by scraping Twitter. The comparison was carried out using semi-supervised learning, and the results showed that the semi-supervised SVM model had the best performance with an average accuracy of 86%, followed by the semi-supervised Naïve Bayes model and semi-supervised K-NN with average accuracies of 81% and 58%, respectively. Overall, the predictions from the three models indicate that there are more tweets with negative sentiment than tweets with positive or neutral sentiment.
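The semi-supervised (self-training) workflow these classifiers share, training on labeled data, pseudo-labeling confident unlabeled examples, and retraining, can be sketched with a toy nearest-centroid classifier standing in for Naive Bayes, K-NN, or SVM; all points and the confidence threshold below are invented for illustration:

```python
# Illustrative self-training (semi-supervised) loop. A toy
# nearest-centroid "classifier" stands in for Naive Bayes / K-NN / SVM;
# the data and the confidence threshold are made up for the sketch.

def centroids(points, labels):
    """Mean point per class label."""
    sums, counts = {}, {}
    for (x, y), lab in zip(points, labels):
        sx, sy = sums.get(lab, (0.0, 0.0))
        sums[lab] = (sx + x, sy + y)
        counts[lab] = counts.get(lab, 0) + 1
    return {lab: (sx / counts[lab], sy / counts[lab])
            for lab, (sx, sy) in sums.items()}

def predict(cents, p):
    """Return (label, confidence): confidence is the distance margin
    between the two nearest centroids."""
    dists = sorted((((p[0] - cx) ** 2 + (p[1] - cy) ** 2) ** 0.5, lab)
                   for lab, (cx, cy) in cents.items())
    best, runner_up = dists[0], dists[1]
    return best[1], runner_up[0] - best[0]

labeled   = [(0.0, 0.0), (0.2, 0.1), (5.0, 5.0), (5.2, 4.9)]
labels    = ["neg", "neg", "pos", "pos"]
unlabeled = [(0.1, 0.3), (4.8, 5.1), (2.6, 2.5)]

# Self-training: pseudo-label confident points, retrain, repeat.
for _ in range(3):
    cents = centroids(labeled, labels)
    keep = []
    for p in unlabeled:
        lab, conf = predict(cents, p)
        if conf > 1.0:                 # confidence threshold
            labeled.append(p); labels.append(lab)
        else:
            keep.append(p)             # remains unlabeled for now
    unlabeled = keep

print(labels.count("neg"), labels.count("pos"), len(unlabeled))
```

The ambiguous mid-point never crosses the confidence threshold and stays unlabeled, which mirrors how self-training leaves genuinely uncertain tweets out of the pseudo-labeled set.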
13

Iwema, J., R. Rosolem, R. Baatz, T. Wagener, and H. R. Bogena. "Investigating temporal field sampling strategies for site-specific calibration of three soil moisture–neutron intensity parameterisation methods." Hydrology and Earth System Sciences 19, no. 7 (2015): 3203–16. http://dx.doi.org/10.5194/hess-19-3203-2015.

Abstract:
The Cosmic-Ray Neutron Sensor (CRNS) can provide soil moisture information at scales relevant to hydrometeorological modelling applications. Site-specific calibration is needed to translate CRNS neutron intensities into sensor footprint average soil moisture contents. We investigated temporal sampling strategies for calibration of three CRNS parameterisations (modified N0, HMF, and COSMIC) by assessing the effects of the number of sampling days and soil wetness conditions on the performance of the calibration results, using actual neutron intensity measurements for three sites with distinct climate and land use: a semi-arid site, a temperate grassland, and a temperate forest. When calibrated with 1 year of data, both COSMIC and the modified N0 method performed better than HMF. The performance of COSMIC was remarkably good at the semi-arid site in the USA, while the N0mod method performed best at the two temperate sites in Germany. The successful performance of COSMIC at all three sites can be attributed to the benefits of explicitly resolving individual soil layers (which is not accounted for in the other two parameterisations). To better calibrate these parameterisations, we recommend in situ soil samples be collected on more than a single day. However, little improvement is observed for sampling on more than 6 days. At the semi-arid site, the N0mod method was calibrated better under site-specific average wetness conditions, whereas HMF and COSMIC were calibrated better under drier conditions. Average soil wetness conditions gave better calibration results at the two humid sites. The calibration results for the HMF method were better when calibrated with combinations of days with similar soil wetness conditions, as opposed to N0mod and COSMIC, which profited from using days with distinct wetness conditions. Errors in actual neutron intensities were translated into site-specific average errors. At the semi-arid site, these errors were below the typical measurement uncertainties of in situ point-scale sensors and satellite remote sensing products. At the two humid sites, however, the reduction in uncertainty with increasing sampling days only reached the typical errors associated with satellite remote sensing products. The outcomes of this study can be used by researchers as a CRNS calibration strategy guideline.
14

Saryanti, I. Gusti Ayu Desi, Rosalia Hadi, and I. Gusti Ngurah Ady Kusuma. "Purwarupa Aplikasi Peramalan Kebutuhan Persediaan Barang Dagang Berbasis Website dengan Semi Average Method." JTIM : Jurnal Teknologi Informasi dan Multimedia 3, no. 2 (2021): 71–78. http://dx.doi.org/10.35746/jtim.v3i2.157.

Abstract:
The Point of Sale application has now begun to be used by many trading entrepreneurs. This is because a Point of Sale (POS) application can record all trade transaction activities, such as recording sales and recording purchases of inventory items. However, the majority of POS applications are limited to recording transactions and do not yet have the ability to help traders determine their merchandise needs. This study discusses how to build an application that can help traders predict the need for merchandise with a forecasting method, namely the Semi Average Method. The application in this study was built with the waterfall method, is website-based, and utilizes sales reports from the POS application already in use. Because it is web-based, users do not need to install the application; it can be accessed via a browser at any time as long as there is an internet connection. Testing the application in this study using the Blackbox Testing method showed that all of its functionality works. It is hoped that the application from this research can later be used side by side with the existing POS application, so there is no need to make changes to the POS application that is already in use.
15

Sineglazov, Victor, and Kyrylo Lesohorskyi. "Semi-supervised Multi-view Ensemble Learning with Consensus." Electronics and Control Systems 3, no. 81 (2024): 15–21. http://dx.doi.org/10.18372/1990-5548.81.18978.

Abstract:
This paper is devoted to enhancing existing multi-view semi-supervised ensemble learning algorithms by introducing a cross-view consensus. A detailed overview of three state-of-the-art methods is given, with the relevant training steps highlighted. A problem statement is formulated to introduce the semi-supervised framework and to consider semi-supervised learning in the context of an optimization problem. A novel multi-view semi-supervised ensemble learning algorithm called multi-view semi-supervised cross consensus (MSSXC) is introduced. The algorithm is tested against 5 synthetic datasets designed for semi-supervised learning challenges. The results indicate an improvement in average accuracy of up to 10% in comparison to existing methods, especially in low-volume, high-density scenarios.
16

Alshaimaa, Elwasify, and Isa Zaidi. "Estimation of the Difference Parameter in ARFIMA Models Using Parametric and Semi Parametric Methods." International Journal of Science, Mathematics and Technology Learning 31, no. 2 (2023): 182–202. https://doi.org/10.5281/zenodo.8302672.

Abstract:
This paper investigates the persistence of the long memory property in the daily stock index EGX30. The long memory property of the series has been examined, and a fractionally integrated autoregressive moving average (ARFIMA) model has been fitted using several parametric and semi-parametric methods. Long memory has been scrutinized using the Rescaled Range statistic (R/S), Aggregated Variance, and Absolute Moment. For the estimation of the model, the Geweke and Porter-Hudak (GPH), Reisen (SPR), and Local Whittle (LW) estimators have been used as semi-parametric methods, and Maximum Likelihood (MLE), Exact Maximum Likelihood (EML), Modified Profile Likelihood (MPL), and Conditional Sum of Squares (CSS) as parametric methods.
17

Diao, Zeyu, Lili Yue, Fanrong Zhao, and Gaorong Li. "High-Dimensional Regression Adjustment Estimation for Average Treatment Effect with Highly Correlated Covariates." Mathematics 10, no. 24 (2022): 4715. http://dx.doi.org/10.3390/math10244715.

Abstract:
Regression adjustment is often used to estimate average treatment effect (ATE) in randomized experiments. Recently, some penalty-based regression adjustment methods have been proposed to handle the high-dimensional problem. However, these existing high-dimensional regression adjustment methods may fail to achieve satisfactory performance when the covariates are highly correlated. In this paper, we propose a novel adjustment estimation method for ATE by combining the semi-standard partial covariance (SPAC) and regression adjustment methods. Under some regularity conditions, the asymptotic normality of our proposed SPAC adjustment ATE estimator is shown. Some simulation studies and an analysis of HER2 breast cancer data are carried out to illustrate the advantage of our proposed SPAC adjustment method in addressing the highly correlated problem of the Rubin causal model.
18

Bilous, Andrii, Viktor Myroniuk, Viktor Svynchuk, Oleksandr Soshenskyi, Oleksandr Lesnik, and Yaroslav Kovbasa. "Semi-empirical estimation of log taper using stem profile equations." Journal of Forest Science 67, no. 7 (2021): 318–27. http://dx.doi.org/10.17221/209/2020-jfs.

Abstract:
In January 2019 the forest industry in Ukraine adopted European standards for measuring and grading of round wood based on mid-point diameters, which caused major discrepancies from traditionally used estimates of timber volume using top diameters. To compare methods of merchantable wood volume estimation, we investigated the stem form inside bark for two dominant tree species in Ukraine, i.e. Scots pine (Pinus sylvestris L.) and common oak (Quercus robur L.). We used tree stem measurements to fit stem profile equations, whereas simulation was applied to derive log taper. We found that Newnham's (1992) variable-exponent taper equation performed well for predicting stem taper for both tree species. Then, we simulated the structure of harvested wood, so that it replicated the annual distribution of logs by their length and diameters. As a result, the average log taper was estimated at 0.836 ÷ 0.855 cm·m⁻¹ and 1.180 ÷ 0.121 cm·m⁻¹ for pine and oak, respectively. The study also indicated that log taper varied along stems. Higher rates of diameter decrease were found for butt logs, for which the taper was 2.5–3.5 times higher than its average for the whole stem. The results of our study ensure the stacked round wood volume conversion between estimates obtained using top and mid-point diameters.
19

Sattari, Mohammad Taghi, Halit Apaydin, Shahab S. Band, Amir Mosavi, and Ramendra Prasad. "Comparative analysis of kernel-based versus ANN and deep learning methods in monthly reference evapotranspiration estimation." Hydrology and Earth System Sciences 25, no. 2 (2021): 603–18. http://dx.doi.org/10.5194/hess-25-603-2021.

Abstract:
Timely and accurate estimation of reference evapotranspiration (ET0) is indispensable for agricultural water management and efficient water use. This study aims to estimate the amount of ET0 with machine learning approaches by using minimum meteorological parameters in the Corum region, which has an arid and semi-arid climate and is regarded as an important agricultural centre of Turkey. In this context, monthly averages of meteorological variables, i.e. maximum and minimum temperature; sunshine duration; wind speed; and average, maximum, and minimum relative humidity, are used as inputs. Two different kernel-based methods, i.e. Gaussian process regression (GPR) and support vector regression (SVR), together with a Broyden–Fletcher–Goldfarb–Shanno artificial neural network (BFGS-ANN) and long short-term memory (LSTM) models, were used to estimate ET0 amounts in 10 different combinations. The results showed that all four methods predicted ET0 amounts with acceptable accuracy and error levels. The BFGS-ANN model showed higher success (R2=0.9781) than the others. Among the kernel-based GPR and SVR methods, the Pearson VII function-based universal kernel was the most successful (R2=0.9771). Scenario 5, with average, maximum, and minimum temperature and sunshine duration as inputs, gave the best results. The second-best scenario had only the sunshine duration as the input to the BFGS-ANN, which estimated ET0 with a correlation coefficient of 0.971 (Scenario 8). Conclusively, this study shows the efficacy of BFGS in ANNs for enhanced performance of the ANN model in ET0 estimation for drought-prone arid and semi-arid regions.
20

Matthews, Gareth, Chris Sawh, Rui Li, et al. "Human pressure-volume loops from the impedance catheter: Development and validation of semi-automated beat-by-beat analysis." Wellcome Open Research 9 (December 16, 2024): 723. https://doi.org/10.12688/wellcomeopenres.23373.1.

Abstract:
Introduction: Pressure-volume (PV) loops offer a comprehensive evaluation of cardiac function. Impedance catheters enable the acquisition of synchronised intracardiac electrocardiogram (ECG), pressure, and volume data with high temporal resolution. However, current calibration methods are impractical and data interpretation is often inconsistent. Methods: In the PREFER-CMR prospective cohort study, 15 patients with suspected heart failure and preserved ejection fraction underwent same-day cardiac magnetic resonance (CMR) imaging and invasive impedance catheter studies. Signal processing algorithms were developed to semi-automatically determine PV-loop phases and calibrate impedance catheter volumes to CMR. Results of beat-by-beat and average loop analysis approaches were compared with reference methods and between each other. Results: The second-order differential of the pressure-volume trace identified PV-loop phases on a beat-by-beat basis, but gradient smoothing prevented detection in average loops. Calibrated impedance catheter volumes, including left ventricular end diastolic (LVEDV) and end systolic (LVESV) volumes, correlated with CMR (r≥0.95, p<0.001) using both analysis methods. However, the average loop LVESV was overestimated by 8.1 mL (p=0.031). For left ventricular end diastolic pressure, both beat-by-beat (r=0.73, p=0.002) and average loop (r=0.69, p=0.005) methods correlated with the fluid-filled manometer reference. Maximum pressure correlation was strong for both beat-by-beat (r=0.85, p<0.001) and average loop (r=0.80, p<0.001) methods, but was 10.1 mmHg (p=0.040) lower in the average loop method. Between methods, significant correlations (r=0.73–0.99) were found across all pressures and volumes. Stroke work (r=0.94) and potential energy (r=0.96) significantly correlated (p<0.001) between methods, although Bland-Altman subgroup analysis suggested underestimation of stroke work in atrial fibrillation using the average loop method. Conclusions: Impedance catheter volumes can be accurately calibrated using CMR. PV-loop phases can be robustly detected with a semi-automated algorithm. Both beat-by-beat and average loop approaches are viable for analysing multiple cardiac cycles, though beat-by-beat analysis may offer advantages for phase identification, pressure assessment, and in irregular rhythms. Trial registration: ClinicalTrials.gov: NCT05114785. Registration date: 05/11/2021. https://clinicaltrials.gov/study/NCT05114785
APA, Harvard, Vancouver, ISO, and other styles
21

Matveeva, M. G., T. A. Zarenkova, A. V. Skripnikova, A. M. Grishin, and M. N. Alekhin. "Comparison of a semi-automatic strain analysis of left heart with manual myocardial tracing in speckle-tracking echocardiography." Ultrasound & Functional Diagnostics, no. 3 (July 31, 2024): 21–33. http://dx.doi.org/10.24835/1607-0771-271.

Full text
Abstract:
Purpose. To compare a semi-automatic strain analysis of the left ventricle and left atrium with a manual method in speckle-tracking echocardiography. Materials and methods. Strain of the left ventricle and left atrium was assessed in 110 patients by two methods: manual (Q-Analysis) and semi-automatic (AutoStrain). The following parameters were evaluated: LV global longitudinal strain (LV GLS), LA longitudinal strain during the reservoir phase (LASr), LA longitudinal strain during the conduit phase (LAScd), and LA longitudinal strain during the contraction phase (LASct). Results. ROI correction was carried out significantly more often with the semi-automatic method of measuring LV GLS than with the manual one (40.1% vs. 16.4%, p < 0.05). There were significant differences in the LV GLS, LASr, and LAScd values obtained by the semi-automatic and manual methods. LV GLS average values obtained by the semi-automatic method were lower (18.8 ± 2.8% vs. 20.0 ± 3.1%, p < 0.001), and the values of LASr and LAScd obtained by the semi-automatic method were higher (LASr 31.6 ± 9.8% vs. 30.3 ± 10.8%, p = 0.038; LAScd 17.1 ± 7.1% vs. 15.4 ± 6.8%, p < 0.001) than with the manual method. The semi-automatic method takes more time for LV strain analysis and less time for LA strain analysis than the manual method. Conclusion. The semi-automatic method of LV and LA strain evaluation showed higher reproducibility compared with the manual method. With the semi-automatic method, the values of LV GLS were lower, and correction of the ROI was required more often and took more time than with the manual method. The semi-automatic method of LA strain evaluation was characterized by higher values in the reservoir and conduit phases and required less time compared to the manual method. The LA longitudinal strain in the reservoir phase showed the highest reproducibility compared to other LA strain parameters.
APA, Harvard, Vancouver, ISO, and other styles
22

Wang, Qianfang, Guanqun Sheng, Xingong Tang, and Kai Xie. "Semi-Picking: A semi-supervised arrival time picking for microseismic monitoring based on the TransUGA network combined with SimMatch." Geophysical Journal International 240, no. 1 (2024): 502–34. http://dx.doi.org/10.1093/gji/ggae308.

Full text
Abstract:
SUMMARY An accurate and efficient method for picking the first arrival of microseismic signals is crucial for processing microseismic monitoring data. However, the weak magnitude and low signal-to-noise ratio (SNR) of these signals make picking arrivals challenging. Recent advancements in deep learning-based methods for picking the first arrivals of microseismic signals have effectively addressed the inefficiencies and inaccuracies of traditional methods. Nevertheless, these methods often require many training samples, and the substantial size and labelling effort significantly hinder the development of deep learning-based first-arrival picking methods. This study introduces Semi-Picking: a semi-supervised method for picking the first arrival of microseismic signals, utilizing the TransUGA network and SimMatch. This approach automatically labels microseismic signals following sample augmentation by establishing a semi-supervised learning framework, significantly reducing the time required for sample labelling. Initially, the TransUNet model is enhanced by incorporating the Self-Supervised Predictive Convolutional Attention Block (SSPCAB) module to create a Deep-TransUNet architecture, which more effectively separates signal from noise in microseismic signals with low SNR and improves the accuracy of first-arrival picking. Subsequently, the data sets for this study are compiled from microseismic traces collected from field monitoring records. Finite-difference forward modelling is applied to the microseismic data to train the network, and hyperparameter tuning is performed to optimize the UGATIT and Deep-TransUNet architecture. The outcomes of the arrival-picking experiments, conducted under conditions of low SNR using both synthetic and real microseismic records, demonstrated that Semi-Picking offers robust resistance to incorrect labels. This resilience stems from the synergistic use of the semi-supervised learning framework and self-attention mechanisms. 
The proposed method demonstrates superiority over TransUNet, SSPCAB-TransUNet, UNet++, and the traditional short-term average/long-term average method, with the picking error rate of the Semi-Picking Net being less than 0.1 s. It thus outperforms the commonly used deep learning-based approaches for picking the first arrivals of microseismic signals.
APA, Harvard, Vancouver, ISO, and other styles
23

Indriani Dwi Lestari, Bella Puspita Rininda, Widiya Astuti Alam Sur, and Marliza Noor Hayatie. "ANALISIS PERBANDINGAN METODE PERAMALAN (FORECASTING) PENJUALAN MOTOR LISTRIK PADA CV SANTOSA ABADI MOTOR PELAIHARI." Realible Accounting Journal 4, no. 2 (2025): 119–33. https://doi.org/10.36352/raj.v4i2.899.

Full text
Abstract:
The aim of this study is to identify the most accurate and effective forecasting method for CV. Santosa Abadi Motor Pelaihari in predicting the sales of electric motor products for the upcoming period. This research is a descriptive quantitative study utilizing secondary data, consisting of electric motor product sales data from 2023 through January-June 2024, obtained from the Administration department of CV Santosa Abadi Motor Pelaihari. The analysis involves comparing three forecasting methods: Trend Moment, Semi Average, and Least Square, and evaluating the accuracy of each method using the Mean Absolute Percentage Error (MAPE). The results indicate that the Trend Moment method yields a forecasting error percentage of 71%, the Semi Average method results in 23%, and the Least Square method shows 28%. Based on these findings, it can be concluded that the Semi Average method is more suitable for forecasting electric motor sales at CV Santosa Abadi Motor Pelaihari.
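The error comparison described above can be sketched in Python. The sales figures below are hypothetical, and the semi-average and least-squares trend formulations follow their common textbook forms rather than the authors' exact implementation:

```python
import numpy as np

def mape(actual, forecast):
    """Mean Absolute Percentage Error, in percent."""
    a, f = np.asarray(actual, float), np.asarray(forecast, float)
    return float(np.mean(np.abs((a - f) / a)) * 100)

def semi_average_trend(sales):
    """Semi-average trend: a line through the means of the two data halves."""
    y = np.asarray(sales, float)
    n = len(y)
    half = n // 2
    y1, y2 = y[:half], y[-half:]                 # drop the middle point if n is odd
    x1 = (half - 1) / 2.0                        # centre of the first half
    x2 = n - 1 - (half - 1) / 2.0                # centre of the second half
    b = (y2.mean() - y1.mean()) / (x2 - x1)
    a = y1.mean() - b * x1
    return a + b * np.arange(n)

def least_square_trend(sales):
    """Least-squares trend Y = a + bX, with X coded around the series centre."""
    y = np.asarray(sales, float)
    x = np.arange(len(y)) - (len(y) - 1) / 2.0
    b = np.sum(x * y) / np.sum(x * x)
    return y.mean() + b * x

# Hypothetical monthly unit sales, for illustration only.
sales = [120, 135, 128, 150, 160, 155]
for name, trend in [("semi-average", semi_average_trend(sales)),
                    ("least-squares", least_square_trend(sales))]:
    print(name, "MAPE:", round(mape(sales, trend), 2), "%")
```

The method with the lower in-sample MAPE would, as in the study, be preferred for forecasting the next period.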
APA, Harvard, Vancouver, ISO, and other styles
24

Tang, Yutao, Yongze Guo, Huayu Wang, Ting Song, and Yao Lu. "Uncertainty-Aware Semi-Supervised Method for Pectoral Muscle Segmentation." Bioengineering 12, no. 1 (2025): 36. https://doi.org/10.3390/bioengineering12010036.

Full text
Abstract:
The consistency regularization method is a widely used semi-supervised method that uses regularization terms constructed from unlabeled data to improve model performance. Poor-quality target predictions in regularization terms produce noisy gradient flows during training, resulting in a degradation in model performance. Recent semi-supervised methods usually filter out low-confidence target predictions to alleviate this problem, but also prevent the model from learning features from unlabeled data in low-confidence regions. Specifically, in medical imaging and other cross-domain scenarios, models are prone to producing large numbers of low-confidence predictions. To improve the quality of target predictions while utilizing unlabeled data more efficiently, we propose an uncertainty-aware semi-supervised method that incorporates the breast anatomical prior, for pectoral muscle segmentation. Our method has a typical teacher-student dual model structure, where uncertainty is used to distinguish between high- and low-confidence predictions in the teacher model output. A low-confidence prediction refinement module was designed to refine the low-confidence predictions by incorporating high-confidence predictions and a learned anatomical prior. The anatomical prior, as regularization of the target predictions, was learned from annotations and an auxiliary task. The final target predictions are a combination of high-confidence teacher predictions and refined low-confidence predictions. The proposed method was evaluated on a dataset containing 635 data points from three data centers. Compared with the baseline method, the proposed method showed an average improvement in DICE index of 1.76, an average reduction in IoU index of 3.21, and an average reduction in HD index of 5.48. The experimental results show that our method generalizes well to the test set and outperforms other methods in all evaluation metrics.
APA, Harvard, Vancouver, ISO, and other styles
25

Venkataraman, Abinaya Priya, Delila Sirak, Rune Brautaset, and Alberto Dominguez-Vicent. "Evaluation of the Performance of Algorithm-Based Methods for Subjective Refraction." Journal of Clinical Medicine 9, no. 10 (2020): 3144. http://dx.doi.org/10.3390/jcm9103144.

Full text
Abstract:
Objective: To evaluate the performance of two subjective refraction measurement algorithms by comparing the refraction values, visual acuity, and the time taken by the algorithms with the standard subjective refraction (SSR). Methods: The SSR and two semi-automated algorithm-based subjective refractions (SR1 and SR2) built into the Vision-R 800 phoropter were performed in 68 subjects. In SR1 and SR2, the subject's responses were recorded in the algorithm, which continuously modified the spherical and cylindrical components accordingly. The main difference between SR1 and SR2 is the use of an initial fogging step in SR1. Results: The average difference and agreement limits intervals in the spherical equivalent between each refraction method were smaller than 0.25 D and 2.00 D, respectively. For the cylindrical components, the average difference was almost zero and the agreement limits interval was less than 0.50 D. The visual acuities were not significantly different among the methods. The times taken for SR1 and SR2 were significantly shorter, and SR2 was on average three times faster than SSR. Conclusions: The refraction values and the visual acuity obtained with the standard subjective refraction and algorithm-based methods were similar on average. The algorithm-based methods were significantly faster than the standard method.
APA, Harvard, Vancouver, ISO, and other styles
26

Gau, Karin, Charlotte S. M. Schmidt, Horst Urbach, et al. "Accuracy and practical aspects of semi- and fully automatic segmentation methods for resected brain areas." Neuroradiology 62, no. 12 (2020): 1637–48. http://dx.doi.org/10.1007/s00234-020-02481-1.

Full text
Abstract:
Abstract Purpose Precise segmentation of brain lesions is essential for neurological research. Specifically, resection volume estimates can aid in the assessment of residual postoperative tissue, e.g. following surgery for glioma. Furthermore, behavioral lesion-symptom mapping in epilepsy relies on accurate delineation of surgical lesions. We sought to determine whether semi- and fully automatic segmentation methods can be applied to resected brain areas and which approach provides the most accurate and cost-efficient results. Methods We compared a semi-automatic (ITK-SNAP) with a fully automatic (lesion_GNB) method for segmentation of resected brain areas in terms of accuracy, with manual segmentation serving as reference. Additionally, we evaluated processing times of all three methods. We used T1w MRI data of epilepsy patients (n = 27; 11 m; mean age 39 years, range 16–69) who underwent temporal lobe resections (17 left). Results The semi-automatic approach yielded superior accuracy (p < 0.001), with a median Dice similarity coefficient (mDSC) of 0.78 and a median average Hausdorff distance (maHD) of 0.44, compared with the fully automatic approach (mDSC 0.58, maHD 1.32). There was no significant difference between the median percent volume differences of the two approaches (p > 0.05). Manual segmentation required more human input (30.41 min/subject), therefore incurring significantly higher costs than the semi-automatic (3.27 min/subject) or fully automatic approaches (labor and cost approaching zero). Conclusion Semi-automatic segmentation offers the most accurate results in resected brain areas with a moderate amount of human input, thus representing a viable alternative to manual segmentation, especially for studies with large patient cohorts.
APA, Harvard, Vancouver, ISO, and other styles
27

Shi, Yun, Lin Yang, Mei Huang, and Jun Steed Huang. "Multi-Factorized Semi-Covariance of Stock Markets and Gold Price." Journal of Risk and Financial Management 14, no. 4 (2021): 172. http://dx.doi.org/10.3390/jrfm14040172.

Full text
Abstract:
Complex models have received significant interest in recent years and are being increasingly used to explain the stochastic phenomenon with upward and downward fluctuation such as the stock market. Different from existing semi-variance methods in traditional integer dimension construction for two variables, this paper proposes a simplified multi-factorized fractional dimension derivation with the exact Excel tool algorithm involving the fractional center moment extension to covariance, which is a complex parameter average that is a multi-factorized extension to Pearson covariance. By examining the peaks and troughs of gold price averages, the proposed algorithm provides more insight into revealing underlying stock market trends to see who is the financial market leader during good economic times. The calculation results demonstrate that the complex covariance is able to distinguish subtle differences among stock market performances and gold prices for the same field that the two variable covariance may overlook. We take London, Tokyo, Shanghai, Toronto, and Nasdaq as the representative examples.
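For orientation, the classical integer-dimension downside semi-covariance that the paper generalizes can be sketched as follows. This is not the authors' fractional-moment algorithm, and the return series are invented for illustration:

```python
import numpy as np

def semi_covariance(x, y, threshold=0.0):
    """Classical downside semi-covariance: the average co-movement of the
    below-threshold parts of two return series."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    dx = np.minimum(x - threshold, 0.0)   # keep only downside deviations
    dy = np.minimum(y - threshold, 0.0)
    return float(np.mean(dx * dy))

# Invented daily returns for a stock index and gold, for illustration only.
index_r = [0.01, -0.02, -0.005, 0.015, -0.01]
gold_r  = [0.005, -0.01, 0.002, -0.008, -0.004]
print(semi_covariance(index_r, gold_r))
```

A positive value indicates that the two series tend to fall below the threshold together; the fractional extension discussed in the abstract replaces the integer second moment with a complex-valued fractional one.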
APA, Harvard, Vancouver, ISO, and other styles
28

Dzikevičius, Audrius, and Svetlana Šaranda. "EMA versus SMA Usage to Forecast Stock Markets: The Case of S&P 500 and OMX Baltic Benchmark." Business: Theory and Practice 11, no. (3) (2010): 248–55. https://doi.org/10.3846/btp.2010.27.

Full text
Abstract:
The academic literature shows a growing interest in trading rules such as the Moving Average, and the majority of previous studies were conducted using the simple moving average. Although semi-professional traders use technical analysis methods to predict future stock prices and to identify stock trend changes, the OMX Baltic Benchmark Index had never been tested, and previous research on the S&P 500 Index using the most widely used method of technical analysis, Moving Averages, is more or less appellative. Technical analysis stands in opposition to classical economic theory, yet investors use it widely all over the world, and its methods can be more or less effective than was previously thought. This paper compares two trading rules of technical analysis: the exponential smoothing method and the simple moving average rule. Both methods were applied to the US S&P 500 Index and the OMX Baltic Benchmark Index, and the results were compared using systematic error measures (mean square error, mean absolute deviation, mean forecast error, mean absolute percentage error), tracking signal evaluation, bias distribution estimation, and identification of the appropriate smoothing constant level for each market forecast, for the case of the Standard and Poor's 500 and OMX Baltic Benchmark indices.
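The two rules compared above can be sketched as one-step-ahead forecasters and scored with the same error measures. The price series, window, and smoothing constant below are hypothetical:

```python
import numpy as np

def sma_forecast(prices, window=3):
    """One-step-ahead forecast: mean of the previous `window` prices."""
    p = np.asarray(prices, float)
    return np.array([p[i - window:i].mean() for i in range(window, len(p))])

def ema_forecast(prices, alpha=0.5):
    """One-step-ahead exponential smoothing forecast."""
    p = np.asarray(prices, float)
    s, out = p[0], []
    for x in p[:-1]:
        s = alpha * x + (1 - alpha) * s
        out.append(s)          # s is the forecast for the next observation
    return np.array(out)

def mse(actual, forecast):
    """Mean square error."""
    a, f = np.asarray(actual, float), np.asarray(forecast, float)
    return float(np.mean((a - f) ** 2))

def mad(actual, forecast):
    """Mean absolute deviation."""
    a, f = np.asarray(actual, float), np.asarray(forecast, float)
    return float(np.mean(np.abs(a - f)))

# Hypothetical closing prices, purely for illustration.
prices = [100, 102, 101, 105, 107, 106, 110, 112]
w = 3
sma_f = sma_forecast(prices, window=w)
ema_f = ema_forecast(prices, alpha=0.5)[w - 1:]   # align with the SMA's start
actual = np.asarray(prices, float)[w:]
print("SMA MSE:", mse(actual, sma_f), "MAD:", mad(actual, sma_f))
print("EMA MSE:", mse(actual, ema_f), "MAD:", mad(actual, ema_f))
```

In a trending series like this one, both rules lag the price; the comparison in the paper asks which rule, with which constant, lags least on each index.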
APA, Harvard, Vancouver, ISO, and other styles
29

Robertson, William H., and Ross Teller. "A Mixed Methods Examination of the Integration of Adaptive Courseware into an Online Gateway Course." European Modern Studies Journal 8, no. 5 (2024): 13–18. https://doi.org/10.59573/emsj.8(5).2024.2.

Full text
Abstract:
The purpose of this study was to explore how student behaviors, while using an adaptive courseware system, influenced performance in a gateway biology course. This study used a mixed methods approach. Quantitative data was collected from the course’s learning management system (LMS) and the adaptive courseware. This data was analyzed using correlations between several metrics, including student course average, exam scores, total time spent using the adaptive courseware, the number of times the participants accessed both LMS content and adaptive courseware content, and the average score of activities and assessments within the adaptive courseware. The qualitative data included semi-structured interviews with 21 participants and follow-up interviews with five of the original 21 participants. This data was analyzed using process, descriptive, and in vivo coding.
APA, Harvard, Vancouver, ISO, and other styles
30

Iwema, J., R. Rosolem, R. Baatz, T. Wagener, and H. R. Bogena. "Investigating temporal field sampling strategies for site-specific calibration of three soil moisture – neutron intensity parameterisation methods." Hydrology and Earth System Sciences Discussions 12, no. 2 (2015): 2349–89. http://dx.doi.org/10.5194/hessd-12-2349-2015.

Full text
Abstract:
Abstract. The Cosmic-Ray Neutron Sensor (CRNS) can provide soil moisture information at scales relevant to hydrometeorological modeling applications. Site-specific calibration is needed to translate CRNS neutron intensities into sensor footprint average soil moisture contents. We investigated temporal sampling strategies for calibration of three CRNS parameterisations (modified N0, HMF, and COSMIC) by assessing the effects of the number of sampling days and soil wetness conditions on the performance of the calibration results, for three sites with distinct climate and land use: a semi-arid site, a temperate grassland and a temperate forest. When calibrated with a year of data, COSMIC performed relatively good at all three sites, and the modified N0 method performed best at the two humid sites. It is advisable to collect soil moisture samples on more than a single day regardless of which parameterisation is used. In any case, sampling on more than ten days would, despite the strong increase in work effort, improve calibration results only little. COSMIC needed the least number of days at each site. At the semi-arid site, the N0mod method was calibrated better under average wetness conditions, whereas HMF and COSMIC were calibrated better under drier conditions. Average soil wetness condition gave better calibration results at the two humid sites. The calibration results for the HMF method were better when calibrated with combinations of days with similar soil wetness conditions, opposed to N0mod and COSMIC, which profited from using days with distinct wetness conditions. The outcomes of this study can be used by researchers as a CRNS calibration strategy guideline.
APA, Harvard, Vancouver, ISO, and other styles
31

Zhang, Yi, RuiHua Duan, XiangFeng Xiao, and Tiecheng Pan. "Minimally Invasive Esophagectomy with Right Bronchial Occlusion under Artificial Pneumothorax." Digestive Surgery 32, no. 2 (2015): 77–81. http://dx.doi.org/10.1159/000371747.

Full text
Abstract:
Aims: To assess the safety and feasibility of minimally invasive esophagectomy and selected three-field lymphadenectomy with the right bronchial occlusion in left semi-prone position under artificial pneumothorax. Methods: Thoracoscopic-laparoscopic subtotal esophagectomy and selected three-field lymphadenectomy were performed in 166 patients with esophageal carcinoma by the right bronchial occlusion in left semi-prone position under artificial pneumothorax. Results: 109 patients received two-field lymphadenectomy and 57 received three-field lymphadenectomy. The average operative time was 202.5 ± 21.3 min; the average thoracoscopic operative time was 98.4 ± 15.5 min. The average blood loss was 39.6 ± 4.2 ml, and no blood transfusion was needed during the surgery. The mean lymph node harvest was 28.4 ± 5.2 nodes. Hospital stay ranged from 7 to 95 days and the average was 11.3 days. The postoperative complication rate was 29.5%, and the mortality rate was 1.2%. Conclusions: It is feasible and safe to perform thoracoscopic-laparoscopic subtotal esophagectomy and selected three-field lymphadenectomy with the right bronchial occlusion in left semi-prone position under artificial pneumothorax for esophageal carcinoma. The procedure shows advantages in improved visibility and accessibility of the surgical field, and better subsequent surgical outcomes.
APA, Harvard, Vancouver, ISO, and other styles
32

de Lima, Renato A. Ferreira, Adriana M. Zanforlin Martini, Sérgius Gandolfi, and Ricardo Ribeiro Rodrigues. "Repeated disturbances and canopy disturbance regime in a tropical semi-deciduous forest." Journal of Tropical Ecology 24, no. 1 (2008): 85–93. http://dx.doi.org/10.1017/s0266467407004658.

Full text
Abstract:
Abstract: The canopy disturbance regime and the influence of gap methods on the interpretation of forest structure and dynamics were evaluated in a tropical semi-deciduous forest in south-eastern Brazil. We encountered a gap density of 11.2 gaps ha⁻¹ and an average size which varied from 121 to 333 m² depending on the gap delimitation method considered (minimum gap size was 10 m²). Although average size was slightly higher, the median value obtained (78 m²) was comparable to other tropical forest sites and the gap size-class distribution found supported the pattern described for such forest sites. Among 297 gap makers, snapping and uprooting were the most common modes of disturbance. The number and basal area of gap makers were good predictors of gap size. Almost 25% of all gaps suffered from repeated disturbance events that brought about larger gap sizes. Such processes, along with delimitation methods, strongly influenced the estimation of turnover rate and therefore the interpretation of forest dynamics. These results demonstrated the importance of further studies on repeated disturbances, which is often neglected in forest studies.
APA, Harvard, Vancouver, ISO, and other styles
33

Gier, Yvonne, Mario Rehnert, Frank Widmer, and Anna Nagl. "Eine Analyse des vorderen Augenabschnitts mit dem Oculus Pentacam Corneo- Skleral-Profil-Report." Optometry & Contact Lenses 1, no. 1 (2021): 21–31. http://dx.doi.org/10.54352/dozv.kwss2869.

Full text
Abstract:
Purpose. This study aims to describe the profile progression from cornea over limbus to sclera using the module CSP-Report (Corneo-Skleral-Profil-Report) of the Oculus Pentacam. Material and Methods. Seventy-four left eyes of healthy subjects were evaluated. Data was collected as part of previous research (bachelor thesis) using the CSP-Report module of the Pentacam Basic. Sagittal height, limbal angle and scleral angle were analyzed over three diameters in eight semi-meridians. Results. At diameter 10.00 mm the shape of the anterior segment of the eye was nearly rotationally symmetrical; the average sagittal height of the eight semi-meridians was between 1734 µm and 1806 µm. At diameter 12.80 mm the shape was more asymmetrical, with average sagittal height ranging from 2797 µm to 3000 µm. At diameter 15.00 mm the asymmetry was most pronounced, with average sagittal height between 3595 µm and 4033 µm. The average limbal angles at 12.80 mm were in the range of 35.68° to 42.02°. For the scleral angle at 15.00 mm, mean values between 34.71° and 43.11° were determined for the eight semi-meridians. The nasal segments always showed smaller sagittal heights, accompanied by smaller limbal and scleral angles compared to the temporal semi-meridians. The same pattern was observed for the superior semi-meridians in relation to the inferior segments. For the nasal semi-meridians, concave transitions of the corneo-scleral profile were observed, whereas temporally a convex course of the CSP was observed. Conclusion. The CSP-Report provides a method to describe the shape of the anterior segment of the eye. These additional data are useful for fitting scleral lenses. Keywords: Scheimpflug technology, Pentacam, limbal sclera contour, corneo-scleral profile, scleral lens, sagittal height
APA, Harvard, Vancouver, ISO, and other styles
34

Contro, Curtis, Arthur J. Miller, David Hatcher, and Snehlata Oberoi. "Evaluating condylar head morphology as it relates to the skeletal vertical facial dimension: A three-dimensional semi-automated landmark study." APOS Trends in Orthodontics 6 (September 16, 2016): 238–45. http://dx.doi.org/10.4103/2321-1407.190724.

Full text
Abstract:
Introduction Condylar growth direction and rotation affect the occlusion, especially in the vertical dimension. The first objective of this study was to evaluate the reliability of a novel three-dimensional semi-automated landmark computer software on mapping the head of the mandibular condyle using cone-beam computed tomography (CBCT). The second objective was to evaluate qualitatively how condylar morphology differs three-dimensionally according to skeletal vertical pattern and mandibular morphology in healthy adults using CBCT. Materials and Methods A total of 242 (169 females and 73 males) participants were eligible for the study. Participants were selected at random from the 242 to create three groups of 10 participants based on their MP-SN° and assigned to a brachyfacial group, dolichofacial group, and mesofacial group. The thirty participants were also divided by mandibular symphyseal morphology according to the chin angle (Id-Pg-MP°). Each subject’s condyles were landmarked using Stratovan’s Checkpoint software. A Procrustes analysis was then used to generate an average condylar shape for each of the six groups from which to evaluate shape differences. Results Checkpoint proved to be a reliable method of placing landmarks on the condyle with a low coefficient of variation of 1.81% (standard deviation/mean). Qualitative analysis of the Procrustes averages revealed brachyfacial average showed a moderate anterior lean from the sagittal, anterior convexity from the axial, and medial lean from the coronal views. The dolichofacial average showed a mild anterior lean from the sagittal, anterior concavity from the axial, and a symmetrical half-dome shape from the coronal. The obtuse chin angle group average displayed morphology similar to the brachyfacial average, whereas the acute chin angle group average displayed morphology similar to the dolichofacial average. Conclusions Checkpoint is reliable software to landmark the temporomandibular joint. 
There are differences in average morphologies between all groups.
APA, Harvard, Vancouver, ISO, and other styles
35

Lankipalli, B. R., W. D. McDavid, S. B. Dove, E. Wieckowska, R. G. Waggener, and J. R. Mercier. "Comparison of five methods for the derivation of spectra for a constant potential dental X-ray unit." Dentomaxillofacial Radiology 30, no. 5 (2001): 264–69. http://dx.doi.org/10.1038/sj/dmfr/4600629.

Full text
Abstract:
OBJECTIVES To compare the diagnostic X-ray spectra derived by different methods for a constant potential dental X-ray unit. MATERIALS AND METHODS Five methods of deriving X-ray spectra for a constant potential dental X-ray unit were compared: measurement by spectrometer using cadmium-zinc-telluride (CZT) detector, calculation by Monte Carlo simulation, calculation by two different, semi-empirical methods and estimation from transmission data. The dental X-ray set was a Heliodent MD unit (Sirona, Charlotte, NC, USA) operable at 60 or 70 kV. A semiconductor detector was used in the spectrometer measurements and an ionization chamber dosimeter in the transmission measurements. From the five methods, photon-fluence spectra were derived. Based on the photon-fluence spectra, average energies and transmission curves in aluminum were calculated. RESULTS For all five methods, the average energies were within 2.4% of one another. Comparison of the transmission curves showed an average difference in the range of 1 to 6%. CONCLUSION All of the five methods of deriving spectra are in extremely good agreement with each other.
APA, Harvard, Vancouver, ISO, and other styles
36

Riganti, Paula, María Victoria Ruiz Yanzi, Juan Victor Ariel Franco, Josefina Chiodi, Mónica Regueiro, and Karin Silvana Kopitowski. "Developing a breast cancer screening decision aid in Spanish for average-risk women: a mixed methods study." Medwave 24, no. 02 (2024): e2726-e2726. http://dx.doi.org/10.5867/medwave.2024.02.2726.

Full text
Abstract:
Introduction We aimed to develop a decision aid to support shared decision-making between physicians and women with average breast cancer risk when deciding whether to participate in breast cancer screening. Methods We included women at average risk of breast cancer and physicians involved in supporting the decision of breast cancer screening from an Academic Hospital in Buenos Aires, Argentina. We followed the International Patient Decision Aid Standards to develop our decision aid. Guided by a steering group and a multidisciplinary consultancy group including a patient advocate, we reviewed the evidence about breast cancer screening and previous decision aids, explored the patients' information needs on this topic from the patients' and physicians' perspective using semi-structured interviews, and alpha-tested the prototype to determine its usability, comprehensibility and applicability. Results We developed the first prototype of a web-based decision aid to use during the clinical encounter with women aged 40 to 69 with average breast cancer risk. After a meeting with our consultancy group, we developed a second prototype that underwent alpha-testing. Physicians and patients agreed that the tool was clear, useful and applicable during a clinical encounter. We refined our final prototype according to their feedback. Conclusion We developed the first decision aid in our region and language on this topic, developed with end-users' input and informed by the best available evidence. We expect this decision aid to help women and physicians make shared decisions during the clinical encounter when talking about breast cancer screening.
APA, Harvard, Vancouver, ISO, and other styles
37

Askarov, Khasanboy, Omonjon Sulaymonov, Ghaybullo Mamajonov, Dilmurod Yigitaliyev, Mizabobur Mirzaikromov, and Abdurakhmon Marufjonov. "Effective methods of agricultural use of light colored gray soils distributed in Kuva hill." BIO Web of Conferences 84 (2024): 01042. http://dx.doi.org/10.1051/bioconf/20248401042.

Full text
Abstract:
Fergana Valley is a tectonic depression surrounded by mountains, and its appearance is in the form of an ellipse. Its length from west to east is 300 km, and its width from south to north is 50–70 km. The height of the valley rises from 250 m above sea level in the west to 1000 m in the east. The height of the surrounding high mountains of Central Ferghana is 2000–4000 m and more. Such a sharp geomorphological structure determines the specific characteristics of the valley's climate. The western and central parts of the Fergana valley (up to the Margilan oasis) belong to the desert region, and the eastern part to the semi-desert region. The climatic conditions of the desert part are described by the meteorological stations "Ko'kan", "Ultarma" and "Namangan", and the semi-desert part by the data of the weather stations "Fergana", "Nasriddinbek" and "Fedchenko". In the western and northern regions of the valley, the long-term average air temperature is +13…+13.5 °C. The coldest month of the year is January, with an average temperature of −2.1…−2.4 °C. The warmest month is July, with an average temperature of around +24.8…+27.6 °C (Tables 1–2). The minimum temperature is −27…−29 °C, the maximum temperature is +42…+46 °C. We present the details of the morphology of the 1st soil cross-section, trying to reveal the specific characteristics of the pale gray gypsum soils in the Kuva hills. The soils formed in the Kuva hills, where we conducted research, are the result of the long-term development of the area. The Kuva mountain range is located 500–600 m above sea level. The pale gray soils distributed in these areas are composed of stony rocks formed on loess and loess sand. At the same time, they appeared on coarse, gravelly-soft rock formations and gravels, covered on top by a thin layer of sand and silt.
APA, Harvard, Vancouver, ISO, and other styles
38

Oluyide, Oluwakorede Monica, Jules-Raymond Tapamo, and Tom Mmbasu Walingo. "A semi-automatic motion-constrained Graph Cut algorithm for Pedestrian Detection in thermal surveillance videos." PeerJ Computer Science 8 (September 12, 2022): e1064. http://dx.doi.org/10.7717/peerj-cs.1064.

Full text
Abstract:
This article presents a semi-automatic algorithm that can detect pedestrians from the background in thermal infrared images. The proposed method is based on the powerful Graph Cut optimisation algorithm which produces exact solutions for binary labelling problems. An additional term is incorporated into the energy formulation to bias the detection framework towards pedestrians. Therefore, the proposed method obtains reliable and robust results through user-selected seeds and the inclusion of motion constraints. An additional advantage is that it enables the algorithm to generalise well across different databases. The effectiveness of our method is demonstrated on four public databases and compared with several methods proposed in the literature and the state-of-the-art. The method obtained an average precision of 98.92% and an average recall of 99.25% across the four databases considered and outperformed methods which made use of the same databases.
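The average precision and recall quoted above are standard ratios of true positives to detections and to ground-truth positives. As an illustrative sketch (not code from the paper; the function name and the flattened-mask framing are assumptions), such scores can be computed from paired binary labels as follows:

```python
def precision_recall(predicted, ground_truth):
    """Precision and recall for paired binary labels (e.g. flattened
    detection masks): precision = TP / (TP + FP), recall = TP / (TP + FN)."""
    tp = sum(1 for p, g in zip(predicted, ground_truth) if p == 1 and g == 1)
    fp = sum(1 for p, g in zip(predicted, ground_truth) if p == 1 and g == 0)
    fn = sum(1 for p, g in zip(predicted, ground_truth) if p == 0 and g == 1)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall
```

For a perfect detector both scores are 1.0; false positives lower precision, while missed pedestrians lower recall.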
APA, Harvard, Vancouver, ISO, and other styles
39

Ji, Peng, Limin Jiang, Xiangdong Guo, et al. "Effect of MRI Images Based on Semi-Automatic Volume Segmentation in Patients with Acute Ischemic Stroke." Journal of Medical Imaging and Health Informatics 11, no. 1 (2021): 223–29. http://dx.doi.org/10.1166/jmihi.2021.3430.

Full text
Abstract:
Objective: To study the MRI and CT characteristics of different periods of acute ischemic stroke and evaluate their diagnostic value using a semi-automatic segmentation method. Methods: CT, conventional MRI and DWI were performed in 64 patients with acute ischemic stroke. The average ADC value and average relative ADC (rADC) value of infarct lesions were measured and statistically analyzed. Results: There were no significant differences in CT, conventional MRI, and DWI signal characteristics between 1 and 7 days after the onset of acute ischemic stroke. The average ADC value and the average rADC value decreased, but the average rADC in the infarct area increased with time. The rADC value was statistically significant at 1, 2, 3, and 4 days after onset (P < 0.05), but not at 5 and 6 days after onset (P > 0.05). Conclusion: With the semi-automatic segmentation method, the characteristics of CT, conventional MRI, and DWI signals combined with the evolution of rADC values over time can help to judge the pathophysiological changes of acute ischemic stroke, providing guidance for ischemic stroke staging and treatment.
APA, Harvard, Vancouver, ISO, and other styles
40

Song, Liangliang, Zhixi Feng, Shuyuan Yang, Xinyu Zhang, and Licheng Jiao. "Self-Supervised Assisted Semi-Supervised Residual Network for Hyperspectral Image Classification." Remote Sensing 14, no. 13 (2022): 2997. http://dx.doi.org/10.3390/rs14132997.

Full text
Abstract:
Due to the scarcity and high cost of labeled hyperspectral image (HSI) samples, many deep learning methods driven by massive data cannot achieve the intended expectations. Semi-supervised and self-supervised algorithms have advantages in coping with this phenomenon. This paper primarily concentrates on applying self-supervised strategies to make strides in semi-supervised HSI classification. Notably, we design an effective and unified self-supervised assisted semi-supervised residual network (SSRNet) framework for HSI classification. The SSRNet contains two branches, i.e., a semi-supervised and a self-supervised branch. The semi-supervised branch improves performance by introducing HSI data perturbation via a spectral feature shift. The self-supervised branch characterizes two auxiliary tasks, including masked bands reconstruction and spectral order forecast, to memorize the discriminative features of HSI. SSRNet can better explore unlabeled HSI samples and improve classification performance. Extensive experiments on four benchmark datasets, including Indian Pines, Pavia University, Salinas, and Houston2013, yield average overall classification accuracies of 81.65%, 89.38%, 93.47% and 83.93%, which sufficiently demonstrates that SSRNet outperforms state-of-the-art methods.
APA, Harvard, Vancouver, ISO, and other styles
41

Zhu, Penghang, Yao Liu, and Junsheng Li. "Optimization and Evaluation of Widely-Used Total Suspended Matter Concentration Retrieval Methods for ZY1-02D’s AHSI Imagery." Remote Sensing 14, no. 3 (2022): 684. http://dx.doi.org/10.3390/rs14030684.

Full text
Abstract:
Total suspended matter concentration (CTSM) is an important parameter in aquatic ecosystem studies. Compared with multispectral satellite images, the Advanced Hyperspectral Imager (AHSI) carried by the ZY1-02D satellite can capture finer spectral features, and the potential for CTSM retrieval is enormous. In this study, we selected seven typical Chinese inland water bodies as the study areas, and recalibrated and validated 11 empirical models and two semi-analytical models for CTSM retrieval using the AHSI data. The results showed that the semi-analytical algorithm based on the 697 nm AHSI-band achieved the highest retrieval accuracy (R2 = 0.88, average unbiased relative error = 34.43%). This is because the remote sensing reflectance at 697 nm was strongly influenced by CTSM, and the AHSI image spectra were in good agreement with the in-situ spectra. Although further validation is still needed in highly turbid waters, this study shows that AHSI images from the ZY1-02D satellite are well suited for CTSM retrieval in inland waters.
APA, Harvard, Vancouver, ISO, and other styles
42

Yang, Xiaoli, Lipei Liu, Zhenwei Li, Yuxin Xia, Zhipeng Fan, and Jiayi Zhou. "Semi-Supervised Seizure Prediction Model Combining Generative Adversarial Networks and Long Short-Term Memory Networks." Applied Sciences 13, no. 21 (2023): 11631. http://dx.doi.org/10.3390/app132111631.

Full text
Abstract:
In recent years, significant progress has been made in seizure prediction using machine learning methods. However, fully supervised learning methods often rely on a large amount of labeled data, which can be costly and time-consuming. Unsupervised learning overcomes these drawbacks but can suffer from issues such as unstable training and reduced prediction accuracy. In this paper, we propose a semi-supervised seizure prediction model called WGAN-GP-Bi-LSTM. Specifically, we utilize the Wasserstein Generative Adversarial Network with Gradient Penalty (WGAN-GP) as the feature learning model, using the Earth Mover’s distance and gradient penalty to guide the unsupervised training process and train a high-order feature extractor. Meanwhile, we built a prediction model based on the Bidirectional Long Short-Term Memory Network (Bi-LSTM), which enhances seizure prediction performance by incorporating the high-order time-frequency features of the brain signals. An independent, publicly available dataset, CHB-MIT, was used to train and validate the model’s performance. The results showed that the model achieved an average AUC of 90.08%, an average sensitivity of 82.84%, and an average specificity of 85.97%. A comparison with previous research demonstrates that our proposed method outperforms traditional adversarial network models and optimizes unsupervised feature extraction for seizure prediction.
APA, Harvard, Vancouver, ISO, and other styles
43

Chen, Fenli, Shengjie Wang, Xixi Wu, et al. "Local Meteoric Water Lines in a Semi-Arid Setting of Northwest China Using Multiple Methods." Water 13, no. 17 (2021): 2380. http://dx.doi.org/10.3390/w13172380.

Full text
Abstract:
The local meteoric water lines (LMWLs) reflect water sources and the degree of sub-cloud evaporation at a specific location. Lanzhou is a semi-arid city located at the margin of the Asian monsoon, and the isotope composition in precipitation around this region has aroused attention in hydrological and paleoclimate studies. Based on an observation network of stable isotopes in precipitation in Lanzhou, LMWLs at four stations (Anning, Yuzhong, Gaolan and Yongdeng) are calculated using the event-based/monthly data and six regression methods (i.e., ordinary least squares, reduced major axis, major axis regressions, and their counterparts weighted using precipitation amount). Compared with the global meteoric water line, the slope and intercept of the LMWL in Lanzhou are smaller. The slopes and intercepts calculated using different methods are slightly different. Among these methods, precipitation-weighted least squares regression (PWLSR) usually had the minimum average value of the root mean sum of squared errors (rmSSEav), indicating that the result of the precipitation-weighted method is relatively stable. Higher precipitation amount and lower air temperature result in larger slopes and intercepts on the annual scale, a relationship not observed in summer.
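Of the six regression variants compared, the precipitation-weighted least squares fit can be sketched as below; this is an illustrative reimplementation under assumed variable names, not the authors' code:

```python
def pwlsr(d18o, d2h, precip):
    """Precipitation-weighted least squares fit of a local meteoric
    water line d2H = slope * d18O + intercept, with each sample
    weighted by its precipitation amount."""
    total = sum(precip)
    # Precipitation-weighted means of the two isotope ratios
    mx = sum(w * x for w, x in zip(precip, d18o)) / total
    my = sum(w * y for w, y in zip(precip, d2h)) / total
    # Weighted covariance over weighted variance gives the slope
    cov = sum(w * (x - mx) * (y - my) for w, x, y in zip(precip, d18o, d2h))
    var = sum(w * (x - mx) ** 2 for w, x in zip(precip, d18o))
    slope = cov / var
    intercept = my - slope * mx
    return slope, intercept
```

Because the covariance and variance share the same weighted means, samples from heavy rainfall events pull the fitted line toward themselves, which is the intended effect of precipitation weighting; for data lying exactly on the global meteoric water line (δ2H = 8·δ18O + 10), the fit recovers slope 8 and intercept 10 regardless of the weights.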
APA, Harvard, Vancouver, ISO, and other styles
44

Xiao, Jin, Buhong Wang, Ruochen Dong, Zhengyang Zhao, and Bofu Zhao. "SatGuard: Satellite Networks Penetration Testing and Vulnerability Risk Assessment Methods." Aerospace 12, no. 5 (2025): 431. https://doi.org/10.3390/aerospace12050431.

Full text
Abstract:
Satellite networks face escalating cybersecurity threats from evolving attack vectors and systemic complexities. This paper proposes SatGuard, a novel framework integrating a three-dimensional penetration testing methodology and a nonlinear risk assessment mechanism tailored for satellite security. To address limitations of conventional tools in handling satellite-specific vulnerabilities, SatGuard employs large language models (LLMs) like GPT-4 and DeepSeek-R1. By leveraging their contextual reasoning and code-generation abilities, SatGuard enables semi-automated vulnerability analysis and exploitation. Validated in a simulated ground station environment, the framework achieved a 73.3% success rate (22/30 attempts) across critical ports, with an average of 5.5 human interactions per test. By bridging AI-driven automation with satellite-specific risk modeling, SatGuard advances cybersecurity for next-generation space infrastructure through scalable, ethically aligned solutions.
APA, Harvard, Vancouver, ISO, and other styles
45

Yin, Shih-Hsun. "Vibration of a Simple Beam Subjected to a Moving Sprung Mass with Initial Velocity and Constant Acceleration." International Journal of Structural Stability and Dynamics 16, no. 03 (2016): 1450109. http://dx.doi.org/10.1142/s0219455414501090.

Full text
Abstract:
In this paper, a semi-analytical solution to the problem of a simply supported beam subjected to a moving sprung mass with initial velocity and constant acceleration or deceleration was presented, which serves as a benchmark for checking the performance of other numerical methods. Herein, a finite element modeling procedure was adopted to tackle the vehicle–bridge interaction, and the responses of the vehicle and bridge were computed by time integration schemes such as the Newmark average acceleration, HHT-α, and Wilson-θ methods. In comparison with the semi-analytical solution, the acceleration response of the beam solved by the Newmark average acceleration method shows spurious high-frequency oscillations caused by the finite element discretization. In contrast, the HHT-α and Wilson-θ methods can dissipate these oscillations and show more accurate results. Moreover, we found that the dynamic responses of the beam and sprung mass were mainly determined by the initial velocity of the sprung mass, but not by the acceleration or deceleration.
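For a single degree of freedom, the Newmark average-acceleration scheme mentioned above (β = 1/4, γ = 1/2) can be sketched as follows; this is a generic textbook implementation under assumed parameter names, not the authors' vehicle-bridge code:

```python
def newmark_sdof(m, c, k, force, dt, n_steps, u0=0.0, v0=0.0):
    """Newmark time stepping with beta=1/4, gamma=1/2 (average
    acceleration) for m*u'' + c*u' + k*u = f(t). Returns the
    displacement history [u(0), u(dt), ..., u(n_steps*dt)]."""
    beta, gamma = 0.25, 0.5
    u, v = u0, v0
    a = (force(0.0) - c * v - k * u) / m  # initial acceleration
    keff = k + gamma * c / (beta * dt) + m / (beta * dt ** 2)
    history = [u]
    for i in range(1, n_steps + 1):
        t = i * dt
        # Effective load collects the known state at the previous step
        rhs = (force(t)
               + m * (u / (beta * dt ** 2) + v / (beta * dt)
                      + (1 / (2 * beta) - 1) * a)
               + c * (gamma * u / (beta * dt) + (gamma / beta - 1) * v
                      + dt * (gamma / (2 * beta) - 1) * a))
        u_new = rhs / keff
        a_new = ((u_new - u) / (beta * dt ** 2) - v / (beta * dt)
                 - (1 / (2 * beta) - 1) * a)
        v_new = v + dt * ((1 - gamma) * a + gamma * a_new)
        u, v, a = u_new, v_new, a_new
        history.append(u)
    return history
```

This variant is unconditionally stable but has no algorithmic damping, so high-frequency content introduced by spatial discretization persists, consistent with the spurious oscillations noted in the abstract; the HHT-α and Wilson-θ schemes add numerical dissipation to suppress it.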
APA, Harvard, Vancouver, ISO, and other styles
46

Veselov, A. V., S. I. Achkasov, O. I. Sushkov, A. I. Moskalev, and I. S. Lantsov. "Clinical and economic efficiency of the loop ileostomy closure by various methods." PHARMACOECONOMICS. Modern pharmacoeconomics and pharmacoepidemiology 11, no. 2 (2018): 38–43. http://dx.doi.org/10.17749/2070-4909.2018.11.2.038-043.

Full text
Abstract:
In the context of optimizing the financial mechanisms of the national healthcare system, introducing the single-channel financing principle, and further developing insurance-based medicine in Russia, competent financial accounting becomes an important element of the entire healthcare system. Aim: to compare the economic effectiveness of various methods of closing a loop ileostomy. Materials and methods: The study included 327 patients randomized into 3 groups. In group 1, the closure of an ileostomy was performed manually with the formation of an end-to-end ileo-ileoanastomosis; in group 2, the anastomosis was formed in the "side-by-side" manner; and in group 3, a semi-automated surgical technique was used for the anastomosis formation. Results: The average cost of treatment (per patient) in groups 1 and 2 was 131,704.90 rubles and 145,473.70 rubles, respectively, while in group 3 the cost was higher at 167,443.60 rubles (p < 0.001). The cost increase in group 3 was mainly due to the cost of a disposable stapler and cassettes. Conclusion: The formation of a manual end-to-end ileo-ileoanastomosis was less budget-consuming than the other methods. The semi-automated procedure based on disposable parts was the most expensive method of closing a loop ileostomy.
APA, Harvard, Vancouver, ISO, and other styles
47

Eden, Carsten. "Relating Lagrangian, Residual, and Isopycnal Means." Journal of Physical Oceanography 42, no. 7 (2012): 1057–64. http://dx.doi.org/10.1175/jpo-d-11-068.1.

Full text
Abstract:
Abstract Three alternative methods of averaging the general conservation equation of a fluid property in a turbulent flow in the Boussinesq approximation are compared: Lagrangian, residual, and isopycnal (or semi-Lagrangian) mean. All methods differentiate consistently but in different ways between effects of advection and irreversible changes of the average property. Because the three average properties differ, the mean transport velocities and the mean irreversible changes in the mean conservation equation differ in general. The Lagrangian and the semi-Lagrangian (or isopycnal) mean frameworks are shown to be approximately equivalent only for weak irreversible changes, small amplitudes of the turbulent fluctuations, and particle excursion predominantly along the mean property gradient. In that case, the divergent Stokes velocity of the Lagrangian mean framework can be replaced in the Lagrangian mean conservation equation by a nondivergent, three-dimensional version of the quasi-Stokes velocity of T. J. McDougall and P. C. McIntosh, for which a closed analytical form for the streamfunction in terms of Eulerian mean quantities is given.
APA, Harvard, Vancouver, ISO, and other styles
48

Ayuningtyas, Puji, Siti Khomsah, and Sudianto Sudianto. "Pelabelan Sentimen Berbasis Semi-Supervised Learning menggunakan Algoritma LSTM dan GRU." JISKA (Jurnal Informatika Sunan Kalijaga) 9, no. 3 (2024): 217–29. http://dx.doi.org/10.14421/jiska.2024.9.3.217-229.

Full text
Abstract:
In the sentiment analysis research process, there are problems when still using manual labeling by humans (expert annotation), related to subjectivity, long turnaround times, and high costs. Another way is to use computer assistance (machine annotation). However, machine annotators also have the problem of being unable to detect sarcastic sentences. Thus, the researchers proposed a sentiment labeling method using Semi-Supervised Learning. Semi-supervised learning is a labeling method that combines human labeling techniques (expert annotation) and machine labeling (machine annotation). This research uses machine annotators in the form of Deep Learning algorithms, namely the Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) algorithms. The word weighting method used in this research is Word2Vec Continuous Bag of Words (CBoW). The results showed that the GRU algorithm tends to have a better accuracy rate than the LSTM algorithm. The average accuracies of the training results of the LSTM and GRU models are 0.904 and 0.913, respectively. In contrast, the average accuracies of labeling by LSTM and GRU are 0.569 and 0.592, respectively.
APA, Harvard, Vancouver, ISO, and other styles
49

Yadav, Kaushal Kumar, Sukanta Dash, Baidya Nath Mandal, and Rajender Parsad. "Construction of Partially Balanced Semi-Latin Rectangles in Block Size Three." INTERNATIONAL JOURNAL OF AGRICULTURAL AND STATISTICAL SCIENCES 21, no. 01 (2025): 167. https://doi.org/10.59467/ijass.2025.21.167.

Full text
Abstract:
Semi-Latin rectangles (SLR) are row-column designs in which each row-column intersection contains the same number of experimental units, denoted k > 1. Moreover, each treatment appears an equal number of times in each row (nr, say) and in each column (nc, say), where nr > 1 and nc > 1 need not be equal. Balanced Semi-Latin rectangles (BSLR) form a subset of SLR, extending the concepts of Latin squares and Semi-Latin squares (SLS). These designs are applicable in various agricultural and industrial experiments, particularly in scenarios where one effect is considered a column effect and the other a row effect, with each intersection accommodating exactly 3 units. This article introduces Partially Balanced Semi-Latin rectangles (PBSLR) and proposes two methods for constructing PBSLR designs with a block size of 3. Additionally, an R package has been developed for generating these designs. KEYWORDS: Semi-Latin rectangles (SLR), Partially balanced semi-Latin rectangles, Canonical efficiency factor, Average efficiency factor.
APA, Harvard, Vancouver, ISO, and other styles
50

Xie, Danny F., Christian Crouzet, Krystal LoPresti, et al. "Semi-automated protocol to quantify and characterize fluorescent three-dimensional vascular images." PLOS ONE 19, no. 5 (2024): e0289109. http://dx.doi.org/10.1371/journal.pone.0289109.

Full text
Abstract:
The microvasculature facilitates gas exchange, provides nutrients to cells, and regulates blood flow in response to stimuli. Vascular abnormalities are an indicator of pathology for various conditions, such as compromised vessel integrity in small vessel disease and angiogenesis in tumors. Traditional immunohistochemistry enables the visualization of tissue cross-sections containing exogenously labeled vasculature. Although this approach can be utilized to quantify vascular changes within small fields of view, it is not a practical way to study the vasculature on the scale of whole organs. Three-dimensional (3D) imaging presents a more appropriate method to visualize the vascular architecture in tissue. Here we describe the complete protocol that we use to characterize the vasculature of different organs in mice encompassing the methods to fluorescently label vessels, optically clear tissue, collect 3D vascular images, and quantify these vascular images with a semi-automated approach. To validate the automated segmentation of vascular images, one user manually segmented one hundred random regions of interest across different vascular images. The automated segmentation results had an average sensitivity of 83±11% and an average specificity of 91±6% when compared to manual segmentation. Applying this procedure of image analysis presents a method to reliably quantify and characterize vascular networks in a timely fashion. This procedure is also applicable to other methods of tissue clearing and vascular labels that generate 3D images of microvasculature.
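The sensitivity and specificity used above to validate the automated segmentation against manual labels are simple voxel-level agreement ratios. A minimal sketch (illustrative names, not the published pipeline):

```python
def sensitivity_specificity(auto_mask, manual_mask):
    """Voxel-level agreement of an automated segmentation with a manual
    reference: sensitivity = TP / (TP + FN), specificity = TN / (TN + FP).
    Inputs are paired flat sequences of 0/1 labels."""
    tp = sum(1 for a, m in zip(auto_mask, manual_mask) if a and m)
    tn = sum(1 for a, m in zip(auto_mask, manual_mask) if not a and not m)
    fp = sum(1 for a, m in zip(auto_mask, manual_mask) if a and not m)
    fn = sum(1 for a, m in zip(auto_mask, manual_mask) if not a and m)
    sensitivity = tp / (tp + fn) if tp + fn else 0.0
    specificity = tn / (tn + fp) if tn + fp else 0.0
    return sensitivity, specificity
```

Sensitivity penalizes vessel voxels the automated method misses, while specificity penalizes background voxels it wrongly labels as vessel, so the two together characterize both under- and over-segmentation.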
APA, Harvard, Vancouver, ISO, and other styles