
Journal articles on the topic 'Segmentation algorithms assessment'


1

Barboriak, Daniel, Katy Peters, Allan Friedman, Henry Friedman, and Annick Desjardins. "NEIM-03. FEASIBILITY OF AUTOMATED ASSESSMENT OF PROGRESSIVE ENHANCEMENT ON MRI IN PATIENTS WITH NEWLY DIAGNOSED HIGH-GRADE GLIOMA USING A FEATURE-BASED ALGORITHM." Neuro-Oncology Advances 3, Supplement_4 (2021): iv7. http://dx.doi.org/10.1093/noajnl/vdab112.024.

Abstract:
BACKGROUND: Approximately 50% of patients with newly diagnosed high-grade glioma (HGG) develop progressive enhancement between their post-operative MRI scan and 12 weeks after radiation and temozolomide. Inter-reader variability in the assessment of progressive enhancement in this patient group is a significant barrier to designing multi-center biomarker trials to distinguish true progression from pseudoprogression. Although enhancement segmentation algorithms have become more widely available, more automated and reproducible techniques to identify patients who develop progressive enhancement are needed to facilitate acquisition of non-standard-of-care biomarkers when this occurs. We explored the feasibility of using a feature-based algorithm in tandem with freely available, open-source automated segmentation algorithms to identify this subset of patients. METHODS: An automated algorithm using subtraction of registered segmentations to detect new areas of localized thickness of enhancement was developed. Criteria for feasibility (50% within the 95% CI of percent of patients identified, and sensitivity of >85% of patients assessed as progressed [P+] identified) were determined prospectively. The algorithm was implemented across five different automated enhancement segmentation techniques, then evaluated using a retrospective dataset of 73 patients with newly diagnosed HGG (age 50.8±13.2 years, 37 men, 36 women, 50 GBM, 23 Grade III). Standardized post-baseline brain tumor imaging protocol MR acquisitions were obtained on 1.5T and 3T scanners (GE and Siemens). On chart review, 53% of patients were assessed by neuroradiologists and/or neuro-oncologists as P+ (progression vs. pseudoprogression). RESULTS: 50% was within the 95% CI of the percent of patients identified for all five segmentation algorithms. Sensitivity was over 85% for three segmentation algorithms, with the MIC-DKFZ algorithm having the highest sensitivity at 92%. For this algorithm, specificity was 77%, PPV was 81%, and NPV was 90%. CONCLUSION: A feature-based algorithm in tandem with open-source segmentation algorithms showed preliminary feasibility for automated identification of patients with progressive enhancement.
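The sensitivity, specificity, PPV, and NPV reported above are standard confusion-matrix ratios; as a refresher, a minimal sketch (the counts below are illustrative placeholders, not the study's data):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard confusion-matrix ratios for a binary assessment
    (here: algorithm-flagged progression vs. clinician assessment)."""
    return {
        "sensitivity": tp / (tp + fn),  # true-positive rate
        "specificity": tn / (tn + fp),  # true-negative rate
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
    }

# Illustrative counts only, not taken from the paper.
m = diagnostic_metrics(tp=36, fp=8, fn=3, tn=26)
```

Note that PPV and NPV, unlike sensitivity and specificity, depend on the prevalence of progression in the cohort.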
2

De Kerf, Thomas, Navid Hasheminejad, Johan Blom, and Steve Vanlanduit. "Qualitative Comparison of 2D and 3D Atmospheric Corrosion Detection Methods." Materials 14, no. 13 (2021): 3621. http://dx.doi.org/10.3390/ma14133621.

Abstract:
In this article, we report the use of a Confocal Laser Scanning Microscope (CLSM) to apply a qualitative assessment of atmospheric corrosion on steel samples. From the CLSM, we obtain high-resolution images, together with a 3D heightmap. The performance of four different segmentation algorithms that use the high-resolution images as input is qualitatively assessed and discussed. A novel 3D segmentation algorithm based on the shape index is presented and compared to the 2D segmentation algorithms. From this analysis, we conclude that there is a significant difference in performance between the 2D segmentation algorithms and that the 3D method can be an added value to the detection of corrosion.
3

Węgliński, Tomasz, and Anna Fabijańska. "Survey of Modern Image Segmentation Algorithms on CT Scans of Hydrocephalic Brains." Image Processing & Communications 17, no. 4 (2012): 223–30. http://dx.doi.org/10.2478/v10248-012-0050-y.

Abstract:
This paper presents the concept of applying image segmentation algorithms for precise extraction of cerebrospinal fluid (CSF) from CT brain scans. Accurate segmentation of the CSF from the intracranial brain area is crucial for further reliable analysis and quantitative assessment of hydrocephalus. The presented research aimed to compare the effectiveness of three modern segmentation approaches used for this purpose; specifically, the random walk, level set, and min-cut/max-flow algorithms were considered. The visual and numerical comparison of the segmentation results leads to the conclusion that the most effective algorithm for the considered problem is the level set, although positive medical verification of the results revealed that any of the considered algorithms can be successfully applied in diagnostic applications.
4

Warfield, Simon K., Kelly H. Zou, and William M. Wells. "Validation of image segmentation by estimating rater bias and variance." Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 366, no. 1874 (2008): 2361–75. http://dx.doi.org/10.1098/rsta.2008.0040.

Abstract:
The accuracy and precision of segmentations of medical images has been difficult to quantify in the absence of a ‘ground truth’ or reference standard segmentation for clinical data. Although physical or digital phantoms can help by providing a reference standard, they do not allow the reproduction of the full range of imaging and anatomical characteristics observed in clinical data. An alternative assessment approach is to compare with segmentations generated by domain experts. Segmentations may be generated by raters who are trained experts or by automated image analysis algorithms. Typically, these segmentations differ due to intra-rater and inter-rater variability. The most appropriate way to compare such segmentations has been unclear. We present here a new algorithm to enable the estimation of performance characteristics, and a true labelling, from observations of segmentations of imaging data where segmentation labels may be ordered or continuous measures. This approach may be used with, among others, surface, distance transform or level-set representations of segmentations, and can be used to assess whether or not a rater consistently overestimates or underestimates the position of a boundary.
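The paper estimates rater bias and variance jointly with the unknown true labelling; as a much-simplified illustration of the two quantities themselves, assuming a known reference boundary and a signed-distance representation (both assumptions, not the paper's actual EM-style estimator):

```python
from statistics import mean, pvariance

def rater_bias_and_variance(signed_offsets):
    """Given signed distances (e.g. mm) from a rater's contour to a
    reference boundary (positive = rater places the boundary outside
    the reference), the mean is the rater's bias (systematic over- or
    underestimation) and the variance captures their inconsistency.
    The paper estimates these without assuming the reference is known."""
    return mean(signed_offsets), pvariance(signed_offsets)

# A rater whose boundary sits consistently ~1 mm outside the reference:
bias, var = rater_bias_and_variance([0.8, 1.2, 1.0, 0.6, 1.4])
```

A positive bias here would flag a rater who consistently overestimates the boundary position, which is exactly the behaviour the paper's algorithm is designed to detect.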
5

Sudarvizhi, D., and M. Akila. "Wound Assessment in Pedobarography Using Image Segmentation Techniques." Journal of Medical Imaging and Health Informatics 11, no. 5 (2021): 1403–9. http://dx.doi.org/10.1166/jmihi.2021.3657.

Abstract:
Pedobarography is elementary for kinetic gait analysis and for the exploration of multiple neurological and musculoskeletal diseases. One in 11 adults suffers from diabetes mellitus (DM), and foot ulcers (FU) are among the most harmful chronic complications arising from it. Recently, there has been a growing awareness that a better understanding of the biomechanical factors underlying the diabetic ulcer could improve control of the disease, with considerable socio-economic effects. Diabetic foot ulcers (DFU) are the primary concern of this health issue and, if not addressed promptly, can result in amputation. In this research, image segmentation algorithms and perimeter pixel comparison are applied for wound classification using the Adaptive K-means, Clustering K-means, Fuzzy C-means, and region-growing approaches; among them, Fuzzy C-means achieves the greatest accuracy of perimeter pixel values, which are 603, 462, and 356 pixels in stages one, two, and three. The execution times of all four algorithms are observed, and Adaptive K-means yields the least execution time for the foot ulcer simulation. A self-assessment of wounds caused by diabetic foot ulcers employing image segmentation is developed. Ultimately, the objective of the image analysis of the foot ulcer is the dynamic evaluation and definition of high-pressure regions in a diabetic patient's foot, based on the perimeter pixel comparison and execution time.
6

Liu, Hongbing, Xiaoyu Diao, and Huaping Guo. "Quantitative analysis for image segmentation by granular computing clustering from the view of set." Journal of Algorithms & Computational Technology 13 (January 2019): 174830181983305. http://dx.doi.org/10.1177/1748301819833050.

Abstract:
As a partition method over sets, granular computing clustering is applied to image segmentation and evaluated, from the viewpoint of sets, by global consistency error, variation of information, and the Rand index. First, quantitative assessment of clustering is formulated from the viewpoint of sets. Second, granular computing clustering algorithms are induced by distance formulas: granules with different shapes are defined as vectors under different distance norms; in particular, an atomic granule is induced by a point of the space, and the union operator, which realizes the transformation between two granule spaces, is used to form the clustering algorithms. Third, the image segmentations produced by granular computing clustering are evaluated from the viewpoint of sets by global consistency error, variation of information, and the Rand index. Experiments on color images selected from BSD300 show the superiority and feasibility of granular computing clustering for image segmentation compared with k-means and fuzzy c-means.
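Of the three set-based measures mentioned, the Rand index has the simplest pair-counting definition; a minimal sketch (plain Python for illustration, not the paper's implementation):

```python
from itertools import combinations

def rand_index(labels_a, labels_b):
    """Fraction of point pairs on which two segmentations agree:
    either both put the pair in the same segment, or both put it
    in different segments. 1.0 means identical partitions."""
    pairs = list(combinations(range(len(labels_a)), 2))
    agreements = sum(
        (labels_a[i] == labels_a[j]) == (labels_b[i] == labels_b[j])
        for i, j in pairs
    )
    return agreements / len(pairs)

# Label names differ but the partition is the same, so the index is 1.0:
rand_index([0, 0, 1, 1], [1, 1, 0, 0])
```

Because it counts pairs rather than matching labels, the Rand index is invariant to renaming segments, which is what makes it suitable for comparing unsupervised segmentations.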
7

Ghattas, Andrew E., Reinhard R. Beichel, and Brian J. Smith. "A unified framework for simultaneous assessment of accuracy, between-, and within-reader variability of image segmentations." Statistical Methods in Medical Research 29, no. 11 (2020): 3135–52. http://dx.doi.org/10.1177/0962280220920894.

Abstract:
Medical imaging is utilized in a wide range of clinical applications. To enable a detailed quantitative analysis, medical images must often be segmented to label (delineate) structures of interest; for example, a tumor. Frequently, manual segmentation is utilized in clinical practice (e.g., in radiation oncology) to define such structures of interest. However, it can be quite time consuming and subject to substantial between-, and within-reader variability. A more reproducible, less variable, and more time efficient segmentation approach is likely to improve medical treatment. This potential has spurred the development of segmentation algorithms which harness computational power. Segmentation algorithms’ widespread use is limited due to difficulty in quantifying their performance relative to manual segmentation, which itself is subject to variation. This paper presents a statistical model which simultaneously estimates segmentation method accuracy, and between- and within-reader variability. The model is simultaneously fit for multiple segmentation methods within a unified Bayesian framework. The Bayesian model is compared to other methods used in literature via a simulation study, and application to head and neck cancer PET/CT data. The modeling framework is flexible and can be employed in numerous comparison applications. Several alternate applications are discussed in the paper.
8

Ramot, Yuval, Gil Zandani, Zecharia Madar, Sanket Deshmukh, and Abraham Nyska. "Utilization of a Deep Learning Algorithm for Microscope-Based Fatty Vacuole Quantification in a Fatty Liver Model in Mice." Toxicologic Pathology 48, no. 5 (2020): 702–7. http://dx.doi.org/10.1177/0192623320926478.

Abstract:
Quantification of fatty vacuoles in the liver, with differentiation from lumina of liver blood vessels and bile ducts, is an example where the traditional semiquantitative pathology assessment can be enhanced with artificial intelligence (AI) algorithms. Using glass slides of mice liver as a model for nonalcoholic fatty liver disease, a deep learning AI algorithm was developed. This algorithm uses a segmentation framework for vacuole quantification and can be deployed to analyze live histopathology fields during the microscope-based pathology assessment. We compared the manual semiquantitative microscope-based assessment with the quantitative output of the deep learning algorithm. The deep learning algorithm was able to recognize and quantify the percent of fatty vacuoles, exhibiting a strong and significant correlation (r = 0.87, P < .001) between the semiquantitative and quantitative assessment methods. The use of deep learning algorithms for difficult quantifications within the microscope-based pathology assessment can help improve outputs of toxicologic pathology workflows.
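The agreement between the semiquantitative and quantitative methods is summarized by a Pearson correlation; a minimal sketch of that statistic on hypothetical paired scores (illustrative values, not the study's data):

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation between paired measurements, e.g. manual
    semiquantitative grades vs. algorithmic vacuole percentages."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical paired scores: manual grade vs. algorithm's vacuole %.
manual = [1, 1, 2, 3, 3, 4]
automated = [4.0, 6.5, 11.0, 18.0, 16.5, 25.0]
r = pearson_r(manual, automated)
```

A high r with discrete manual grades against a continuous algorithmic output is what justifies replacing the semiquantitative step, since the two rankings move together.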
9

Sun, H., Y. Ding, Y. Huang, and G. Wang. "CRITICAL ASSESSMENT OF OBJECT SEGMENTATION IN AERIAL IMAGE USING GEO-HAUSDORFF DISTANCE." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLI-B4 (June 13, 2016): 187–94. http://dx.doi.org/10.5194/isprsarchives-xli-b4-187-2016.

Abstract:
Aerial imagery records large-range earth objects with ever-improving spatial and radiometric resolution. It has become a powerful tool for earth observation, land-coverage survey, geographical census, etc., and helps delineate the boundaries of different kinds of objects on the earth both manually and automatically. In light of the geo-spatial correspondence between the pixel locations of an aerial image and the spatial coordinates of ground objects, there is an increasing need for super-pixel segmentation and high-accuracy positioning of objects in aerial images. Besides the commercial software packages eCognition and ENVI, many algorithms have been developed in the literature to segment objects in aerial images. But how to evaluate the segmentation results remains a challenge, especially in the context of the geo-spatial correspondence. The Geo-Hausdorff Distance (GHD) is proposed to measure the geo-spatial distance between the results of various object segmentations, whether produced from manual ground truth or by automatic algorithms. Based on an early-breaking and random-sampling design, the GHD calculates the geographical Hausdorff distance with nearly linear complexity. Segmentation results of several state-of-the-art algorithms, including those of the commercial packages, are evaluated on a diverse set of aerial images; these have different signal-to-noise ratios around the object boundaries and are hard to trace correctly even for human operators. The GHD value is analyzed to comprehensively measure the suitability of different object segmentation methods for aerial images of different spatial resolutions. By critically assessing the strengths and limitations of the existing algorithms, the paper provides valuable insight and guidelines for extensive research in automating object detection and classification of aerial imagery in the nation-wide geographic census. It is also promising for the optimal design of operational specifications of remote sensing interpretation under the constraints of limited resources.
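The early-breaking idea the abstract refers to can be sketched for the plain (non-geographic) directed Hausdorff distance; the coordinates below are arbitrary points, and the geographic projection of the actual GHD is omitted:

```python
from math import dist, inf
import random

def directed_hausdorff(points_a, points_b, seed=0):
    """Directed Hausdorff distance max_{a in A} min_{b in B} ||a - b||
    with the early-break optimisation: once an inner distance drops below
    the running maximum, point a can no longer raise that maximum, so the
    inner loop stops. Random shuffling of B makes early breaks likely."""
    points_b = list(points_b)
    random.Random(seed).shuffle(points_b)
    cmax = 0.0
    for a in points_a:
        cmin = inf
        for b in points_b:
            d = dist(a, b)
            if d < cmax:        # a cannot increase the running maximum
                cmin = d
                break
            if d < cmin:
                cmin = d
        if cmax < cmin < inf:
            cmax = cmin
    return cmax

directed_hausdorff([(0, 0), (1, 0)], [(3, 4)])
```

The symmetric Hausdorff distance is then the maximum of the two directed distances; with shuffling, the early break makes the average cost close to linear rather than quadratic, which matches the "nearly linear complexity" claim.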
10

Sun, H., Y. Ding, Y. Huang, and G. Wang. "CRITICAL ASSESSMENT OF OBJECT SEGMENTATION IN AERIAL IMAGE USING GEO-HAUSDORFF DISTANCE." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLI-B4 (June 13, 2016): 187–94. http://dx.doi.org/10.5194/isprs-archives-xli-b4-187-2016.

11

Sabeti, Malihe, Laleh Karimi, Naemeh Honarvar, Mahsa Taghavi, and Reza Boostani. "QUANTUMIZED GENETIC ALGORITHM FOR SEGMENTATION AND OPTIMIZATION TASKS." Biomedical Engineering: Applications, Basis and Communications 32, no. 03 (2020): 2050022. http://dx.doi.org/10.4015/s1016237220500222.

Abstract:
Specialists mostly assess the skeletal maturity of short-height children by observing their left-hand X-ray image (radiograph), but precise separation of the areas capturing the bones and growth plates is not always possible by visual inspection. Although a few attempts have been made to estimate a suitable threshold for segmenting digitized radiograph images, their results are still not promising. To estimate segmentation thresholds finely, this paper presents the quantumized genetic algorithm (QGA), an integration of a quantum representation scheme into the basic genetic algorithm (GA). This hybridization of quantum-inspired computing and the GA yields an efficient hybrid framework that achieves a better balance between exploration and exploitation capabilities. To assess the performance of the proposed quantitative bone maturity assessment framework, we collected an exclusive dataset of 65 left-hand digitized images from children aged 3 to 13 years. Thresholds are estimated by the proposed method, and the results are compared to the harmony search algorithm (HSA), particle swarm optimization (PSO), quantumized PSO, and the standard GA. In addition, for further comparison of the proposed method and the other evolutionary algorithms, ten known benchmarks of complex functions are considered for the optimization task. Our results in both the segmentation and optimization tasks show that QGA and GA provide the best optimization results in comparison with the other mentioned algorithms. Moreover, the empirical results demonstrate that QGA provides better diversity than GA.
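Threshold selection by a genetic algorithm can be sketched with Otsu's between-class variance as the fitness function; this toy GA is an assumption for illustration only, and it omits the paper's quantum chromosome representation entirely:

```python
import random

def between_class_variance(values, t):
    """Otsu's criterion for a candidate threshold t on a list of grey
    values: weight-product times squared distance of the class means."""
    lo = [v for v in values if v <= t]
    hi = [v for v in values if v > t]
    if not lo or not hi:
        return 0.0
    w0, w1 = len(lo) / len(values), len(hi) / len(values)
    m0, m1 = sum(lo) / len(lo), sum(hi) / len(hi)
    return w0 * w1 * (m0 - m1) ** 2

def ga_threshold(values, pop=20, gens=30, seed=1):
    """Toy GA over integer thresholds 0..255: size-2 tournament
    selection plus small integer mutation. Purely illustrative."""
    rng = random.Random(seed)
    popn = [rng.randint(0, 255) for _ in range(pop)]
    for _ in range(gens):
        nxt = []
        for _ in range(pop):
            a, b = rng.sample(popn, 2)  # tournament of two
            fit = lambda t: between_class_variance(values, t)
            parent = a if fit(a) >= fit(b) else b
            nxt.append(min(255, max(0, parent + rng.randint(-8, 8))))
        popn = nxt
    return max(popn, key=lambda t: between_class_variance(values, t))

# Bimodal toy data: dark background around 30, bright bone around 200.
values = [28, 30, 32, 29, 31] * 10 + [198, 200, 202, 199, 201] * 10
t = ga_threshold(values)
```

Any threshold in the gap between the two modes maximizes the criterion, so the GA only has to land the population inside that plateau; the paper's quantum representation is aimed at doing this with better population diversity.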
12

Coric, Danko, Axel Petzold, Bernard M. J. Uitdehaag, and Lisanne J. Balk. "Software updates of OCT segmentation algorithms influence longitudinal assessment of retinal atrophy." Journal of the Neurological Sciences 387 (April 2018): 16–20. http://dx.doi.org/10.1016/j.jns.2018.01.020.

13

Sanches, Silvio R. R., Antonio C. Sementille, Romero Tori, Ricardo Nakamura, and Valdinei Freire. "PAD: a perceptual application-dependent metric for quality assessment of segmentation algorithms." Multimedia Tools and Applications 78, no. 22 (2019): 32393–417. http://dx.doi.org/10.1007/s11042-019-07958-7.

14

Wu, Xiangqian, Xin Shen, Lin Cao, Guibin Wang, and Fuliang Cao. "Assessment of Individual Tree Detection and Canopy Cover Estimation using Unmanned Aerial Vehicle based Light Detection and Ranging (UAV-LiDAR) Data in Planted Forests." Remote Sensing 11, no. 8 (2019): 908. http://dx.doi.org/10.3390/rs11080908.

Abstract:
Canopy cover is a key forest structural parameter that is commonly used in forest inventory, sustainable forest management, and maintaining ecosystem services. Recently, much attention has been paid to the use of unmanned aerial vehicle (UAV)-based light detection and ranging (LiDAR) due to the flexibility, convenience, and high point density of this method. In this study, we used UAV-based LiDAR data with an individual tree segmentation-based method (ITSM), a canopy height model-based method (CHMM), and a statistical model method (SMM) with LiDAR metrics to estimate the canopy cover (CC) of a pure ginkgo (Ginkgo biloba L.) planted forest in China. First, each individual tree within the plot was segmented using the watershed, polynomial fitting, individual tree crown segmentation (ITCS), and point cloud segmentation (PCS) algorithms, and the canopy cover was calculated from the segmented individual tree crowns (ITSM). Second, the CHM-based method, which applies a height threshold to the CHM, was used to estimate the canopy cover in each plot. Third, the canopy cover was estimated using a multiple linear regression (MLR) model and assessed by leave-one-out cross validation. Finally, the performance of the three canopy cover estimation methods was evaluated and compared against the canopy cover from the field data. The results demonstrated that the PCS algorithm had the highest accuracy (F = 0.83), followed by the ITCS (F = 0.82) and watershed (F = 0.79) algorithms; the polynomial fitting algorithm had the lowest accuracy (F = 0.77). In the sensitivity analysis, the three CHM-based algorithms (i.e., watershed, polynomial fitting, and ITCS) had the highest accuracy when the CHM resolution was 0.5 m, and the PCS algorithm had the highest accuracy when the distance threshold was 2 m. In addition, the ITSM had the highest accuracy in estimating canopy cover (R2 = 0.92, rRMSE = 3.5%), followed by the CHMM (R2 = 0.94, rRMSE = 5.4%), and the SMM had relatively low accuracy (R2 = 0.80, rRMSE = 5.9%). The UAV-based LiDAR data can be effectively used for individual tree crown segmentation and canopy cover estimation at plot level, and the CC estimation methods can provide references for forest inventory, sustainable management, and ecosystem assessment.
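The CHM-based method reduces to thresholding the canopy height model and counting cells; a minimal sketch, assuming a 2-D grid of heights in metres and a common 2 m canopy cutoff (the grid values are invented for illustration):

```python
def canopy_cover_from_chm(chm, height_threshold=2.0):
    """CHM-based canopy cover: the fraction of canopy-height-model
    cells whose height exceeds a threshold separating canopy from
    ground and low vegetation. `chm` is a 2-D grid of heights (m)."""
    cells = [h for row in chm for h in row]
    canopy = sum(1 for h in cells if h > height_threshold)
    return canopy / len(cells)

# Invented 3x3 CHM tile: four cells carry canopy above 2 m.
chm = [[0.1, 0.3, 5.2],
       [6.8, 0.0, 4.1],
       [0.2, 7.5, 0.4]]
canopy_cover_from_chm(chm)
```

The CHM resolution matters precisely because each cell votes as a whole, which is why the paper's sensitivity analysis sweeps the cell size.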
15

Zhou, Zhen Huan. "Comparison and Assessment of Different Image Registration Algorithms Based on ITK." Applied Mechanics and Materials 442 (October 2013): 515–19. http://dx.doi.org/10.4028/www.scientific.net/amm.442.515.

Abstract:
Many image registration algorithms have been proposed in recent years; which one is better or faster than another can only be validated by experiment. In this paper, ITK (Insight Segmentation and Registration Toolkit) is used as a framework for verifying different algorithms. The ITK framework requires the following components: a fixed image, a moving image, a transform, a metric, an interpolator, and an optimizer. Dozens of classical algorithms are tested under the same conditions, and their experimental results are demonstrated with different metrics, interpolators, or optimizers. By comparing registration time and accuracy, practical and useful algorithms are selected for developing image analysis software. These kinds of experiments are very valuable for software engineering: they can shorten the software development cycle and greatly reduce development costs.
16

Fu, Zhongliang, Yangjie Sun, Liang Fan, and Yutao Han. "Multiscale and Multifeature Segmentation of High-Spatial Resolution Remote Sensing Images Using Superpixels with Mutual Optimal Strategy." Remote Sensing 10, no. 8 (2018): 1289. http://dx.doi.org/10.3390/rs10081289.

Abstract:
High spatial resolution (HSR) image segmentation is considered to be a major challenge for object-oriented remote sensing applications that have been extensively studied in the past. In this paper, we propose a fast and efficient framework for multiscale and multifeatured hierarchical image segmentation (MMHS). First, the HSR image pixels were clustered into a small number of superpixels using a simple linear iterative clustering algorithm (SLIC) on modern graphic processing units (GPUs), and then a region adjacency graph (RAG) and nearest neighbors graph (NNG) were constructed based on adjacent superpixels. At the same time, the RAG and NNG successfully integrated spectral information, texture information, and structural information from a small number of superpixels to enhance its expressiveness. Finally, a multiscale hierarchical grouping algorithm was implemented to merge these superpixels using local-mutual best region merging (LMM). We compared the experiments with three state-of-the-art segmentation algorithms, i.e., the watershed transform segmentation (WTS) method, the mean shift (MS) method, the multiresolution segmentation (MRS) method integrated in commercial software, eCognition9, on New York HSR image datasets, and the ISPRS Potsdam dataset. Computationally, our algorithm was dozens of times faster than the others, and it also had the best segmentation effect through visual assessment. The supervised and unsupervised evaluation results further proved the superiority of the MMHS algorithm.
17

Nava, Rodrigo, Duc Fehr, Frank Petry, and Thomas Tamisier. "Objective Tire Footprint Segmentation Assessment from High-Speed Videos." Tire Science and Technology 48, no. 4 (2019): 315–28. http://dx.doi.org/10.2346/tire.19.180203.

Abstract:
The tire establishes the contact between the vehicle and the road. It transmits all forces and moments to the road via its contact patch or footprint and vice versa. The visual inspection of this contact patch using modern optical equipment and image processing techniques is essential for evaluating tire performance. Quantitative image-based analysis can be useful for accurate determination of the tire footprint under various operating conditions. Very frequently, methods used in tire footprint segmentation cannot be assessed quantitatively due to the lack of a reference contact area to which the different algorithms could be compared. In this work, we present a novel methodology to characterize the dynamic tire footprint and evaluate the quality of its segmentation from various video sequences in the absence of a ground truth.
18

Sangari A, Siva, and Saraswady D. "Analyzing the Optimal Performance of Pest Image Segmentation using Non Linear Objective Assessments." International Journal of Electrical and Computer Engineering (IJECE) 6, no. 6 (2016): 2789. http://dx.doi.org/10.11591/ijece.v6i6.11564.

Abstract:
In the modern agricultural field, pest detection plays a major role in plant cultivation. The presence of whitefly pests, which cause leaf discoloration, is a major obstacle to increasing the production rate of agricultural fields. This emphasizes the necessity of image segmentation, which divides an image into parts that have strong correlations with objects, so as to reflect the actual information collected from the real world. Image processing is affected by illumination conditions, random noise, and environmental disturbances due to atmospheric pressure or temperature fluctuation; the quality of pest images is directly affected by the atmospheric medium, pressure, and temperature. Fuzzy c-means (FCM) has been proposed to identify the accurate location of whitefly pests. The watershed transform has interesting properties that make it useful for many different image segmentation applications: it is simple and intuitive, can be parallelized, and always produces a complete division of the image. However, when applied to pest image analysis, it has important drawbacks (over-segmentation, sensitivity to noise). In this paper, pest image segmentation using marker-controlled watershed segmentation is presented. The objective of this paper is to segment the pest image and compare the results of the fuzzy c-means algorithm and the marker-controlled watershed transformation. The performance of the image segmentation algorithms is compared using nonlinear objective assessments, i.e., quantitative measures such as structural content, peak signal-to-noise ratio, normalized correlation coefficient, average difference, and normalized absolute error. The experimental results show that the fuzzy c-means algorithm performs better than the watershed transformation algorithm in processing pest images.
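Among the objective measures listed, peak signal-to-noise ratio is the most widely used; a minimal sketch on flattened image data (the pixel values below are illustrative only):

```python
from math import log10

def psnr(reference, output, peak=255):
    """Peak signal-to-noise ratio between a reference image and a
    processed/segmented output, both given as flat pixel sequences.
    Higher is better; identical images give infinity."""
    pairs = list(zip(reference, output))
    mse = sum((a - b) ** 2 for a, b in pairs) / len(pairs)
    if mse == 0:
        return float("inf")
    return 10 * log10(peak ** 2 / mse)

# Illustrative 4-pixel example: a uniform error of 10 grey levels.
psnr([0, 0, 0, 0], [10, 10, 10, 10])
```

PSNR rewards small per-pixel error against a reference, which is why such measures require a ground-truth or reference image, in contrast to the unsupervised evaluation problems discussed elsewhere in this list.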
19

Sangari A, Siva, and Saraswady D. "Analyzing the Optimal Performance of Pest Image Segmentation using Non Linear Objective Assessments." International Journal of Electrical and Computer Engineering (IJECE) 6, no. 6 (2016): 2789. http://dx.doi.org/10.11591/ijece.v6i6.pp2789-2796.

APA, Harvard, Vancouver, ISO, and other styles
20

Swinnen, Thijs Willem, Milica Milosevic, Sabine Van Huffel, Wim Dankaerts, Rene Westhovens, and Kurt de Vlam. "Instrumented BASFI (iBASFI) Shows Promising Reliability and Validity in the Assessment of Activity Limitations in Axial Spondyloarthritis." Journal of Rheumatology 43, no. 8 (2016): 1532–40. http://dx.doi.org/10.3899/jrheum.150439.

Full text
Abstract:
Objective. The Bath Ankylosing Spondylitis Functional Index (BASFI) is, to our knowledge, the most popular method to assess activity capacity in axial spondyloarthritis (axSpA). It is endorsed by the Assessment of SpondyloArthritis international Society, but it may be subject to recall bias or aberrant self-judgments in individual patients. Therefore, we aimed to (1) develop the instrumented BASFI (iBASFI) by adding a body-worn accelerometer with automated algorithms to performance-based measurements (PBM), (2) study the iBASFI's core psychometric properties, and (3) reduce the number of iBASFI items. Methods. Twenty-eight patients with axSpA wore a 2-axial accelerometer while completing 12 PBM derived from the BASFI. A chronometer and both manual and automated algorithm-based acceleration segmentation identified movement time. Test-retest trials and methods (algorithm vs. manual segmentation/chronometer/BASFI) were compared with ICC, the standard error of measurement [as a percentage of movement time (SEM%)], and Spearman ρ correlation coefficients. Linear regression identified the optimal set of reliable iBASFI PBM. Results. Good to excellent test-retest reliability was found for 8/12 iBASFI items (ICC range 0.812–0.997, SEM range 0.4–30.4%), typically for repeated and fast movements. Automated algorithms excellently mimicked manual segmentation (ICC range 0.900–0.998) and the chronometer (ICC range 0.878–0.998) for 10/12 iBASFI items. Construct validity compared with the BASFI was confirmed for 7/12 iBASFI items (δ range 0.504–0.755). Together, the sit-to-stand speed test (stBeta 0.483), cervical rotation (stBeta −0.392), and height (stBeta −0.375) explained 59% of the variance in the BASFI (p < 0.01). Conclusion. The proof-of-concept iBASFI showed promising reliability and validity in measuring activity capacity. The number of the iBASFI's PBM may be minimized, but further validation in larger axSpA cohorts is needed before clinical use.
APA, Harvard, Vancouver, ISO, and other styles
21

Bellapu, Rajendra Prasad, et al. "PERFORMANCE COMPARISON OF UNSUPERVISED SEGMENTATION ALGORITHMS ON RICE, GROUNDNUT, AND APPLE PLANT LEAF IMAGES." INFORMATION TECHNOLOGY IN INDUSTRY 9, no. 2 (2021): 1090–105. http://dx.doi.org/10.17762/itii.v9i2.457.

Full text
Abstract:
This paper focuses on plant leaf image segmentation, considering various unsupervised segmentation techniques for automatic plant leaf disease detection. Segmented plant leaves are crucial in the process of automatic disease detection, quantification, and classification of plant diseases. Accurate and efficient assessment of plant diseases is required to avoid economic, social, and ecological losses, but this may not be easy to achieve in practice due to multiple factors; in particular, it is challenging to segment the affected area from images with complex backgrounds. Thus, robust semantic segmentation for automatic recognition and analysis of plant leaf disease is in high demand in the area of precision agriculture, which calls for an accurate and reliable technique for plant leaf segmentation. We propose a hybrid variant that incorporates Graph Cut (GC) and Multi-Level Otsu (MOTSU) in this paper, and we compare the segmentation performance of various unsupervised segmentation algorithms on rice, groundnut, and apple plant leaf images. Boundary Displacement error (BDe), Global Consistency error (GCe), Variation of Information (VoI), and Probability Rand index (PRi) are the index metrics used to evaluate the performance of the proposed model. Comparison of the simulation outcomes demonstrates that our proposed technique, Graph Cut based Multi-level Otsu (GCMO), provides better segmentation results than other existing unsupervised algorithms.
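Multi-level Otsu, one half of the GCMO hybrid described above, extends Otsu's method to several thresholds by maximizing the between-class variance over more than two classes. A brute-force two-threshold sketch (illustrative only, not the authors' implementation):

```python
import numpy as np

def two_level_otsu(img, bins=256):
    """Exhaustive two-threshold Otsu: choose (t1, t2) maximizing the
    between-class variance of the three classes [0, t1), [t1, t2),
    [t2, bins). O(bins^2) search, fine for 8-bit images."""
    hist, _ = np.histogram(img, bins=bins, range=(0, bins))
    p = hist / hist.sum()
    levels = np.arange(bins)
    mu_T = float(np.sum(levels * p))          # global mean intensity
    best, best_t = -1.0, (1, 2)
    for t1 in range(1, bins - 1):
        for t2 in range(t1 + 1, bins):
            var = 0.0
            for lo, hi in ((0, t1), (t1, t2), (t2, bins)):
                w = p[lo:hi].sum()            # class probability
                if w > 0:
                    mu = np.sum(levels[lo:hi] * p[lo:hi]) / w
                    var += w * (mu - mu_T) ** 2
            if var > best:
                best, best_t = var, (t1, t2)
    return best_t
```

On an image with three well-separated intensity clusters, the returned thresholds fall in the gaps between the clusters.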
APA, Harvard, Vancouver, ISO, and other styles
22

Kriti, Jitendra Virmani, and Ravinder Agarwal. "Assessment of despeckle filtering algorithms for segmentation of breast tumours from ultrasound images." Biocybernetics and Biomedical Engineering 39, no. 1 (2019): 100–121. http://dx.doi.org/10.1016/j.bbe.2018.10.002.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Poudel, Prabal, Alfredo Illanes, Debdoot Sheet, and Michael Friebe. "Evaluation of Commonly Used Algorithms for Thyroid Ultrasound Images Segmentation and Improvement Using Machine Learning Approaches." Journal of Healthcare Engineering 2018 (September 23, 2018): 1–13. http://dx.doi.org/10.1155/2018/8087624.

Full text
Abstract:
The thyroid is one of the largest endocrine glands in the human body and is involved in several body mechanisms, such as controlling protein synthesis, the body's sensitivity to other hormones, and the use of energy sources. Hence, it is of prime importance to track the shape and size of the thyroid over time in order to evaluate its state. Thyroid segmentation and volume computation are important tools that can be used for thyroid state tracking and assessment. Most of the proposed approaches are not automatic and require a long time to correctly segment the thyroid. In this work, we compare three different nonautomatic segmentation algorithms (active contours without edges, graph cut, and a pixel-based classifier) in freehand three-dimensional ultrasound imaging in terms of accuracy, robustness, ease of use, level of human interaction required, and computation time. We found that these methods lack automation and machine intelligence and are not highly accurate. Hence, we implemented two machine learning approaches (random forest and a convolutional neural network) to improve the accuracy of segmentation as well as to provide automation. This comparative study discusses and analyses the advantages and disadvantages of the different algorithms. In the last step, the volume of the thyroid is computed using the segmentation results, and the performance analysis of all the algorithms is carried out by comparing the segmentation results with the ground truth.
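The abstract compares segmentations against ground truth without naming the metric; the Dice similarity coefficient is a common choice for such mask comparisons. A minimal sketch (a hypothetical helper, not taken from the paper):

```python
import numpy as np

def dice(seg, gt):
    """Dice similarity coefficient between two boolean masks.
    1.0 means perfect agreement, 0.0 means no overlap."""
    seg, gt = seg.astype(bool), gt.astype(bool)
    inter = np.logical_and(seg, gt).sum()
    denom = seg.sum() + gt.sum()
    # Two empty masks are considered a perfect match
    return 1.0 if denom == 0 else 2.0 * inter / denom
```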
APA, Harvard, Vancouver, ISO, and other styles
24

Chen, Yijiang, Andrew Janowczyk, and Anant Madabhushi. "Quantitative Assessment of the Effects of Compression on Deep Learning in Digital Pathology Image Analysis." JCO Clinical Cancer Informatics, no. 4 (September 2020): 221–33. http://dx.doi.org/10.1200/cci.19.00068.

Full text
Abstract:
PURPOSE Deep learning (DL), a class of approaches involving self-learned discriminative features, is increasingly being applied to digital pathology (DP) images for tasks such as disease identification and segmentation of tissue primitives (eg, nuclei, glands, lymphocytes). One application of DP is in telepathology, which involves digitally transmitting DP slides over the Internet for secondary diagnosis by an expert at a remote location. Unfortunately, the places benefiting most from telepathology often have poor Internet quality, resulting in prohibitive transmission times of DP images. Image compression may help, but the degree to which image compression affects performance of DL algorithms has been largely unexplored. METHODS We investigated the effects of image compression on the performance of DL strategies in the context of 3 representative use cases involving segmentation of nuclei (n = 137), segmentation of lymph node metastasis (n = 380), and lymphocyte detection (n = 100). For each use case, test images at various levels of compression (JPEG compression quality score ranging from 1-100 and JPEG2000 compression peak signal-to-noise ratio ranging from 18-100 dB) were evaluated by a DL classifier. Performance metrics including F1 score and area under the receiver operating characteristic curve were computed at the various compression levels. RESULTS Our results suggest that DP images can be compressed by 85% while still maintaining the performance of the DL algorithms at 95% of what is achievable without any compression. Interestingly, the maximum compression level sustainable by DL algorithms is similar to where pathologists also reported difficulties in providing accurate interpretations. CONCLUSION Our findings seem to suggest that in low-resource settings, DP images can be significantly compressed before transmission for DL-based telepathology applications.
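The study's headline finding (images can be compressed heavily while the classifier retains 95% of its uncompressed performance) amounts to scanning a quality-versus-score table for the strongest admissible compression. A toy sketch with made-up scores (the function name and numbers are mine):

```python
def max_compression_retaining(perf_by_quality, baseline, fraction=0.95):
    """Return the lowest JPEG quality setting (i.e. strongest compression)
    whose classifier score still reaches `fraction` of the uncompressed
    baseline, or None if no setting qualifies."""
    ok = [q for q, score in perf_by_quality.items()
          if score >= fraction * baseline]
    return min(ok) if ok else None
```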
APA, Harvard, Vancouver, ISO, and other styles
25

Mirzaalian Dastjerdi, Houman, Dominique Töpfer, Stefan J. Rupitsch, and Andreas Maier. "Measuring Surface Area of Skin Lesions with 2D and 3D Algorithms." International Journal of Biomedical Imaging 2019 (January 15, 2019): 1–9. http://dx.doi.org/10.1155/2019/4035148.

Full text
Abstract:
Purpose. The treatment of skin lesions of various kinds is a common task in clinical routine. Apart from wound care, the assessment of treatment efficacy plays an important role. In this paper, we present a new approach to measuring the skin lesion surface in two and three dimensions. Methods. For the 2D approach, a single photo containing a flexible paper ruler is taken. After semi-automatic segmentation of the lesion, evaluation is based on local scale estimation using the ruler. For the 3D approach, reconstruction is based on Structure from Motion. Roughly outlining the region of interest around the lesion is required for both methods. Results. The measurement evaluation was performed on 117 phantom images and five phantom videos for the 2D and 3D approaches, respectively. We found an absolute error of 0.99±1.18 cm2 and a relative error of 9.89±9.31% for 2D; these errors are <1 cm2 and <5% for the five test phantoms in our 3D case. As expected, the error of the 2D surface area measurement increased by approximately 10% for wounds on a bent surface compared to wounds on a flat surface. Using our method, the only user interaction is to roughly outline the region of interest around the lesion. Conclusions. We developed a new wound segmentation and surface area measurement technique for skin lesions, even on a bent surface. The 2D technique provides the user with a fast, user-friendly segmentation and measurement tool with reasonable accuracy for home-care assessment of treatment. For 3D, only preliminary results could be provided; measurements were based only on phantoms and have to be repeated with real clinical data.
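The 2D measurement ultimately reduces to converting a segmented pixel count into physical area using the scale recovered from the ruler. A simplified sketch using a single global scale (the paper estimates scale locally, which this deliberately omits):

```python
def lesion_area_cm2(mask_pixel_count, ruler_pixels, ruler_cm):
    """Convert a segmented pixel count to cm^2 using a ruler of known
    physical length visible in the same photo. Assumes one global
    scale, i.e. a roughly flat, fronto-parallel surface."""
    cm_per_px = ruler_cm / ruler_pixels
    return mask_pixel_count * cm_per_px ** 2
```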
APA, Harvard, Vancouver, ISO, and other styles
26

Aubry-Kientz, Mélaine, Raphaël Dutrieux, Antonio Ferraz, et al. "A Comparative Assessment of the Performance of Individual Tree Crowns Delineation Algorithms from ALS Data in Tropical Forests." Remote Sensing 11, no. 9 (2019): 1086. http://dx.doi.org/10.3390/rs11091086.

Full text
Abstract:
Tropical forest canopies are comprised of tree crowns of multiple species varying in shape and height, and ground inventories do not usually describe their structure reliably. Airborne laser scanning data can be used to characterize these individual crowns, but analytical tools developed for boreal or temperate forests may require adjustment before they can be applied to tropical environments. Therefore, we compared results from six different segmentation methods applied to six plots (39 ha) from a study site in French Guiana. We measured the overlap of automatically segmented crown projections with selected crowns manually delineated on high-resolution photography. We also evaluated the goodness of fit following automatic matching with field inventory data, using a model linking tree diameter to tree crown width. The methods tested in this benchmark segmented markedly different numbers of crowns with different characteristics. Segmentation methods based on the point cloud (AMS3D and Graph-Cut) globally outperformed methods based on Canopy Height Models, especially for small crowns; the AMS3D method outperformed the other methods tested in the overlap analysis, and AMS3D and Graph-Cut performed best in the automatic matching validation. Nevertheless, other methods based on the Canopy Height Model performed better for very large emergent crowns. The dense foliage of tropical moist forests prevents sufficient point densities in the understory to segment subcanopy trees accurately, regardless of the segmentation method.
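The automatic matching validation relies on "a model linking tree diameter to tree crown width". The abstract does not give the model's form, but a power-law allometry fitted by least squares in log-log space is a common choice and can be sketched as:

```python
import numpy as np

def fit_crown_allometry(dbh, crown_width):
    """Fit CW = a * DBH^b by ordinary least squares on
    log(CW) = log(a) + b * log(DBH). Returns (a, b).
    Assumes strictly positive measurements."""
    b, log_a = np.polyfit(np.log(dbh), np.log(crown_width), 1)
    return np.exp(log_a), b
```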
APA, Harvard, Vancouver, ISO, and other styles
27

Bieniecki, Wojciech, and Sebastian Stoliński. "Identification and Assessment of Selected Handwritten Function Graphs Using Least Square Approximation Combined with General Hough Transform." Image Processing & Communications 22, no. 4 (2017): 23–42. http://dx.doi.org/10.1515/ipc-2017-0019.

Full text
Abstract:
The paper provides a comparison of three variants of algorithms, based on image processing, for the automatic assessment of examination tasks that involve sketching a function graph. Three types of functions have been considered: linear, quadratic, and trigonometric. The assumption adopted in the design of the algorithms is to mimic the way the examiner assesses the solutions and to achieve evaluation quality close to that obtained in manual evaluation. In particular, an algorithm should not reject a partly correct solution, and it should extract the correct solution from other lines, deletions, and corrections made by a student. Essential subproblems in our scheme concern image segmentation, object identification, and automatic understanding. We consider several techniques based on the Hough Transform, least-squares fitting, and nearest-neighbor classification. The most reliable solution is an algorithm combining least-squares fitting and the Hough Transform.
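Least-squares fitting with residual-based scoring, one of the building blocks described, can be sketched as follows (the function name and the residual measure are my choices, not the authors'):

```python
import numpy as np

def score_sketch(points, degree):
    """Fit a polynomial of the given degree to extracted stroke points
    (N x 2 array of x, y) and return (coeffs, rms_residual). A low
    residual suggests the sketch matches the expected function family."""
    x, y = points[:, 0], points[:, 1]
    coeffs = np.polyfit(x, y, degree)
    rms = np.sqrt(np.mean((np.polyval(coeffs, x) - y) ** 2))
    return coeffs, rms
```

Comparing the residual of a linear fit against that of a quadratic fit, for example, gives a simple discriminator between the function types considered in the paper.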
APA, Harvard, Vancouver, ISO, and other styles
28

Tomasi, Giampaolo, Tony Shepherd, Federico Turkheimer, Dimitris Visvikis, and Eric Aboagye. "Comparative assessment of segmentation algorithms for tumor delineation on a test-retest [11C]choline dataset." Medical Physics 39, no. 12 (2012): 7571–79. http://dx.doi.org/10.1118/1.4761952.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Eramian, Mark, Christopher Power, Stephen Rau, and Pulkit Khandelwal. "Benchmarking Human Performance in Semi-Automated Image Segmentation." Interacting with Computers 32, no. 3 (2020): 233–45. http://dx.doi.org/10.1093/iwcomp/iwaa017.

Full text
Abstract:
Semi-automated segmentation algorithms hold promise for improving the extraction and identification of objects in images, such as tumors in medical images of human tissue, counting plants or flowers for crop yield prediction, or other tasks where object numbers and appearance vary from image to image. By blending markup from human annotators with algorithmic classifiers, the accuracy and reproducibility of image segmentation can be raised to very high levels. At least, that is the promise of this approach, but the reality is less than clear. In this paper, we review the state of the art in semi-automated image segmentation performance assessment and demonstrate it to be lacking the level of experimental rigour needed to ensure that claims about algorithm accuracy and reproducibility can be considered valid. We follow this review with two experiments that vary the type of markup that annotators make on images, either points or strokes, in tightly controlled experimental conditions, in order to investigate the effect that this one source of variation has on the accuracy of these types of systems. In both experiments, we found that accuracy substantially increases when participants use a stroke-based interaction. In light of these results, the validity of claims about algorithm performance is brought into sharp focus, and we reflect on the need for far more control of variables when benchmarking the impact of annotators and their context on these types of systems.
APA, Harvard, Vancouver, ISO, and other styles
30

Khawaja, Ahsan, Tariq M. Khan, Mohammad A. U. Khan, and Syed Junaid Nawaz. "A Multi-Scale Directional Line Detector for Retinal Vessel Segmentation." Sensors 19, no. 22 (2019): 4949. http://dx.doi.org/10.3390/s19224949.

Full text
Abstract:
The assessment of transformations in the retinal vascular structure has strong potential for indicating a wide range of underlying ocular pathologies. Correctly identifying the retinal vessel map is a crucial step in disease identification, severity progression assessment, and appropriate treatment. Marking the vessels manually by a human expert is a tedious and time-consuming task, thereby reinforcing the need for automated algorithms capable of quick segmentation of retinal features and any possible anomalies. Techniques based on unsupervised learning methods utilize vessel morphology to classify vessel pixels. This study proposes a directional multi-scale line detector technique for the segmentation of retinal vessels, with the prime focus on the tiny vessels that are most difficult to segment. Constructing a directional line detector and using it on images having only the features oriented along the detector's direction significantly improves the detection accuracy of the algorithm. The finishing step involves a binarization operation which, being again directional in nature, helps achieve further performance improvements in terms of key performance indicators. The proposed method obtains a sensitivity of 0.8043, 0.8011, and 0.7974 on the Digital Retinal Images for Vessel Extraction (DRIVE), STructured Analysis of the Retina (STARE), and Child Heart And health Study in England (CHASE_DB1) datasets, respectively. These results, along with other performance enhancements demonstrated by the conducted experimental evaluation, establish the validity and applicability of directional multi-scale line detectors as a competitive framework for retinal image segmentation.
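The core of a line detector of this kind is the difference between the mean intensity along an oriented line segment and the mean of the surrounding window. A single-scale, two-orientation sketch (the method described above sweeps multiple scales and orientations; this is only the basic building block):

```python
import numpy as np

def line_response(img, r, c, length=5):
    """Basic line-detector response at pixel (r, c): mean intensity along
    a line of `length` pixels minus the mean of the length x length
    window, evaluated at 0 and 90 degrees. Assumes (r, c) lies at least
    length // 2 pixels from the image border."""
    k = length // 2
    win = img[r - k:r + k + 1, c - k:c + k + 1]
    horiz = img[r, c - k:c + k + 1].mean() - win.mean()
    vert = img[r - k:r + k + 1, c].mean() - win.mean()
    return horiz, vert
```

A bright horizontal vessel produces a strong response in the horizontal orientation and a weak one in the vertical orientation, which is exactly what makes the detector directional.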
APA, Harvard, Vancouver, ISO, and other styles
31

Lu, Tao, Jiaming Wang, Huabing Zhou, Junjun Jiang, Jiayi Ma, and Zhongyuan Wang. "Rectangular-Normalized Superpixel Entropy Index for Image Quality Assessment." Entropy 20, no. 12 (2018): 947. http://dx.doi.org/10.3390/e20120947.

Full text
Abstract:
Image quality assessment (IQA) is a fundamental problem in image processing that aims to measure the objective quality of a distorted image. Traditional full-reference (FR) IQA methods use fixed-size sliding windows to obtain structure information but ignore variable spatial configuration information. In order to better measure multi-scale objects, we propose a novel IQA method, named RSEI, based on the perspective of the variable receptive field and information entropy. First, we find that a consistent relationship exists between information fidelity and individual human visual perception. Thus, we mimic the human visual system (HVS) by semantically dividing the image into multiple patches via rectangular-normalized superpixel segmentation. Then the weights of the image patches are adaptively calculated from their information volume. We verify the effectiveness of RSEI by applying it to data from the TID2008 database and to denoising algorithms. Experiments show that RSEI outperforms some state-of-the-art IQA algorithms, including visual information fidelity (VIF) and weighted average deep image quality measure (WaDIQaM).
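Weighting patches "via their information volume" suggests a per-patch histogram-entropy computation. An illustrative sketch (my own simplification of the idea, not the RSEI implementation):

```python
import numpy as np

def patch_entropy(patch, bins=256):
    """Shannon entropy (in bits) of a patch's intensity histogram.
    Flat patches carry no information (entropy 0); textured patches
    receive higher weight."""
    hist, _ = np.histogram(patch, bins=bins, range=(0, bins))
    p = hist[hist > 0] / hist.sum()     # drop empty bins before log
    return -np.sum(p * np.log2(p))
```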
APA, Harvard, Vancouver, ISO, and other styles
32

Sharif, Mhd Saeed, Maysam Abbod, Abbes Amira, and Habib Zaidi. "Artificial Neural Network-Based System for PET Volume Segmentation." International Journal of Biomedical Imaging 2010 (2010): 1–11. http://dx.doi.org/10.1155/2010/105610.

Full text
Abstract:
Tumour detection, classification, and quantification in positron emission tomography (PET) imaging at an early stage of disease are important issues for clinical diagnosis, assessment of response to treatment, and radiotherapy planning. Many techniques have been proposed for segmenting medical imaging data; however, some of the approaches have poor performance or large inaccuracy, and require substantial computation time for analysing large medical volumes. Artificial intelligence (AI) approaches can provide improved accuracy and save a considerable amount of time. Artificial neural networks (ANNs), among the most capable AI techniques, are able to precisely classify and quantify lesions and to model the clinical evaluation for a specific problem. This paper presents a novel application of ANNs in the wavelet domain for PET volume segmentation. An evaluation of ANN performance using different training algorithms, in both the spatial and wavelet domains and with different numbers of neurons in the hidden layer, is also presented. The best number of neurons in the hidden layer is determined from the experimental results, which also identified the Levenberg-Marquardt backpropagation training algorithm as the best training approach for the proposed application. The results of the proposed intelligent system are compared with those obtained using conventional techniques, including thresholding and clustering based approaches. Experimental and Monte Carlo simulated PET phantom data sets and clinical PET volumes of non-small cell lung cancer patients were utilised to validate the proposed algorithm, which has demonstrated promising results.
APA, Harvard, Vancouver, ISO, and other styles
33

Fabijańska, Anna, Tomasz Węgliński, Krzysztof Zakrzewski, and Emilia Nowosławska. "Assessment of hydrocephalus in children based on digital image processing and analysis." International Journal of Applied Mathematics and Computer Science 24, no. 2 (2014): 299–312. http://dx.doi.org/10.2478/amcs-2014-0022.

Full text
Abstract:
Hydrocephalus is a pathological condition of the central nervous system which often affects neonates and young children. It manifests itself as an abnormal accumulation of cerebrospinal fluid within the ventricular system of the brain, with subsequent progression. One of the most important diagnostic methods for identifying hydrocephalus is Computed Tomography (CT): the enlarged ventricular system is clearly visible on CT scans. However, the assessment of disease progress usually relies on the radiologist's judgment and manual measurements, which are subjective, cumbersome, and of limited accuracy. Therefore, this paper addresses the problem of semi-automatic assessment of hydrocephalus using image processing and analysis algorithms. In particular, automated determination of popular indices of disease progress is considered. Algorithms for the detection, semi-automatic segmentation, and numerical description of the lesion are proposed. Specifically, the disease progress is determined using shape analysis algorithms. Numerical results provided by the introduced methods are presented and compared with those calculated manually by a radiologist and a trained operator. The comparison proves the correctness of the introduced approach.
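The abstract does not name which "popular indices of disease progress" are automated; Evans' index, the ratio of maximal frontal horn width to maximal internal skull diameter on the same CT slice, is one widely used example of such a ratio-based index:

```python
def evans_index(frontal_horn_width_mm, max_inner_skull_diameter_mm):
    """Evans' index: maximal frontal horn width divided by the maximal
    internal skull diameter. Values above roughly 0.3 are conventionally
    taken to suggest ventricular enlargement."""
    return frontal_horn_width_mm / max_inner_skull_diameter_mm
```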
APA, Harvard, Vancouver, ISO, and other styles
34

Kurrant, Douglas, Muhammad Omer, Nasim Abdollahi, Pedram Mojabi, Elise Fear, and Joe LoVetri. "Evaluating Performance of Microwave Image Reconstruction Algorithms: Extracting Tissue Types with Segmentation Using Machine Learning." Journal of Imaging 7, no. 1 (2021): 5. http://dx.doi.org/10.3390/jimaging7010005.

Full text
Abstract:
Evaluating the quality of reconstructed images requires consistent approaches to extracting information and applying metrics. Partitioning medical images into tissue types permits the quantitative assessment of regions that contain a specific tissue. The assessment facilitates the evaluation of an imaging algorithm in terms of its ability to reconstruct the properties of various tissue types and identify anomalies. Microwave tomography is an imaging modality that is model-based and reconstructs an approximation of the actual internal spatial distribution of the dielectric properties of a breast over a reconstruction model consisting of discrete elements. The breast tissue types are characterized by their dielectric properties, so the complex permittivity profile that is reconstructed may be used to distinguish different tissue types. This manuscript presents a robust and flexible medical image segmentation technique to partition microwave breast images into tissue types in order to facilitate the evaluation of image quality. The approach combines an unsupervised machine learning method with statistical techniques. The key advantage for using the algorithm over other approaches, such as a threshold-based segmentation method, is that it supports this quantitative analysis without prior assumptions such as knowledge of the expected dielectric property values that characterize each tissue type. Moreover, it can be used for scenarios where there is a scarcity of data available for supervised learning. Microwave images are formed by solving an inverse scattering problem that is severely ill-posed, which has a significant impact on image quality. A number of strategies have been developed to alleviate the ill-posedness of the inverse scattering problem. The degree of success of each strategy varies, leading to reconstructions that have a wide range of image quality. 
A requirement for the segmentation technique is the ability to partition tissue types over a range of image qualities, which is demonstrated in the first part of the paper. The segmentation of images into regions of interest corresponding to various tissue types leads to the decomposition of the breast interior into disjoint tissue masks. An array of region- and distance-based metrics are applied to compare masks extracted from reconstructed images and ground truth models. The quantitative results reveal the accuracy with which the geometric and dielectric properties are reconstructed. The incorporation of the segmentation into a framework that effectively furnishes the quantitative assessment of regions containing a specific tissue is also demonstrated. The algorithm is applied to reconstructed microwave images derived from breasts with various densities and tissue distributions to demonstrate the flexibility of the algorithm and that it is not data-specific. The potential for using the algorithm to assist in diagnosis is exhibited with a tumor tracking example. This example also establishes the usefulness of the approach in evaluating the performance of the reconstruction algorithm in terms of its sensitivity and specificity to malignant tissue and its ability to accurately reconstruct malignant tissue.
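The unsupervised machine learning method is not specified in this abstract; a 1-D k-means over reconstructed property values illustrates, under that assumption, how tissue classes can be formed without assuming the expected dielectric value of each class:

```python
import numpy as np

def kmeans_1d(values, k, iters=50, seed=0):
    """Minimal 1-D k-means: partition scalar property values (e.g.
    relative permittivity per reconstruction element) into k classes.
    Centers start from randomly chosen data points, so no prior
    knowledge of expected class values is needed."""
    rng = np.random.default_rng(seed)
    centers = rng.choice(values, size=k, replace=False).astype(float)
    for _ in range(iters):
        # Assign every value to its nearest center, then recompute means
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = values[labels == j].mean()
    return labels, centers
```

The resulting label array directly yields the disjoint tissue masks that the evaluation framework compares against ground truth.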
APA, Harvard, Vancouver, ISO, and other styles
36

Hsieh, Chia-Yeh, Hsiang-Yun Huang, Kai-Chun Liu, Kun-Hui Chen, Steen Jun-Ping Hsu, and Chia-Tai Chan. "Subtask Segmentation of Timed Up and Go Test for Mobility Assessment of Perioperative Total Knee Arthroplasty." Sensors 20, no. 21 (2020): 6302. http://dx.doi.org/10.3390/s20216302.

Full text
Abstract:
Total knee arthroplasty (TKA) is one of the most common treatments for people with severe knee osteoarthritis (OA). The accuracy of outcome measurements and quantitative assessments for perioperative TKA is an important issue in clinical practice. Timed up and go (TUG) tests have been validated to measure basic mobility and balance capabilities. A TUG test contains a series of subtasks, including sit-to-stand, walking-out, turning, walking-in, turning around, and stand-to-sit tasks. Detailed information about subtasks is essential to aid clinical professionals and physiotherapists in making assessment decisions. The main objective of this study is to design and develop a subtask segmentation approach using machine-learning models and knowledge-based postprocessing during the TUG test for perioperative TKA. The experiment recruited 26 patients with severe knee OA (11 patients with bilateral TKA planned and 15 patients with unilateral TKA planned). A series of signal-processing mechanisms and pattern recognition approaches involving machine learning-based multi-classifiers, fragmentation modification and subtask inference are designed and developed to tackle technical challenges in typical classification algorithms, including motion variability, fragmentation and ambiguity. The experimental results reveal that the accuracy of the proposed subtask segmentation approach using the AdaBoost technique with a window size of 128 samples is 92%, which is an improvement of at least 15% compared to that of the typical subtask segmentation approach using machine-learning models only.
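The "fragmentation modification" step can be illustrated by relabeling runs of per-window predictions shorter than a minimum duration (a sketch of the general idea, not the authors' exact rule):

```python
def merge_short_fragments(labels, min_len):
    """Knowledge-based postprocessing sketch: any run of identical window
    predictions shorter than min_len is absorbed into the preceding run,
    suppressing spurious fragmentation in the subtask sequence."""
    # Run-length encode the label sequence
    runs = []
    for lab in labels:
        if runs and runs[-1][0] == lab:
            runs[-1][1] += 1
        else:
            runs.append([lab, 1])
    # Expand runs back out, replacing short fragments
    out = []
    for lab, n in runs:
        keep = lab if n >= min_len or not out else out[-1]
        out.extend([keep] * n)
    return out
```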
APA, Harvard, Vancouver, ISO, and other styles
37

Suri, Jasjit, Yujun Guo, Cara Coad, Tim Danielson, Idris Elbakri, and Roman Janer. "Image Quality Assessment via Segmentation of Breast Lesion in X-ray and Ultrasound Phantom Images from Fischer's Full Field Digital Mammography and Ultrasound (FFDMUS) System." Technology in Cancer Research & Treatment 4, no. 1 (2005): 83–92. http://dx.doi.org/10.1177/153303460500400111.

Full text
Abstract:
Fischer has been developing a fused full-field digital mammography and ultrasound (FFDMUS) system funded by the National Institutes of Health (NIH). In FFDMUS, two sets of acquisitions are performed: 2-D X-ray and 3-D ultrasound. The segmentation of acquired lesions in phantom images is important: (i) to assess the image quality of X-ray and ultrasound images; (ii) to register multi-modality images; and (iii) to establish an automatic lesion detection methodology to assist the radiologist. In this paper we developed lesion segmentation strategies for ultrasound and X-ray images acquired using FFDMUS. For ultrasound lesion segmentation, a signal-to-noise ratio (SNR)-based method was adapted. For X-ray segmentation, we used a gradient vector flow (GVF)-based deformable model. The performance of these segmentation algorithms was evaluated. We also performed partial volume correction (PVC) analysis on the segmentation of ultrasound images. For X-ray lesion segmentation, we also studied the effect of PDE smoothing on GVF's ability to segment the lesion. We conclude that ultrasound image qualities from FFDMUS and hand-held ultrasound (HHUS) are comparable. The mean percentage error with PVC was 4.56% (4.31%) and 6.63% (5.89%) for the 5 mm and 3 mm lesions, respectively. Segmentation of the X-ray images with PDE smoothing yielded an average error of 9.61%. We also tested our program on synthetic datasets. The system was developed for a Linux workstation using C/C++.
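The SNR-based ultrasound segmentation can be pictured as thresholding pixels against background statistics. A sketch of that idea only (the constant k and the function name are illustrative, not from the paper):

```python
import numpy as np

def snr_lesion_mask(image, background, k=3.0):
    """SNR-style segmentation sketch: flag pixels that deviate from the
    background mean by more than k background standard deviations."""
    mu = float(np.mean(background))
    sigma = float(np.std(background))
    return np.abs(np.asarray(image, dtype=float) - mu) > k * sigma
```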
APA, Harvard, Vancouver, ISO, and other styles
38

Karegowda, Asha Gowda, D. Poornima, N. Sindhu, and P. T. Bharathi. "Performance Assessment of k-Means, FCM, ARKFCM and PSO Segmentation Algorithms for MR Brain Tumour Images." International Journal of Data Mining And Emerging Technologies 8, no. 1 (2018): 18. http://dx.doi.org/10.5958/2249-3220.2018.00003.4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Yu, Yang, Dong-Hoon Lee, Shin-Lei Peng, et al. "Assessment of Glioma Response to Radiotherapy Using Multiple MRI Biomarkers with Manual and Semiautomated Segmentation Algorithms." Journal of Neuroimaging 26, no. 6 (2016): 626–34. http://dx.doi.org/10.1111/jon.12354.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Ohura, Norihiko, Ryota Mitsuno, Masanobu Sakisaka, et al. "Convolutional neural networks for wound detection: the role of artificial intelligence in wound care." Journal of Wound Care 28, Sup10 (2019): S13—S24. http://dx.doi.org/10.12968/jowc.2019.28.sup10.s13.

Full text
Abstract:
Objective: Telemedicine is an essential support system for clinical settings outside the hospital. Recently, the importance of the model for assessment of telemedicine (MAST) has been emphasised. The development of an eHealth-supported wound assessment system using artificial intelligence is awaited. This study explored whether wound segmentation of a diabetic foot ulcer (DFU) and a venous leg ulcer (VLU) by a convolutional neural network (CNN) was possible after training on sacral pressure ulcer (PU) data sets, and which CNN architecture was superior at segmentation. Methods: CNNs with different algorithms and architectures were prepared. The four architectures were SegNet, LinkNet, U-Net and U-Net with the VGG16 Encoder Pre-Trained on ImageNet (Unet_VGG16). Each CNN learned from the supervised PU data. Results: Among the four architectures, the best results were obtained with U-Net. U-Net demonstrated the second-highest accuracy in terms of the area under the curve (0.997) and a high specificity (0.943) and sensitivity (0.993), with the highest values obtained with Unet_VGG16. U-Net was also considered to be the most practical architecture and superior to the others in that its segmentation speed was faster than that of Unet_VGG16. Conclusion: The U-Net CNN constructed using appropriately supervised data was capable of segmentation with high accuracy. These findings suggest that eHealth wound assessment using CNNs will be of practical use in the future.
APA, Harvard, Vancouver, ISO, and other styles
41

Amisse, C., M. E. Jijon-Palma, and J. A. S. Centeno. "MAPPING EXTENSION AND MAGNITUDE OF CHANGES INDUCED BY CYCLONE IDAI WITH MULTI-TEMPORAL LANDSAT AND SAR IMAGES." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-3/W12-2020 (November 6, 2020): 273–77. http://dx.doi.org/10.5194/isprs-archives-xlii-3-w12-2020-273-2020.

Full text
Abstract:
Abstract. This paper describes a case study of rapid change-detection assessment of post-cyclone Idai vegetation damage and flood-extent estimation by fusing multi-temporal Landsat and Sentinel-1 SAR images. Many algorithms have been proposed for automated change detection after disasters. To visualize the changes induced by the cyclone, we tested and compared automated change detection techniques, namely Principal Components Analysis (PCA), the Normalized Difference Vegetation Index (NDVI) and image segmentation. With the image segmentation of multispectral and SAR images, it was possible to visualize the extension of the wet area. For this specific application, PCA was identified as a better change detection indicator than NDVI. This study suggests that image segmentation, principal components analysis, and the normalized difference vegetation index can be used for change detection of surface water due to floods and disasters, especially in prone countries like Mozambique.
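NDVI-based change detection of the sort compared above reduces to differencing the index between acquisition dates. A minimal sketch (the 0.2 drop threshold is an assumption for illustration, not a value from the paper):

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized Difference Vegetation Index, per pixel."""
    nir, red = np.asarray(nir, dtype=float), np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)

def vegetation_loss(nir_pre, red_pre, nir_post, red_post, drop=0.2):
    """Flag pixels whose NDVI fell by more than `drop` between dates,
    a simple proxy for cyclone-damaged vegetation."""
    return (ndvi(nir_post, red_post) - ndvi(nir_pre, red_pre)) < -drop
```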
APA, Harvard, Vancouver, ISO, and other styles
42

Kleczek, Pawel, Grzegorz Dyduch, Agnieszka Graczyk-Jarzynka, and Joanna Jaworek-Korjakowska. "A New Approach to Border Irregularity Assessment with Application in Skin Pathology." Applied Sciences 9, no. 10 (2019): 2022. http://dx.doi.org/10.3390/app9102022.

Full text
Abstract:
The border irregularity assessment of tissue structures is an important step in medical diagnostics (e.g., in dermatoscopy, pathology, and cardiology). The diagnostic criteria based on the degree of uniformity and symmetry of border irregularities are particularly vital in dermatopathology, to distinguish between benign and malignant skin lesions. We propose a new method for the segmentation of individual border projections and measuring their morphometry. It is based mainly on analyzing the curvature of the object’s border to identify endpoints of projection bases, and on analyzing the object’s skeleton in the graph representation to identify bases of projections and their location along the object’s main axis. The proposed segmentation method has been tested on 25 skin whole slide images of common melanocytic lesions. In total, 825 out of 992 (83%) manually segmented retes (projections of epidermis) were detected correctly and the Jaccard similarity coefficient for the task of detecting retes was 0.798. Experimental results verified the effectiveness of the proposed approach. Our method is particularly well suited for assessing the border irregularity of human epidermis and thus could help develop computer-aided diagnostic algorithms for skin cancer detection.
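The curvature analysis used to locate projection-base endpoints can be approximated on a polygonal border by vertex turning angles. A sketch under that simplification (the paper's actual curvature estimator may differ):

```python
import numpy as np

def turning_angles(contour):
    """Discrete curvature proxy: the exterior turning angle at each vertex
    of a closed polygonal border. Large-magnitude angles mark candidate
    endpoints of projection bases."""
    pts = np.asarray(contour, dtype=float)
    v_in = pts - np.roll(pts, 1, axis=0)    # incoming edge at each vertex
    v_out = np.roll(pts, -1, axis=0) - pts  # outgoing edge
    ang = np.arctan2(v_out[:, 1], v_out[:, 0]) - np.arctan2(v_in[:, 1], v_in[:, 0])
    return (ang + np.pi) % (2.0 * np.pi) - np.pi  # wrap to [-pi, pi)
```

For a simple closed contour traversed once, the turning angles sum to 2π, which is a handy sanity check on the extracted border.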
APA, Harvard, Vancouver, ISO, and other styles
43

de la Rosa, Ezequiel, Désiré Sidibé, Thomas Decourselle, Thibault Leclercq, Alexandre Cochet, and Alain Lalande. "Myocardial Infarction Quantification from Late Gadolinium Enhancement MRI Using Top-Hat Transforms and Neural Networks." Algorithms 14, no. 8 (2021): 249. http://dx.doi.org/10.3390/a14080249.

Full text
Abstract:
Late gadolinium enhancement (LGE) MRI is the gold standard technique for myocardial viability assessment. Although the technique accurately reflects the damaged tissue, there is no clinical standard to quantify myocardial infarction (MI). Moreover, the commercial software used in clinical practice is mostly semi-automatic, and hence requires direct intervention of experts. In this work, a new automatic method for MI quantification from LGE-MRI is proposed. Our novel segmentation approach is devised for accurately detecting not only hyper-enhanced lesions, but also microvascular obstruction areas. Moreover, it includes a myocardial disease detection step which extends the algorithm for working under healthy scans. The method is based on a cascade approach where firstly, diseased slices are identified by a convolutional neural network (CNN). Secondly, by means of morphological operations a fast coarse scar segmentation is obtained. Thirdly, the segmentation is refined by a boundary-voxel reclassification strategy using an ensemble of very light CNNs. We tested the method on a LGE-MRI database with healthy (n = 20) and diseased (n = 80) cases following a 5-fold cross-validation scheme. Our approach segmented myocardial scars with an average Dice coefficient of 77.22 ± 14.3% and with a volumetric error of 1.0 ± 6.9 cm3. In a comparison against nine reference algorithms, the proposed method achieved the highest agreement in volumetric scar quantification with the expert delineations (p < 0.001 when compared to the other approaches). Moreover, it was able to reproduce the scar segmentation intra- and inter-rater variability. Our approach was shown to be a good first attempt towards automatic and accurate myocardial scar segmentation, although validation over larger LGE-MRI databases is needed.
APA, Harvard, Vancouver, ISO, and other styles
44

Li, Baoxian, Kelvin C. P. Wang, Allen Zhang, Yue Fei, and Giuseppe Sollazzo. "Automatic Segmentation and Enhancement of Pavement Cracks Based on 3D Pavement Images." Journal of Advanced Transportation 2019 (February 18, 2019): 1–9. http://dx.doi.org/10.1155/2019/1813763.

Full text
Abstract:
Pavement cracking is a significant symptom of pavement deterioration and deficiency. Conventional manual inspections of road condition are gradually replaced by novel automated inspection systems. As a result, a great amount of pavement surface information is digitized by these systems with a high resolution. With pavement surface data, pavement cracks can be detected using crack detection algorithms. In this paper, a fully automated algorithm for segmenting and enhancing pavement cracks is proposed, which consists of four major procedures. First, a preprocessing procedure is employed to remove spurious noise and rectify the original 3D pavement data. Second, crack saliency maps are segmented from 3D pavement data using a steerable matched filter bank. Third, 2D tensor voting is applied to crack saliency maps to achieve better curve continuity of crack structure and higher accuracy. Finally, postprocessing procedures are used to remove redundant noise. The proposed procedures were evaluated over 200 asphalt pavement images with diverse cracks. The experimental results demonstrated that the proposed method showed a high performance, achieving an average precision of 88.38%, recall of 93.15%, and F-measure of 90.68%. Accordingly, the proposed approach can be helpful in automated pavement condition assessment.
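The precision, recall and F-measure reported above are standard pixel-level scores; a minimal sketch of how they are computed from a predicted crack map and ground truth:

```python
import numpy as np

def crack_metrics(pred, truth):
    """Pixel-level precision, recall and F-measure for a binary crack map
    against ground truth."""
    pred, truth = np.asarray(pred, dtype=bool), np.asarray(truth, dtype=bool)
    tp = np.logical_and(pred, truth).sum()
    precision = tp / pred.sum() if pred.sum() else 0.0
    recall = tp / truth.sum() if truth.sum() else 0.0
    f = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return float(precision), float(recall), float(f)
```

In practice, crack evaluations often tolerate a few pixels of boundary slack before counting a prediction as a false positive; the strict pixel matching here is the simplest variant.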
APA, Harvard, Vancouver, ISO, and other styles
45

Fathima, S. M. Nazia, R. Tamil Selvi, and M. Parisa Beham. "Assessment of BMD and Statistical Analysis for Osteoporosis Detection." Biomedical and Pharmacology Journal 12, no. 04 (2019): 1907–14. http://dx.doi.org/10.13005/bpj/1822.

Full text
Abstract:
Biomedical engineering is one of the promising disciplines in engineering that deals with technology advancement in human health. Osteoporosis is a common metabolic disease characterized by decreased bone mass and increased susceptibility to fractures. Bone densitometry is a broad term comprising the art and science of measuring the bone mineral content (BMC) and bone mineral density (BMD) of particular skeletal sites or the whole body. There are various methods to measure bone mineral density, which differ based on the differential absorption of ionizing radiation or of sound waves. The methods are SPA (Single Photon Absorptiometry), DPA (Dual Photon Absorptiometry), SEXA (Single Energy X-ray Absorptiometry), DEXA (Dual Energy X-ray Absorptiometry), QCT (Quantitative Computed Tomography), QUS (Quantitative Ultrasound) and RA (Radiographic Absorptiometry). The DEXA test can measure the whole body but usually targets the lower spine and hips. A major disadvantage of DEXA is that there is currently a lack of standardization in bone and soft tissue measurements. Furthermore, for a given manufacturer, results may vary by the model of the instrument, the mode of operation or the version of the software used to analyze the data. In addition, DEXA scan images serve only to confirm correct positioning of the patient and correct placement of the regions of interest (ROI). Motivated by the above issues, this paper paves a way for analysis in the measurement of BMD, T-score, and Z-score from DEXA scan images. The proposed methodology includes segmentation algorithms such as k-means clustering and mean-shift, together with a comparison of their accuracy. In addition, a novel mathematical analysis is proposed to measure the T-score values in DEXA images with a new parameter ‘S’ derived from BMD values, in order to detect the osteoporosis condition accurately.
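The T-score and Z-score discussed above have standard definitions (the paper's additional 'S' parameter is not reproduced here); a minimal sketch with the usual WHO cut-offs:

```python
def t_score(bmd, young_adult_mean, young_adult_sd):
    """WHO T-score: standard deviations of the patient's BMD from the
    young-adult reference mean."""
    return (bmd - young_adult_mean) / young_adult_sd

def z_score(bmd, age_matched_mean, age_matched_sd):
    """Z-score: the same idea, against an age-matched reference population."""
    return (bmd - age_matched_mean) / age_matched_sd

def who_category(t):
    """Standard WHO interpretation of the T-score."""
    if t >= -1.0:
        return "normal"
    if t > -2.5:
        return "osteopenia"
    return "osteoporosis"
```

The reference means and standard deviations come from published population tables and differ by skeletal site and densitometer model, which is exactly the standardization problem the abstract points out.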
APA, Harvard, Vancouver, ISO, and other styles
46

Krauz, Lukáš, Petr Janout, Martin Blažek, and Petr Páta. "Assessing Cloud Segmentation in the Chromacity Diagram of All-Sky Images." Remote Sensing 12, no. 11 (2020): 1902. http://dx.doi.org/10.3390/rs12111902.

Full text
Abstract:
All-sky imaging systems are currently very popular. They are used in ground-based meteorological stations and as a crucial part of the weather monitors for autonomous robotic telescopes. Data from all-sky imaging cameras provide important information for controlling meteorological stations and telescopes, and they have specific characteristics different from widely-used imaging systems. A particularly promising and useful application of all-sky cameras is for remote sensing of cloud cover. Post-processing of the image data obtained from all-sky imaging cameras for automatic cloud detection and for cloud classification is a very demanding task. Accurate and rapid cloud detection can provide a good way to forecast weather events such as torrential rainfalls. However, the algorithms that are used must be specifically calibrated on data from the all-sky camera in order to set up an automatic cloud detection system. This paper presents an assessment of a modified k-means++ color-based segmentation algorithm specifically adjusted to the WILLIAM (WIde-field aLL-sky Image Analyzing Monitoring system) ground-based remote all-sky imaging system for cloud detection. The segmentation method is assessed in two different color-spaces (L*a*b and XYZ). Moreover, the proposed algorithm is tested on our public WMD database (WILLIAM Meteo Database) of annotated all-sky image data, which was created specifically for testing purposes. The WMD database is available for public use. In this paper, we present a comparison of selected color-spaces and assess their suitability for the cloud color segmentation based on all-sky images. In addition, we investigate the distribution of the segmented cloud phenomena present on the all-sky images based on the color-spaces channels. In the last part of this work, we propose and discuss the possible exploitation of the color-based k-means++ segmentation method as a preprocessing step towards cloud classification in all-sky images.
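Color-based k-means segmentation of the kind assessed above can be sketched directly on per-pixel color vectors. This version uses plain random seeding rather than the paper's k-means++ initialization, to keep the sketch short; the resulting cluster labels act as the cloud/sky masks:

```python
import numpy as np

def kmeans(pixels, k=2, iters=20, seed=0):
    """Plain k-means on per-pixel color vectors (e.g. L*a*b or XYZ).
    Random seeding stands in for k-means++ here."""
    pixels = np.asarray(pixels, dtype=float)
    rng = np.random.default_rng(seed)
    centers = pixels[rng.choice(len(pixels), size=k, replace=False)]
    for _ in range(iters):
        dists = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pixels[labels == j].mean(axis=0)
    return labels, centers
```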
APA, Harvard, Vancouver, ISO, and other styles
47

Jamaspishvili, Tamara, Stephanie Harmon, Palak Patel, et al. "Deep learning-based approach for automated assessment of PTEN status." Journal of Clinical Oncology 38, no. 6_suppl (2020): 294. http://dx.doi.org/10.1200/jco.2020.38.6_suppl.294.

Full text
Abstract:
294 Background: PTEN loss is associated with adverse outcomes in prostate cancer and has the potential to be clinically implemented as a prognostic biomarker. Deep learning algorithms applied to digital pathology can provide automated and objective assessment of biomarkers. The objective of this work was to develop an artificial intelligence (AI) system for automated detection and localization of PTEN loss in prostate cancer samples. Methods: Immunohistochemistry (IHC) was used to measure PTEN protein levels on prostate tissue microarrays (TMA) from two institutions (in-house n=272 and external n=125 patients). TMA cores were visually scored for PTEN loss by pathologists and, if present, spatially annotated. The in-house cohort (N=1239 cores) was divided into 70/20/10 training/validation/testing sets. Two algorithms were developed: a) Class I, core-based, to label each core for biomarker status; and b) Class II, pixel-based, to spatially distinguish areas of PTEN loss within each core. The ResNet101 architecture was used to train a multi-resolution ensemble of classifiers at 5x, 10x, and 20x for the Class I task and a single classifier at simulated 40x for Class II segmentation. Results: For the Class I algorithm, the accuracy of PTEN status was 88.3% and 93.4% in the validation and testing cohorts, respectively (Table). The AI-based probability of PTEN loss was higher in cores with complete loss than in those with partial loss. Accuracy improved to 90.7% in the validation and 93.5% in the test cohorts using the Class II region-based algorithm, with median Dice scores of 0.833 and 0.831, respectively. Direct application to the external set demonstrated a high false positive rate. Loading the trained model and conservatively re-training (“fine-tuning”) on 48/320 external cohort cores improved accuracy to 93.4%.
Conclusions: The results demonstrate the feasibility and robustness of fully automated detection and localization of PTEN loss in prostate cancer tissue samples, and the possibility of time- and cost-effective sample processing and scoring in research and clinical laboratories. [Table: see text]
APA, Harvard, Vancouver, ISO, and other styles
48

YANG, XIN, WANJI HE, KAITONG LI, et al. "A REVIEW ON ARTERY WALL SEGMENTATION TECHNIQUES AND INTIMA-MEDIA THICKNESS MEASUREMENT FOR CAROTID ULTRASOUND IMAGES." Journal of Innovative Optical Health Sciences 05, no. 01 (2012): 1230001. http://dx.doi.org/10.1142/s1793545812300017.

Full text
Abstract:
Stroke and heart attack, which can be caused by the cerebrovascular and cardiovascular disease known as atherosclerosis, are serious sources of human morbidity and mortality. Early-stage diagnosis of atherosclerosis and monitoring of medical intervention are therefore important. Carotid stenosis is a classical atherosclerotic lesion in which the vessel wall narrows and plaque burden accumulates. The intima-media thickness (IMT) of the carotid artery is a key indicator of the disease. With the development of computer-assisted diagnosis technology, the imaging techniques, segmentation algorithms, measurement methods, and evaluation tools have made considerable progress. Ultrasound imaging, being real-time, economic, reliable, and safe, has become a standard in vascular assessment methodology, especially for the measurement of IMT. This review first discusses the clinical relevance of IMT measurements in practice, followed by the challenges that one has to face when approaching the segmentation of ultrasound images. Second, the commonly used methods for IMT segmentation and measurement are presented. Third, different segmentation techniques are discussed and evaluated. Finally, a summary and future perspectives are given.
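Once the lumen-intima and media-adventitia interfaces have been segmented, IMT reduces to an average separation between them. A minimal sketch under the assumption that both interfaces are sampled on the same column grid of a longitudinal B-mode view (function name and scale factor are illustrative):

```python
import numpy as np

def mean_imt(lumen_intima, media_adventitia, mm_per_pixel=1.0):
    """Average vertical separation between the detected lumen-intima and
    media-adventitia interfaces, column by column."""
    li = np.asarray(lumen_intima, dtype=float)
    ma = np.asarray(media_adventitia, dtype=float)
    return float(np.mean(ma - li)) * mm_per_pixel
```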
APA, Harvard, Vancouver, ISO, and other styles
49

Jiao, Yishan, Amy LaCross, Visar Berisha, and Julie Liss. "Objective Intelligibility Assessment by Automated Segmental and Suprasegmental Listening Error Analysis." Journal of Speech, Language, and Hearing Research 62, no. 9 (2019): 3359–66. http://dx.doi.org/10.1044/2019_jslhr-s-19-0119.

Full text
Abstract:
Purpose Subjective speech intelligibility assessment is often preferred over more objective approaches that rely on transcript scoring. This is, in part, because of the intensive manual labor associated with extracting objective metrics from transcribed speech. In this study, we propose an automated approach for scoring transcripts that provides a holistic and objective representation of intelligibility degradation stemming from both segmental and suprasegmental contributions, and that corresponds with human perception. Method Phrases produced by 73 speakers with dysarthria were orthographically transcribed by 819 listeners via Mechanical Turk, resulting in 63,840 phrase transcriptions. A protocol was developed to filter the transcripts, which were then automatically analyzed using novel algorithms developed for measuring phoneme and lexical segmentation errors. The results were compared with manual labels on a randomly selected sample set of 40 transcribed phrases to assess validity. A linear regression analysis was conducted to examine how well the automated metrics predict a perceptual rating of severity and word accuracy. Results On the sample set, the automated metrics achieved 0.90 correlation coefficients with manual labels on measuring phoneme errors, and 100% accuracy on identifying and coding lexical segmentation errors. Linear regression models found that the estimated metrics could predict a significant portion of the variance in perceptual severity and word accuracy. Conclusions The results show the promising development of an objective speech intelligibility assessment that identifies intelligibility degradation on multiple levels of analysis.
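Phoneme-level error counting of the kind automated above is commonly grounded in edit distance between the reference and transcribed sequences; the study's algorithms are more elaborate, but the core computation can be sketched as:

```python
def edit_distance(ref, hyp):
    """Levenshtein distance between a reference and a transcribed sequence
    (characters, phonemes, or words), computed with a single rolling row."""
    dp = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, start=1):
        prev, dp[0] = dp[0], i
        for j, h in enumerate(hyp, start=1):
            prev, dp[j] = dp[j], min(dp[j] + 1,        # deletion from ref
                                     dp[j - 1] + 1,    # insertion into ref
                                     prev + (r != h))  # substitution or match
    return dp[-1]
```

Dividing the distance by the reference length gives a phoneme error rate, the usual normalized score.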
APA, Harvard, Vancouver, ISO, and other styles
50

Kim, Jung-Yeon, Geunsu Park, Seong-A. Lee, and Yunyoung Nam. "Analysis of Machine Learning-Based Assessment for Elbow Spasticity Using Inertial Sensors." Sensors 20, no. 6 (2020): 1622. http://dx.doi.org/10.3390/s20061622.

Full text
Abstract:
Spasticity is a frequently observed symptom in patients with neurological impairments. Spastic movements of their upper and lower limbs are periodically measured to evaluate functional outcomes of physical rehabilitation, and they are quantified by clinical outcome measures such as the modified Ashworth scale (MAS). This study proposes a method to determine the severity of elbow spasticity, by analyzing the acceleration and rotation attributes collected from the elbow of the affected side of patients and machine-learning algorithms to classify the degree of spastic movement; this approach is comparable to assigning an MAS score. We collected inertial data from participants using a wearable device incorporating inertial measurement units during a passive stretch test. Machine-learning algorithms—including decision tree, random forests (RFs), support vector machine, linear discriminant analysis, and multilayer perceptrons—were evaluated in combinations of two segmentation techniques and feature sets. A RF performed well, achieving up to 95.4% accuracy. This work not only successfully demonstrates how wearable technology and machine learning can be used to generate a clinically meaningful index but also offers rehabilitation patients an opportunity to monitor the degree of spasticity, even in nonhealthcare institutions where the help of clinical professionals is unavailable.
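Segmenting the inertial signals into windows and extracting features per window is the usual front end for such classifiers. A minimal sketch (the window and step sizes and feature set are illustrative, not the study's):

```python
import numpy as np

def window_features(signal, win=64, step=32):
    """Sliding-window features (mean, std, peak-to-peak) over one inertial
    channel, of the kind fed to the machine-learning classifiers."""
    feats = []
    for start in range(0, len(signal) - win + 1, step):
        w = np.asarray(signal[start:start + win], dtype=float)
        feats.append((float(w.mean()), float(w.std()), float(w.max() - w.min())))
    return feats
```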
APA, Harvard, Vancouver, ISO, and other styles