To see the other types of publications on this topic, follow the link: Image processing Mathematical statistics.

Journal articles on the topic 'Image processing Mathematical statistics'


Consult the top 50 journal articles for your research on the topic 'Image processing Mathematical statistics.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Doukovska, Lyubka, Venko Petkov, Emil Mihailov, and Svetla Vassileva. "Image Processing for Technological Diagnostics of Metallurgical Facilities." Cybernetics and Information Technologies 12, no. 4 (December 1, 2012): 66–76. http://dx.doi.org/10.2478/cait-2012-0031.

Abstract:
The paper presents an overview of image-processing techniques. The set of basic theoretical instruments includes methods of mathematical analysis, linear algebra, probability theory and mathematical statistics, the theory of digital processing of one-dimensional and multidimensional signals, wavelet transforms and information theory. The paper describes a methodology that aims to detect and diagnose faults in metallurgical facilities, using thermographic approaches together with digital image processing techniques.
2

Li, Hong Chao, Hong Ya Yue, Fei Ye, and Jing Liu. "A Method to Evaluate Mixture Segregation Based on Space Grid of Image Processing Technology." Applied Mechanics and Materials 482 (December 2013): 49–52. http://dx.doi.org/10.4028/www.scientific.net/amm.482.49.

Abstract:
A space grid method based on image processing technology was proposed to evaluate asphalt mixture segregation. It describes material uniformity by the ratio of the variance of the statistical data to the square of its mathematical expectation, computed from the statistics of the particle distribution over the space grid. A computer procedure was developed and experiments were performed. The results indicate that this method is simple and can be used to evaluate asphalt mixture uniformity quantitatively with good adaptability.
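Read literally, the uniformity index above appears to be the squared coefficient of variation of the per-cell particle counts. In our own notation (not the paper's), for grid-cell counts $x_1, \dots, x_n$:

$$U = \frac{\sigma^2}{\mu^2}, \qquad \mu = \frac{1}{n}\sum_{i=1}^{n} x_i, \qquad \sigma^2 = \frac{1}{n}\sum_{i=1}^{n} (x_i - \mu)^2,$$

with $U \approx 0$ for a uniform mixture and larger $U$ indicating stronger segregation.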
3

Svynchuk, Olga, Oleg Barabash, Joanna Nikodem, Roman Kochan, and Oleksandr Laptiev. "Image Compression Using Fractal Functions." Fractal and Fractional 5, no. 2 (April 14, 2021): 31. http://dx.doi.org/10.3390/fractalfract5020031.

Abstract:
The rapid growth of geographic information technologies in the field of processing and analysis of spatial data has led to a significant increase in the role of geographic information systems in various fields of human activity. However, solving complex problems requires the use of large amounts of spatial data, efficient storage of data on on-board recording media and their transmission via communication channels. This leads to the need to create new effective methods of compression and transmission of Earth remote sensing data. The possibility of using fractal functions for processing images transmitted via the satellite radio channel of a spacecraft is considered. The information obtained by such a system is presented in the form of aerospace images that need to be processed and analyzed in order to obtain information about the objects that are displayed. An algorithm for constructing image encoding–decoding using a class of continuous functions that depend on a finite set of parameters and have fractal properties is investigated. The mathematical model used in fractal image compression is called an iterated function system. The encoding process is time-consuming because it performs a large number of transformations and mathematical calculations. However, due to this, a high degree of image compression is achieved. This class of functions has an interesting property: knowing the initial sets of numbers, we can easily calculate the value of the function, but when the values of the function are known, it is very difficult to return the initial set of values, because there are a huge number of such combinations. Therefore, in order to decode the image, it is necessary to know the fractal codes that will help to restore the raster image.
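To make the encoding-decoding idea concrete, here is a minimal partitioned-IFS sketch (grayscale, fixed block sizes, no isometries; the block sizes and contractivity clamp are illustrative choices, not the authors'):

```python
# Minimal fractal (partitioned IFS) coder sketch; assumes a square grayscale
# image whose side is a multiple of 2*R.
import numpy as np

R = 8  # range-block size; domain blocks are 2R x 2R, downsampled to R x R

def downsample(block):
    # average 2x2 neighborhoods, halving each dimension
    return block.reshape(block.shape[0] // 2, 2, block.shape[1] // 2, 2).mean(axis=(1, 3))

def encode(img):
    h, w = img.shape
    domains = [(y, x, downsample(img[y:y + 2 * R, x:x + 2 * R]))
               for y in range(0, h - 2 * R + 1, 2 * R)
               for x in range(0, w - 2 * R + 1, 2 * R)]
    codes = []
    for ry in range(0, h, R):
        for rx in range(0, w, R):
            rng = img[ry:ry + R, rx:rx + R].astype(float)
            best = None
            for dy, dx, dom in domains:
                d = dom.ravel() - dom.mean()
                s = (d @ (rng.ravel() - rng.mean())) / max(d @ d, 1e-12)
                s = float(np.clip(s, -0.9, 0.9))   # keep each map contractive
                o = rng.mean() - s * dom.mean()
                err = np.sum((s * dom + o - rng) ** 2)
                if best is None or err < best[0]:
                    best = (err, dy, dx, s, o)
            codes.append((ry, rx) + best[1:])      # fractal code of this block
    return codes

def decode(codes, shape, iterations=10):
    img = np.zeros(shape)
    for _ in range(iterations):                    # iterate toward the fixed point
        out = np.empty_like(img)
        for ry, rx, dy, dx, s, o in codes:
            out[ry:ry + R, rx:rx + R] = s * downsample(img[dy:dy + 2 * R, dx:dx + 2 * R]) + o
        img = out
    return img
```

Because every block map is contractive, decoding converges to (approximately) the encoded image from any starting image, which is the property the abstract alludes to.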
4

Васильева, Ирина Карловна, and Владимир Васильевич Лукин. "АНАЛИЗ МЕТОДОВ ПОСТКЛАССИФИКАЦИОННОЙ ОБРАБОТКИ МНОГОКАНАЛЬНЫХ ИЗОБРАЖЕНИЙ" [Analysis of methods of post-classification processing of multichannel images]. RADIOELECTRONIC AND COMPUTER SYSTEMS, no. 1 (March 23, 2019): 17–28. http://dx.doi.org/10.32620/reks.2019.1.02.

Abstract:
The subject matter of the article is the methods of morphological spatial filtering of images in pseudo-colors obtained as a result of statistical segmentation of multichannel satellite images. The aim is to study the effectiveness of various methods of post-classification image processing in order to increase the probability of correct recognition for observed objects. The tasks to be solved are: to select a mathematical model describing the training sets of objects’ classes; to implement the procedure of statistical controlled classification by the maximum likelihood method; to evaluate the results of objects’ recognition on the test image by the criterion of the empirical probability of correct recognition; to formalize the procedures of local object-oriented filtering of a segmented image; to investigate the effectiveness of rank filtering as well as weighted median filtering procedures taking into account the results of the classification by k-nearest neighbors in the filter window. The methods used are methods of empirical distributions’ approximation, statistical recognition methods, methods of probability theory and mathematical statistics, methods of local spatial filtering. The following results were obtained. A method for synthesizing a universal mathematical model has been proposed for describing non-Gaussian signal characteristics of objects on multichannel images based on a multi-dimensional variant of the Johnson SB distribution; this model was used for statistical pixel-by-pixel classification of the original satellite image. Algorithms for local post-classification processing in the neighborhood of the selected segments’ boundaries have been implemented. The analysis of the developed algorithms’ effectiveness based on estimates of classes’ correct recognition probability is performed. Conclusions. The scientific novelty of the results obtained is as follows: combined approaches to the pattern recognition procedures have been further developed – it has been shown that the use of methods of local object-oriented filtering of segmented images makes it possible to reduce the number of point errors for element-wise classification of spatial objects.
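A hedged sketch of the local post-classification idea: a plain majority-vote relabeling in a sliding window (a rank-type filter); the paper's weighted-median and k-nearest-neighbor variants are richer than this.

```python
# Majority-vote smoothing of a label map produced by per-pixel classification.
import numpy as np
from scipy.ndimage import generic_filter

def majority_filter(labels, size=3):
    def vote(window):
        vals, counts = np.unique(window.astype(int), return_counts=True)
        return vals[np.argmax(counts)]       # most frequent label in the window
    return generic_filter(labels, vote, size=size, mode='nearest')
```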
5

Zhang, Xin Hua, Shu Hao Xu, Li Li Wu, Yin Hua Du, Zhi Jun Duan, and Jian Ping Chen. "Large Sections Peak Traffic Flow Subway Station Simulation Modeling." Advanced Materials Research 1065-1069 (December 2014): 3325–28. http://dx.doi.org/10.4028/www.scientific.net/amr.1065-1069.3325.

Abstract:
In this paper, a station where three subway lines converge is analyzed for peak-time transfer passenger flow. Combining field statistics of the passenger flow with image processing technology to extract passenger flow data, the maximum local speed and density of the flow are analyzed, and a mathematical model is established by applying traffic dynamics theory and the mathematical software Matlab.
6

Tanaka, Kazuyuki. "Statistical-mechanical approach to image processing." Journal of Physics A: Mathematical and General 35, no. 37 (September 5, 2002): R81–R150. http://dx.doi.org/10.1088/0305-4470/35/37/201.

7

Huang, Lingxiao, Qiao Qiao, Lanxiang Zheng, Libo Liu, Wenjuan Zhao, Hefang Jing, and Chunguang Li. "Numerical simulation of three dimensional flow in Yazidang Reservoir based on image processing." Journal of Intelligent & Fuzzy Systems 39, no. 2 (August 31, 2020): 1591–600. http://dx.doi.org/10.3233/jifs-179932.

Abstract:
In order to study the water flow movement of the Yazidang Reservoir, this paper generates the initial terrain for the studied water area with image stitching and image edge detection technology, establishes a 3D k–ε mathematical model, solves the equations discretely by the FVM and SIMPLEC algorithms, studies the numerical simulation of the water flow movement of the reservoir under four working conditions, and analyzes the flow field on the surface and at the bottom of the reservoir. The results show improved terrain pre-processing accuracy and efficiency for the studied water area and the rationality of the simulated flow field and flow rates, which means that the established 3D turbulence mathematical model can be applied to the numerical simulation of reservoirs similar to the Yazidang Reservoir. The numerical simulation of 3D turbulence in the Yazidang Reservoir provides a theoretical basis and practical application value for the numerical simulation of similar reservoirs.
8

Trebbia, P. "Application of Cross Entropy and Factorial Analysis to Image Processing." Proceedings, annual meeting, Electron Microscopy Society of America 48, no. 1 (August 12, 1990): 470–71. http://dx.doi.org/10.1017/s0424820100181105.

Abstract:
Most of the time, the problem that image analysis has to face is the following: from a given data set obtained with independent measurements, how to extract the signal (the “useful information”) which is mixed with a background (coming from the object itself or from the detector) and statistical noise? If pre-knowledge of the specimen (or of the experimental procedure) gives some reasonable hypothesis about an empirical mathematical description of the background and of the noise, then one can try some “fitting” method (maximum likelihood, for example) in order to solve the problem. But if no a-priori model is available, one is reduced to trying to understand the “meaning” of the images through an analysis of the information content of the data set. Two main statistical approaches can be of some help in that process. 1) First, one can measure from the histogram of the data set some characteristic statistical parameters (mean and variance) and compare it with a Gaussian histogram of the same mean and variance which would be obtained under the same experimental conditions (same number of detected electrons, for example) in a pure random process, that is, with a “neutral” specimen showing absolutely no contrast. Then from these two histograms, one may estimate the relative cross entropy, that is, the information value added by the real presence of a given object in the specimen chamber. Moreover, one can make a 2D plot of this information in order to localize the pixels in the data set whose intensities significantly differ from a pure random process. 2) The second possibility is to make a variance analysis of the data set: if we have pre-knowledge that some of the images in this set do not contain the “contrast information” we are looking for, then there must be some quantitative difference between these images and the other ones. Among the various algorithms available from multivariate statistics, factorial analysis has been proved to be suited to this kind of analysis.
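A minimal sketch of the first approach, assuming a grayscale image array: estimate the relative entropy (KL divergence) between the empirical intensity histogram and a Gaussian of the same mean and variance. The bin count and smoothing epsilon are arbitrary choices.

```python
import numpy as np

def relative_entropy_vs_gaussian(image, bins=64):
    data = image.ravel().astype(float)
    mu, sigma = data.mean(), data.std() + 1e-12
    hist, edges = np.histogram(data, bins=bins, density=True)
    p = hist * np.diff(edges)                       # empirical bin probabilities
    centers = (edges[:-1] + edges[1:]) / 2
    q = np.exp(-0.5 * ((centers - mu) / sigma) ** 2)
    q /= q.sum()                                    # Gaussian bin probabilities
    eps = 1e-12
    return float(np.sum(p * np.log((p + eps) / (q + eps))))  # D_KL(p || q), nats
```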
9

Choudhari, Khoobaram S., Pacheeripadikkal Jidesh, Parampalli Sudheendra, and Suresh D. Kulkarni. "Quantification and Morphology Studies of Nanoporous Alumina Membranes: A New Algorithm for Digital Image Processing." Microscopy and Microanalysis 19, no. 4 (May 24, 2013): 1061–72. http://dx.doi.org/10.1017/s1431927613001542.

Abstract:
A new mathematical algorithm is reported for the accurate and efficient analysis of pore properties of nanoporous anodic alumina (NAA) membranes using scanning electron microscope (SEM) images. NAA membranes of the desired pore size were fabricated using a two-step anodic oxidation process. Surface morphology of the NAA membranes with different pore properties was studied using SEM images along with computerized image processing and analysis. The main objective was to analyze the SEM images of NAA membranes quantitatively, systematically, and quickly. The method uses a regularized shock filter for contrast enhancement, mathematical morphological operators, and a segmentation process for efficient determination of pore properties. The algorithm is executed using MATLAB, which generates a statistical report on the morphology of NAA membrane surfaces and performs accurate quantification of the parameters such as average pore-size distribution, porous area fraction, and average interpore distances. A good comparison between the pore property measurements was obtained using our algorithm and ImageJ software. This algorithm, with little manual intervention, is useful for optimizing the experimental process parameters during the fabrication of such nanostructures. Further, the algorithm is capable of analyzing SEM images of similar or asymmetrically porous nanostructures where sample and background have distinguishable contrast.
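A hedged re-creation of such a pore-quantification pipeline in scikit-image rather than the authors' MATLAB code (threshold, morphological cleanup, labeling, statistics); the parameter values are illustrative.

```python
import numpy as np
from skimage import filters, morphology, measure

def pore_statistics(sem_image, min_pore_px=10):
    thresh = filters.threshold_otsu(sem_image)
    pores = sem_image < thresh                        # pores appear dark in SEM
    pores = morphology.remove_small_objects(pores, min_pore_px)
    pores = morphology.binary_opening(pores, morphology.disk(1))
    props = measure.regionprops(measure.label(pores))
    areas = np.array([p.area for p in props])
    return {'pore_count': len(props),
            'mean_pore_area_px': float(areas.mean()) if len(areas) else 0.0,
            'porous_area_fraction': float(pores.mean())}
```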
10

Baltaev, Rodion Khamzaevich. "Method of covert information transfer in still images using a chaotic oscillator." Программные системы и вычислительные методы, no. 2 (February 2020): 1–7. http://dx.doi.org/10.7256/2454-0714.2020.2.32359.

Abstract:
The subject of the research is the steganographic method of embedding information in digital images. Steganography is able to hide not only the content of information, but also the fact of its existence. The paper presents a method of embedding and extracting information into digital images using a chaotic dynamic system. Chaotic systems are sensitive to certain signals and at the same time immune to noise. These properties allow the use of chaotic systems for embedding information with small image distortions in statistical and visual terms. The methodological basis of the study is the methods of the theory of dynamical systems, mathematical statistics, as well as the theory of image processing. The novelty of the study lies in the development of a new method of embedding information in static images. The author examines in detail the problem of using a chaotic dynamic Duffing system for embedding and extracting information in digital still images. It is shown that the proposed method allows you to embed information in digital images without significant distortion.
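For concreteness, a sketch of the forced Duffing oscillator commonly used as such a chaotic system, integrated with classical Runge-Kutta; the coefficients are illustrative and the paper's embedding/extraction protocol itself is not reproduced here.

```python
import numpy as np

def duffing_step(state, t, dt, delta=0.2, alpha=-1.0, beta=1.0, gamma=0.3, omega=1.2):
    # x'' + delta*x' + alpha*x + beta*x^3 = gamma*cos(omega*t)
    def f(s, tt):
        x, v = s
        return np.array([v, gamma * np.cos(omega * tt) - delta * v - alpha * x - beta * x ** 3])
    k1 = f(state, t)
    k2 = f(state + dt / 2 * k1, t + dt / 2)
    k3 = f(state + dt / 2 * k2, t + dt / 2)
    k4 = f(state + dt * k3, t + dt)
    return state + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# e.g. iterate from state = np.array([0.1, 0.0]) to generate a chaotic sequence
# whose samples could drive small, noise-like modifications of pixel values.
```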
11

Ponomaryov, Volodymyr, Francisco Gallegos-Funes, and Alberto Rosales-Silva. "Real-Time Color Image Processing Using Order Statistics Filters." Journal of Mathematical Imaging and Vision 23, no. 3 (November 2005): 315–19. http://dx.doi.org/10.1007/s10851-005-2025-8.

12

Rodríguez-Aragón, Licesio J., and Anatoly Zhigljavsky. "Singular spectrum analysis for image processing." Statistics and Its Interface 3, no. 3 (2010): 419–26. http://dx.doi.org/10.4310/sii.2010.v3.n3.a14.

13

Puchala, Dariusz, Kamil Stokfiszewski, and Mykhaylo Yatsymirskyy. "Image Statistics Preserving Encrypt-then-Compress Scheme Dedicated for JPEG Compression Standard." Entropy 23, no. 4 (March 31, 2021): 421. http://dx.doi.org/10.3390/e23040421.

Abstract:
In this paper, the authors analyze in more detail an image encryption scheme, proposed by the authors in their earlier work, which preserves input image statistics and can be used in connection with the JPEG compression standard. The image encryption process takes advantage of fast linear transforms parametrized with private keys and is carried out prior to the compression stage in a way that does not alter those statistical characteristics of the input image that are crucial from the point of view of the subsequent compression. This feature makes the encryption process transparent to the compression stage and enables the JPEG algorithm to maintain its full compression capabilities even though it operates on the encrypted image data. The main advantage of the considered approach is the fact that the JPEG algorithm can be used without any modifications as a part of the encrypt-then-compress image processing framework. The paper includes a detailed mathematical model of the examined scheme allowing for theoretical analysis of the impact of the image encryption step on the effectiveness of the compression process. The combinatorial and statistical analysis of the encryption process is also included, allowing its cryptographic strength to be evaluated. In addition, the paper considers several practical use-case scenarios with different characteristics of the compression and encryption stages. The final part of the paper contains the additional results of the experimental studies regarding general effectiveness of the presented scheme. The results show that for a wide range of compression ratios the considered scheme performs comparably to the JPEG algorithm alone, that is, without the encryption stage, in terms of the quality measures of reconstructed images. Moreover, the results of statistical analysis, as well as those obtained with generally approved quality measures of image cryptographic systems, prove high strength and efficiency of the scheme’s encryption stage.
14

Nagy, Marius, and Naya Nagy. "Image processing: why quantum?" Quantum Information and Computation 20, no. 7&8 (June 2020): 616–26. http://dx.doi.org/10.26421/qic20.7-8-6.

Abstract:
Quantum Image Processing has exploded in recent years with dozens of papers trying to take advantage of quantum parallelism in order to offer a better alternative to how current computers are dealing with digital images. The vast majority of these papers define or make use of quantum representations based on very large superposition states spanning as many terms as there are pixels in the image they try to represent. While such a representation may apparently offer an advantage in terms of space (number of qubits used) and speed of processing (due to quantum parallelism), it also harbors a fundamental flaw: only one pixel can be recovered from the quantum representation of the entire image, and even that one is obtained non-deterministically through a measurement operation applied on the superposition state. We investigate in detail this measurement bottleneck problem by looking at the number of copies of the quantum representation that are necessary in order to recover various fractions of the original image. The results clearly show that any potential advantage a quantum representation might bring with respect to a classical one is paid for dearly with the huge amount of resources (space and time) required by a quantum approach to image processing.
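As a hedged back-of-the-envelope reading of this bottleneck (our assumption, coupon-collector style: each measured copy yields one uniformly random pixel out of $N$), the expected number of copies needed to recover a fraction $f$ of the pixels is

$$\mathbb{E}[\text{copies}] \approx N \ln\frac{1}{1-f},$$

so recovering 99% of a one-megapixel image would already take on the order of $10^6 \ln 100 \approx 4.6 \times 10^6$ copies.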
15

Васильева, Ирина Карловна, and Владимир Васильевич Лукин. "ИССЛЕДОВАНИЕ ЭФФЕКТИВНОСТИ МЕТОДОВ ПОСТ-КЛАССИФИКАЦИОННОЙ ОБРАБОТКИ ЗАШУМЛЕННЫХ МНОГОКАНАЛЬНЫХ ИЗОБРАЖЕНИЙ" [Study of the effectiveness of methods for post-classification processing of noisy multichannel images]. RADIOELECTRONIC AND COMPUTER SYSTEMS, no. 2 (June 21, 2019): 45–59. http://dx.doi.org/10.32620/reks.2019.2.04.

Abstract:
The subject matter of the article is the methods of local spatial post-processing of images obtained as a result of statistical per-pixel classification of multichannel satellite images distorted by additive Gaussian noise. The aim is to investigate the effectiveness of some variants of post-classification image processing methods over a wide range of signal-to-noise ratio; as a criterion of effectiveness, observed objects’ classification reliability indicators have been taken. The tasks to be solved are: to generate random values of the noise components’ brightness, ensuring that they coincide with the adopted probabilistic model; to implement a procedure of statistical controlled classification by the maximum likelihood method for images distorted by noise; to evaluate the results of the objects’ selection in noisy images by the criterion of the empirical probability of correct recognition; to implement procedures for local object-oriented post-processing of images; to investigate the effect of noise variance on the effectiveness of post-processing procedures. The methods used are: methods of stochastic simulation, methods of approximation of empirical dependencies, statistical methods of recognition, methods of probability theory and mathematical statistics, methods of local spatial filtering. The following results have been obtained. Algorithms of rank and weighted median post-processing taking into account the results of classification by k-nearest neighbors in the filter window were implemented. An efficiency analysis of the developed algorithms, based on estimates of the correct recognition probability for objects on noisy images, was carried out. Empirical dependences of the estimates of the overall recognition errors’ probability versus the additive noise variance were obtained. Conclusions. The scientific novelty of the results obtained is as follows: combined approaches to building decision rules, taking into account destabilizing factors, have been further developed – it has been shown that the use of methods of local object-oriented filtering of segmented images reduces the number of point errors in the element-based classification of objects, as well as partially restores the connectedness and spatial distribution of image structure elements.
16

Mishra, Harshita, and Anuradha Misra. "Techniques for Image Segmentation: A Critical Review." International Journal of Research in Advent Technology 9, no. 3 (April 10, 2021): 1–4. http://dx.doi.org/10.32622/ijrat.93202101.

Abstract:
In today’s world, techniques and methods are needed for retrieving information from images, information that is important for solving present-day problems. In this review we study the processing involved in the digitization of an image. An image is an array of pixels (picture elements) arranged in a matrix of rows and columns. The image undergoes digitization, by which a digital image is formed; this processing is called digital image processing (DIP). Electronic devices such as computers are used to process the image into a digital image. There are various techniques used for the image segmentation process. In this review we also try to understand the role of data mining in extracting information from images. Data mining is the process of identifying patterns in large stored data with the help of statistical and mathematical algorithms. Pixel-wise classification for image segmentation uses data mining techniques.
17

Loo, Chu Kiong, Mitja Peruš, and Horst Bischof. "Associative Memory Based Image and Object Recognition by Quantum Holography." Open Systems & Information Dynamics 11, no. 03 (September 2004): 277–89. http://dx.doi.org/10.1023/b:opsy.0000047571.17774.8d.

Abstract:
A quantum associative memory, much more natural than those of “quantum computers”, is presented. Neural-net-like processing with real-valued variables is transformed into processing with quantum waves. Successful computer simulations of image storage and retrieval are reported. Our Hopfield-like algorithm allows quantum implementation with holographic procedure using present-day quantum-optics techniques. This brings many advantages over classical Hopfield neural nets and quantum computers with logic gates.
18

Qian, Chun Hua, He Qun Qiang, and Sheng Rong Gong. "An Image Classification Algorithm Based on SVM." Applied Mechanics and Materials 738-739 (March 2015): 542–45. http://dx.doi.org/10.4028/www.scientific.net/amm.738-739.542.

Abstract:
Image classification is an image processing method that distinguishes between different categories of objects according to image features. It is widely used in pattern recognition and computer vision. The Support Vector Machine (SVM) is a machine learning method based on statistical learning theory; it has a rigorous mathematical foundation and is built on the structural risk minimization criterion. We design an image classification algorithm based on SVM in this paper, use the Gabor wavelet transform to extract image features, and use Principal Component Analysis (PCA) to reduce the dimension of the feature matrix. We use orange images and the LIBSVM software package in our experiments and select the RBF kernel function. The experimental results demonstrate that the classification accuracy of our algorithm exceeds 95%.
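A hedged sketch of the described pipeline (Gabor features, PCA, RBF-kernel SVM) using scikit-image and scikit-learn in place of LIBSVM; the filter bank, feature statistics and component count are illustrative.

```python
import numpy as np
from skimage.filters import gabor
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

def gabor_features(img, frequencies=(0.1, 0.2, 0.4), n_angles=4):
    feats = []
    for f in frequencies:
        for theta in np.linspace(0, np.pi, n_angles, endpoint=False):
            real, imag = gabor(img, frequency=f, theta=theta)
            mag = np.hypot(real, imag)
            feats += [mag.mean(), mag.std()]     # simple per-filter statistics
    return np.array(feats)

# images: list of 2-D grayscale arrays, labels: class indices (assumed data)
# X = np.stack([gabor_features(im) for im in images])
# clf = make_pipeline(PCA(n_components=10), SVC(kernel='rbf')).fit(X, labels)
```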
19

FILIPPINI, LUIGI. "METEOSAT IR IMAGE PROCESSING AND MPEG ANIMATION." International Journal of Modern Physics C 05, no. 05 (October 1994): 831–33. http://dx.doi.org/10.1142/s0129183194000957.

Abstract:
An HTML demo page, based on the results of a collaboration between CRS4 and CSP, is presented, explaining some image processing of meteorological data. Raw data, obtained every 30 minutes from the METEOSAT satellite, is processed to obtain colored high-contrast images; multiple images are then compressed using the MPEG video standard. The HTML page explains the details of the transformation process and contains links to sample images and the CRS4 MPEG weather movies archive.
20

Zhai, Weifang, Terry Gao, and Juan Feng. "Research on Pre-Processing Methods for License Plate Recognition." International Journal of Computer Vision and Image Processing 11, no. 1 (January 2021): 47–79. http://dx.doi.org/10.4018/ijcvip.2021010104.

Abstract:
The license plate recognition technology is an important part of the construction of an intelligent traffic management system. This paper mainly researches the image preprocessing, license plate location, and character segmentation in the license plate recognition system. In the preprocessing part, an edge detection method based on a convolutional neural network (CNN) is used. For license plate location, this paper proposes a method based on a combination of mathematical morphology and statistical jump points. First, the license plate area is initially located using mathematical morphology-related operations, and then the location of the license plate is accurately determined using statistical jump points. Finally, tilted plates are corrected. In the character segmentation process, the border and delimiter are first removed, and then the character vertical projection method and the character boundaries are used to segment the characters for actual use cases.
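A hedged sketch of morphology-based plate localization in the spirit of this method (vertical edge map, closing, aspect-ratio screening); the paper's CNN edge detector and jump-point refinement are not reproduced.

```python
import cv2

def candidate_plate_regions(bgr, min_aspect=2.0, max_aspect=6.0):
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Sobel(gray, cv2.CV_8U, 1, 0, ksize=3)        # vertical edges
    _, binary = cv2.threshold(edges, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (17, 3))
    closed = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)  # fuse edge clusters
    contours, _ = cv2.findContours(closed, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    boxes = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if h > 0 and min_aspect < w / h < max_aspect:        # plate-like shape
            boxes.append((x, y, w, h))
    return boxes
```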
21

Lordo, Robert A. "Image Processing and Jump Regression Analysis." Technometrics 48, no. 2 (May 2006): 312–13. http://dx.doi.org/10.1198/tech.2006.s395.

22

Fryzlewicz, Piotr, and Catherine Timmermans. "SHAH: SHape-Adaptive Haar Wavelets for Image Processing." Journal of Computational and Graphical Statistics 25, no. 3 (July 2, 2016): 879–98. http://dx.doi.org/10.1080/10618600.2015.1048345.

23

Huang, Hong, and Risheng Deng. "Analysis Technology of Tennis Sports Match Based on Data Mining and Image Feature Retrieval." Complexity 2020 (October 14, 2020): 1–15. http://dx.doi.org/10.1155/2020/8877161.

Abstract:
Tennis game technical analysis is affected by factors such as complex backgrounds and on-site noise, which lead to certain deviations in the results, and it is difficult to obtain scientific and effective tennis training strategies from a few game videos. In order to improve the performance of tennis game technical analysis, this paper, based on machine learning algorithms, combines image analysis of athletes' movement characteristics with image feature recognition technology, realizes real-time tracking of athletes' dynamic characteristics, and records technical characteristics. Moreover, this paper uses data mining technology to obtain effective data from massive video and image data, uses mathematical statistics and data mining for data processing, and scientifically analyzes tennis game technique with the support of ergonomics. In addition, this paper designs a controlled experiment to verify the technical analysis effect for tennis matches and the performance of the model itself. The research results show that the model constructed in this paper has certain practical effects and can be applied to actual competitions.
24

Racetin, Ivan, and Andrija Krtalić. "Systematic Review of Anomaly Detection in Hyperspectral Remote Sensing Applications." Applied Sciences 11, no. 11 (May 26, 2021): 4878. http://dx.doi.org/10.3390/app11114878.

Abstract:
Hyperspectral sensors are passive instruments that record reflected electromagnetic radiation in tens or hundreds of narrow and consecutive spectral bands. In the last two decades, the availability of hyperspectral data has sharply increased, propelling the development of a plethora of hyperspectral classification and target detection algorithms. Anomaly detection methods in hyperspectral images refer to a class of target detection methods that do not require any a-priori knowledge about a hyperspectral scene or target spectrum. They are unsupervised learning techniques that automatically discover rare features on hyperspectral images. This review paper is organized into two parts: part A provides a bibliographic analysis of hyperspectral image processing for anomaly detection in remote sensing applications. Development of the subject field is discussed, and key authors and journals are highlighted. In part B an overview of the topic is presented, starting from the mathematical framework for anomaly detection. The anomaly detection methods were generally categorized as techniques that implement structured or unstructured background models and then organized into appropriate sub-categories. Specific anomaly detection methods are presented with corresponding detection statistics, and their properties are discussed. This paper represents the first review regarding hyperspectral image processing for anomaly detection in remote sensing applications.
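For reference, the classic unsupervised baseline in this literature is the global Reed-Xiaoli (RX) detector, which scores each pixel by its Mahalanobis distance from the background statistics:

```python
import numpy as np

def rx_detector(cube):
    """cube: (rows, cols, bands) array -> per-pixel RX anomaly score."""
    h, w, b = cube.shape
    X = cube.reshape(-1, b).astype(float)
    mu = X.mean(axis=0)
    cov_inv = np.linalg.pinv(np.cov(X, rowvar=False))   # pseudo-inverse for stability
    D = X - mu
    scores = np.einsum('ij,jk,ik->i', D, cov_inv, D)    # squared Mahalanobis distance
    return scores.reshape(h, w)
```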
25

Priyadi, Priyadi. "Analysis of Spatio Temporal Change of Land Use of Chrysanthemum Farm in Semarang Regency Using Landsat Image 8 OLI." Indonesian Journal of Computing and Modeling 1, no. 2 (October 15, 2018): 49–54. http://dx.doi.org/10.24246/j.icm.2018.v1.i2.p49-54.

Abstract:
Agriculture is a business that has long existed and will likely continue to exist throughout the ages. The same holds for the use of agricultural land for chrysanthemum, for which analysis methods based on technology and computer science need to be developed. This research intends to analyze and develop effective and efficient methods for studying chrysanthemum land use. The satellite imagery work in this research, covering pre-processing, processing and post-processing, is mostly done with QGIS software. The satellite image used is LANDSAT 8 OLI, and the bands used are band 6, band 5, band 2 and band 8. The stages of this research include cutting the images to the research area for all bands and all temporal images involved, pansharpening using the "Orfeo toolbox", image classification with "semi-automatic classification", and finally analysis of land use change using "molusce". The result of this research is spatio-temporal data from two images of chrysanthemum land use, from April 2017 and August 2017. The mathematical result is a statistical table of changes in agricultural land use between the two temporal images used as research objects. In addition to these two analysis results, this study also describes a digital image analysis method for agricultural land used for chrysanthemum, which is expected to be useful for remote sensing research on other objects or for extensions of this research.
26

Hall, Peter. "On the amount of detail that can be recovered from a degraded signal." Advances in Applied Probability 19, no. 2 (June 1987): 371–95. http://dx.doi.org/10.2307/1427424.

Abstract:
Motivated by applications in digital image processing, we discuss information-theoretic bounds to the amount of detail that can be recovered from a defocused, noisy signal. Mathematical models are constructed for test-pattern, defocusing and noise. Using these models, upper bounds are derived for the amount of detail that can be recovered from the degraded signal, using any method of image restoration. The bounds are used to assess the performance of the class of linear restorative procedures. Certain members of the class are shown to be optimal, in the sense that they attain the bounds, while others are shown to be sub-optimal. The effect of smoothness of point-spread function on the amount of resolvable detail is discussed concisely.
27

Tosi, Sébastien, Lídia Bardia, Maria Jose Filgueira, Alexandre Calon, and Julien Colombelli. "LOBSTER: an environment to design bioimage analysis workflows for large and complex fluorescence microscopy data." Bioinformatics 36, no. 8 (December 20, 2019): 2634–35. http://dx.doi.org/10.1093/bioinformatics/btz945.

Abstract:
Abstract Summary Open source software such as ImageJ and CellProfiler greatly simplified the quantitative analysis of microscopy images but their applicability is limited by the size, dimensionality and complexity of the images under study. In contrast, software optimized for the needs of specific research projects can overcome these limitations, but they may be harder to find, set up and customize to different needs. Overall, the analysis of large, complex, microscopy images is hence still a critical bottleneck for many Life Scientists. We introduce LOBSTER (Little Objects Segmentation and Tracking Environment), an environment designed to help scientists design and customize image analysis workflows to accurately characterize biological objects from a broad range of fluorescence microscopy images, including large images exceeding workstation main memory. LOBSTER comes with a starting set of over 75 sample image analysis workflows and associated images stemming from state-of-the-art image-based research projects. Availability and implementation LOBSTER requires MATLAB (version ≥ 2015a), MATLAB Image processing toolbox, and MATLAB statistics and machine learning toolbox. Code source, online tutorials, video demonstrations, documentation and sample images are freely available from: https://sebastients.github.io. Supplementary information Supplementary data are available at Bioinformatics online.
28

Hall, Peter, and Inge Koch. "On continuous image models and image analysis in the presence of correlated noise." Advances in Applied Probability 22, no. 2 (June 1990): 332–49. http://dx.doi.org/10.2307/1427539.

Abstract:
Most theoretical studies of image processing employ discrete image models. While that might be a good approximation to digital analysis, it severely restricts the class of tractable models for the blur component of image degradation, and concentrates excessive attention on specialized features of the pixel lattice. It is analogous to modelling all real statistical data using discrete distributions, which is clearly unnecessary. In this paper we study a continuous model for image analysis, in the presence of systematic degradation via a point spread function and stochastic degradation by a second-order stationary random field. Thus, we depart from the restrictive white-noise models which are commonly used in the theory of image analysis. We establish a general result which describes the performance of optimal image processing methods when the noise process has short-range dependence. Concise limits to resolution are derived, depending on image type, point spread function and noise correlation. These results are developed in important special cases, giving explicit formulae for optimal smoothing sets and convergence rates.
29

Zhang, Fulong, Hong Zhang, and Fengde Liu. "Effect of Laser-arc Hybrid Welding Energy Parameters on Welding Stability." MATEC Web of Conferences 175 (2018): 02019. http://dx.doi.org/10.1051/matecconf/201817502019.

Abstract:
A high-speed camera was used to record the droplet transfer and arc behavior during the laser-arc hybrid welding process. Using image processing and mathematical statistics methods, the effects of different laser and arc power conditions on welding stability were studied. The results show that the weld width depends on the welding current, the penetration depth depends on the laser power, and the droplet transfer mode, the actual filament spacing and the arc length together determine the stability of laser-arc hybrid welding.
30

ALONSO, JOSE, W. BYGRAVE, and L. M. GILLIN. "MATHEMATICAL VISUALIZATION AND ANALYSIS TECHNIQUES FOR ENTREPRENEURSHIP STUDIES: IMAGE PROCESSING IN THE UNITED STATES, EUROPE, JAPAN AND AUSTRALIA." Journal of Enterprising Culture 02, no. 01 (March 1994): 509–33. http://dx.doi.org/10.1142/s0218495894000136.

Abstract:
Statistics and computer graphics, using linear and non-linear techniques, have been applied to entrepreneurial survey data in a study of the image analysis industry. Chief Executive Officers and Directors of Research have been interviewed in the United States, Europe, Japan and Australia. During the interview, a long questionnaire was completed. A mathematical model in terms of motivation, resources and performance measures has been developed to evaluate company positioning, and for future implementation as an expert system for high technology investment evaluation. A set of indices, derived using cluster and principal component analyses, describes groupings of variables which can be used to find the locus of a company position in an n-dimensional space. This position helps to establish whether the requisite technical, knowledge, marketing and financial infrastructures of the company are in keeping with the n-dimensional surfaces established by other companies in the field. These surfaces are then visualized using polynomial and Fourier parametric methods. Stratifications of the database by culture (as determined by geographical location), manufacturer-user-integrator classification, measures of innovation, and modes of deployment of financial resources are studied in terms of their taxonomic and performance characterisation abilities. Preliminary analyses reveal noticeable clustering of motivational variables by culture.
31

Debayle, Johan, and Jean-Charles Pinoli. "General Adaptive Neighborhood Image Processing." Journal of Mathematical Imaging and Vision 25, no. 2 (August 14, 2006): 245–66. http://dx.doi.org/10.1007/s10851-006-7451-8.

32

Debayle, Johan, and Jean-Charles Pinoli. "General Adaptive Neighborhood Image Processing." Journal of Mathematical Imaging and Vision 25, no. 2 (August 14, 2006): 267–84. http://dx.doi.org/10.1007/s10851-006-7452-7.

33

Tanaka, Kazuyuki, and D. M. Titterington. "Statistical trajectory of an approximate EM algorithm for probabilistic image processing." Journal of Physics A: Mathematical and Theoretical 40, no. 37 (August 29, 2007): 11285–300. http://dx.doi.org/10.1088/1751-8113/40/37/007.

34

Vijayaraj, J., et al. "Various Segmentation Techniques for Lung Cancer Detection using CT Images: A Review." Turkish Journal of Computer and Mathematics Education (TURCOMAT) 12, no. 2 (April 10, 2021): 918–28. http://dx.doi.org/10.17762/turcomat.v12i2.1102.

Abstract:
Computed Tomography (CT) is widely used to diagnose and assess thoracic diseases. The enhanced resolution of CT examinations has resulted in a considerable volume of data for analysis. Automating the analysis of such data is consequently needed and has formed a rapidly growing research area in medical imaging. The detection of thoracic diseases by means of image processing leads to a pre-processing step known as lung segmentation, which covers a wide range of techniques, starting with simple thresholding, into which numerous image processing elements are incorporated to improve segmentation precision and robustness. Techniques such as image pre-processing, segmentation and feature extraction are discussed in detail. This paper offers a survey of the literature on computer analysis of the lungs in CT scans and reports on pre-processing concepts, segmentation of various pulmonary structures, and feature extraction aimed at the detection and categorization of chest abnormalities. In addition, research trends and open problems are identified and directions for further investigation are discussed.
35

Auroux, Didier, and Mohamed Masmoudi. "Image Processing by Topological Asymptotic Expansion." Journal of Mathematical Imaging and Vision 33, no. 2 (November 18, 2008): 122–34. http://dx.doi.org/10.1007/s10851-008-0121-2.

36

Blachowicz, Tomasz, Krzysztof Domino, Michał Koruszowic, Jacek Grzybowski, Tobias Böhm, and Andrea Ehrmann. "Statistical Analysis of Nanofiber Mat AFM Images by Gray-Scale-Resolved Hurst Exponent Distributions." Applied Sciences 11, no. 5 (March 9, 2021): 2436. http://dx.doi.org/10.3390/app11052436.

Abstract:
Two-dimensional structures, either periodic or random, can be classified by diverse mathematical methods. Quantitative descriptions of such surfaces, however, are scarce since bijective definitions must be found to measure unique dependency between described structures and the chosen quantitative parameters. To solve this problem, we use statistical analysis of periodic fibrous structures by Hurst exponent distributions. Although such a Hurst exponent approach was suggested some years ago, the quantitative analysis of atomic force microscopy (AFM) images of nanofiber mats in such a way was described only recently. In this paper, we discuss the influence of typical AFM image post-processing steps on the gray-scale-resolved Hurst exponent distribution. Examples of these steps are polynomial background subtraction, aligning rows, deleting horizontal errors and sharpening. Our results show that while characteristic features of these false-color images may be shifted in terms of gray-channel and Hurst exponent, they can still be used to identify AFM images and, in the next step, to quantitatively describe AFM images of nanofibrous surfaces. Such a gray-channel approach can be regarded as a simple way to include some information about the 3D structure of the image.
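A minimal sketch of one common Hurst-exponent estimator (scaling of lagged-increment standard deviations along a single profile); the paper's gray-scale-resolved distributions aggregate many such per-profile estimates, and this simplified estimator is a substitution, not the authors' exact procedure.

```python
import numpy as np

def hurst_exponent(profile, max_lag=20):
    # assumes a non-constant 1-D profile longer than max_lag
    lags = np.arange(2, max_lag)
    tau = [np.std(profile[lag:] - profile[:-lag]) for lag in lags]
    slope, _ = np.polyfit(np.log(lags), np.log(tau), 1)   # tau ~ lag^H
    return slope

# distribution over the rows of a 2-D height map `img` (assumed data):
# hursts = [hurst_exponent(row) for row in img]
```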
37

Staniek, Marcin. "STEREO VISION METHOD APPLICATION TO ROAD INSPECTION." Baltic Journal of Road and Bridge Engineering 12, no. 1 (March 24, 2017): 38–47. http://dx.doi.org/10.3846/bjrbe.2017.05.

Abstract:
The paper presents a stereo vision method for the mapping of road pavement. The mapped road is a set of points in three-dimensional space. The proposed measurement method and its implementation make it possible to generate a precise mapping of a road surface with a resolution of 1 mm in the transverse, longitudinal and vertical dimensions. Such accurate mapping of the road is the effect of applying stereo images and image processing technologies. The use of the CoVar matching measure at the image-matching stage helps eliminate corner detection and stereo image filtering while maintaining the effectiveness of the mapping algorithm. Proper analysis of the image-based data and the application of mathematical transformations make it possible to determine many types of distresses, such as potholes, patches, bleedings, cracks, ruts and roughness. The paper also compares the results of the proposed solution with a reference test bench. Statistical analysis of the differences permits judgment of the error types.
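A hedged sketch of covariance-scored block matching for stereo disparity; the exact CoVar measure is defined in the paper, and the windowed covariance below is only an approximation of that idea.

```python
import numpy as np

def covar_score(a, b):
    return np.mean((a - a.mean()) * (b - b.mean()))       # windowed covariance

def disparity_at(left, right, y, x, win=5, max_disp=32):
    # left/right: rectified grayscale arrays; returns disparity at (y, x)
    h = win // 2
    ref = left[y - h:y + h + 1, x - h:x + h + 1].astype(float)
    scores = [covar_score(ref, right[y - h:y + h + 1, x - d - h:x - d + h + 1].astype(float))
              for d in range(min(max_disp, x - h) + 1)]
    return int(np.argmax(scores))                         # highest-covariance shift
```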
38

Wait, Eric, Mark Winter, and Andrew R. Cohen. "Hydra image processor: 5-D GPU image analysis library with MATLAB and python wrappers." Bioinformatics 35, no. 24 (June 26, 2019): 5393–95. http://dx.doi.org/10.1093/bioinformatics/btz523.

Abstract:
Abstract Summary Light microscopes can now capture data in five dimensions at very high frame rates producing terabytes of data per experiment. Five-dimensional data has three spatial dimensions (x, y, z), multiple channels (λ) and time (t). Current tools are prohibitively time consuming and do not efficiently utilize available hardware. The hydra image processor (HIP) is a new library providing hardware-accelerated image processing accessible from interpreted languages including MATLAB and Python. HIP automatically distributes data/computation across system and video RAM allowing hardware-accelerated processing of arbitrarily large images. HIP also partitions compute tasks optimally across multiple GPUs. HIP includes a new kernel renormalization reducing boundary effects associated with widely used padding approaches. Availability and implementation HIP is free and open source software released under the BSD 3-Clause License. Source code and compiled binary files will be maintained on http://www.hydraimageprocessor.com. A comprehensive description of all MATLAB and Python interfaces and user documents are provided. HIP includes GPU-accelerated support for most common image processing operations in 2-D and 3-D and is easily extensible. HIP uses the NVIDIA CUDA interface to access the GPU. CUDA is well supported on Windows and Linux with macOS support in the future.
39

Chichko, A. N., A. V. Vedeneev, and O. A. Sachek. "Computer method of processing of deformed steel wire microstructures images for its properties analysis." Ferrous Metallurgy. Bulletin of Scientific , Technical and Economic Information 75, no. 7 (August 8, 2019): 844–53. http://dx.doi.org/10.32339/0135-5910-2019-7-844-853.

Abstract:
The mechanical properties of bronzed bead wire depend, on the one hand, on the microstructure of the wire rod from which it is produced and, on the other hand, are determined by the peculiarities of the microstructure of the wire itself. At present, there are no generally accepted methods of deformed wire microstructure analysis that allow establishing a relationship between the characteristics of wire deformation and the microstructure characteristics. A mathematical apparatus and a new algorithm for processing the microstructure of pearlite steel wire after drawing were elaborated, based on microstructure image processing and its parameterization. As the main parameter of the microstructure, the density function of the statistical distribution of fiber lengths is used, which allows the microstructure image to be characterized quantitatively. Based on the experimental data of OJSC “BMZ – management company of “Byelorussian metallurgical company” holding”, a correlation between the degree of wire twisting and the degree of delamination is demonstrated. On the basis of samples of pearlite steel and their images, obtained for bronzed bead wire manufactured at the same plant, density functions of the statistical distribution of pearlite microstructure fiber lengths were calculated for the group of samples. Criteria for wire microstructure analysis, based on the density distribution function of fiber lengths, as well as a reduced distribution function, are proposed. A correlation between the degree of delamination, the number of wire twists and the characteristics of the wire microstructure, calculated using the distribution function of the wire’s fibrous structure with pronounced texture in the cross section, is determined.
40

García de la Nava, Jorge, Sacha van Hijum, and Oswaldo Trelles. "Saturation and Quantization Reduction in Microarray Experiments using Two Scans at Different Sensitivities." Statistical Applications in Genetics and Molecular Biology 3, no. 1 (January 8, 2004): 1–16. http://dx.doi.org/10.2202/1544-6115.1057.

Abstract:
We present a mathematical model to extend the dynamic range of gene expression data measured by laser scanners. The strategy is based on the rather simple but novel idea of producing two images with different scanner sensitivities, obtaining two different sets of expression values: the first is a low-sensitivity measure to obtain high expression values which would be saturated in a high-sensitivity measure; the second, by the converse strategy, obtains additional information about the low-expression levels. Two mathematical models based on linear and gamma curves are presented for relating the two measurements to each other and producing a coherent and extended range of values. The procedure minimizes the quantization relative error and avoids the collateral effects of saturation. Since most of the current scanner devices are able to adjust the saturation level, the strategy can be considered as a universal solution, and not dependent on the image processing software used for reading the DNA chip. Various tests have been performed, on both proprietary and public domain data sets, showing a reduction of the saturation and quantization effects, not achievable by other methods, with a more complete description of gene-expression data and with a reasonable computational complexity.
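A minimal sketch of the two-scan merge under the simpler linear model (the gamma-curve variant and scanner-specific details are in the paper; the saturation level is an assumed parameter):

```python
import numpy as np

def merge_two_scans(low, high, sat_level=65535):
    """low/high: co-registered low- and high-sensitivity scans (e.g. 16-bit)."""
    ok = (high < sat_level) & (low < sat_level)            # unsaturated in both
    a, b = np.polyfit(low[ok], high[ok], 1)                # high ~ a*low + b
    merged = high.astype(float)
    saturated = high >= sat_level
    merged[saturated] = a * low[saturated] + b             # extend the range
    return merged
```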
41

Gorokhovatsky, Volodymyr, Svitlana Gadetska, Oleksii Zhadan, and Oleksandr Khvostenko. "Study of the effectiveness of image classifiers by statistical distributions for components of structural description." Advanced Information Systems 5, no. 1 (June 22, 2021): 5–11. http://dx.doi.org/10.20998/2522-9052.2021.1.01.

Abstract:
The subject of research is models for constructing image classifiers in the description space as a set of descriptors of key points in the recognition of visual objects in computer vision systems. The goal is to create and study the properties of the image classifier based on the construction of an ensemble of distributions for the components of the structural description using various models of classification decisions, which provides effective classification. Tasks: construction of classification models in the synthesized space of images of probability distributions, analysis of parameters influencing their efficiency, experimental evaluation of the effectiveness of classifiers by means of software modeling based on the results of processing the experimental image base. The applied methods are: ORB detector for formation of keypoint descriptors, data mining, mathematical statistics, means of determining relevance for sets of data vectors, software modeling. The obtained results: The developed method of classification confirms its efficiency and effectiveness for image classification. The effectiveness of the method can be enhanced by the introduction of a variety of types of metrics and measures of similarity between centers and descriptors, by the choice of method of forming centers for reference etalon descriptions, by the introduction of logical processing and compression of the structural description. The best results of the classification were shown by the model using the most important class by the distribution vector for each descriptor corresponding to the mode parameter. The use of a concentrated part of the description data makes it possible to improve its distinction from other descriptions. The use of the median as the center of description has an advantage over the mean. Conclusions. The scientific novelty is the development of an effective method of image classification based on the introduction of a system of probability distributions for data components, which contributes to in-depth analysis in the data space and increases classification effectiveness. The classifier is implemented in the variants of comparing the integrated representation of distributions by classes and on the basis of mode analysis for the distributions of individual components. The practical importance of the work is the construction of classification models in the modified data space, confirmation of the efficiency of the proposed modifications of data analysis on examples of images, and development of software models for implementation of the proposed classification methods in computer vision systems.
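A hedged, simplified reading of such a classifier with OpenCV's ORB: each descriptor votes for the class whose etalon (reference) center is nearest, and the modal class wins; the paper's per-component distribution ensembles are richer than this sketch.

```python
import cv2
import numpy as np

orb = cv2.ORB_create(nfeatures=200)

def classify(gray, class_centers):
    """class_centers: {label: (32,) median descriptor} built from etalon images."""
    _, desc = orb.detectAndCompute(gray, None)
    if desc is None:
        return None                                   # no keypoints found
    votes = []
    for d in desc.astype(float):
        dists = {c: np.linalg.norm(d - ctr) for c, ctr in class_centers.items()}
        votes.append(min(dists, key=dists.get))       # nearest etalon center
    labels, counts = np.unique(votes, return_counts=True)
    return labels[np.argmax(counts)]                  # modal (most frequent) class
```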
42

Deleu, Anne-Leen, Machaba Junior Sathekge, Alex Maes, Bart De Spiegeleer, Mike Sathekge, and Christophe Van de Wiele. "Characterization of FDG PET Images Using Texture Analysis in Tumors of the Gastro-Intestinal Tract: A Review." Biomedicines 8, no. 9 (August 24, 2020): 304. http://dx.doi.org/10.3390/biomedicines8090304.

Abstract:
Radiomics or textural feature extraction obtained from positron emission tomography (PET) images through complex mathematical models of the spatial relationship between multiple image voxels is currently emerging as a new tool for assessing intra-tumoral heterogeneity in medical imaging. In this paper, available literature on texture analysis using FDG PET imaging in patients suffering from tumors of the gastro-intestinal tract is reviewed. While texture analysis of FDG PET images appears clinically promising, due to the lack of technical specifications, a large variability in the implemented methodology used for texture analysis and lack of statistical robustness, at present, no firm conclusions can be drawn regarding the predictive or prognostic value of FDG PET texture analysis derived indices in patients suffering from gastro-enterologic tumors. In order to move forward in this field, a harmonized image acquisition and processing protocol as well as a harmonized protocol for texture analysis of tumor volumes, allowing multi-center studies excluding statistical biases should be considered. Furthermore, the complementary and additional value of CT-imaging, as part of the PET/CT imaging technique, warrants exploration.
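For concreteness, a sketch of standard second-order (co-occurrence) texture features of the kind underlying many of the reviewed indices, via scikit-image; this is generic radiomics plumbing, not a specific method from the paper.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(img8, distances=(1,), angles=(0, np.pi / 2)):
    # img8: 2-D uint8 image (e.g. a resampled PET slice over the tumor volume)
    glcm = graycomatrix(img8, distances=distances, angles=angles,
                        levels=256, symmetric=True, normed=True)
    return {prop: float(graycoprops(glcm, prop).mean())
            for prop in ('contrast', 'homogeneity', 'energy', 'correlation')}
```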
43

Gehrke, S., and B. T. Beshah. "RADIOMETRIC NORMALIZATION OF LARGE AIRBORNE IMAGE DATA SETS ACQUIRED BY DIFFERENT SENSOR TYPES." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLI-B1 (June 3, 2016): 317–26. http://dx.doi.org/10.5194/isprsarchives-xli-b1-317-2016.

Abstract:
Generating seamless mosaics of aerial images is a particularly challenging task when the mosaic comprises a large number of images, collected over longer periods of time and with different sensors under varying imaging conditions. Such large mosaics typically consist of very heterogeneous image data, both spatially (different terrain types and atmosphere) and temporally (unstable atmospheric properties and even changes in land coverage).

We present a new radiometric normalization or, respectively, radiometric aerial triangulation approach that takes advantage of our knowledge about each sensor’s properties. The current implementation supports medium and large format airborne imaging sensors of the Leica Geosystems family, namely the ADS line-scanner as well as DMC and RCD frame sensors. A hierarchical modelling – with parameters for the overall mosaic, the sensor type, different flight sessions, strips and individual images – allows for adaptation to each sensor’s geometric and radiometric properties. Additional parameters at different hierarchy levels can compensate radiometric differences of various origins to compensate for shortcomings of the preceding radiometric sensor calibration as well as BRDF and atmospheric corrections. The final, relative normalization is based on radiometric tie points in overlapping images, absolute radiometric control points and image statistics. It is computed in a global least squares adjustment for the entire mosaic by altering each image’s histogram using a location-dependent mathematical model. This model involves contrast and brightness corrections at radiometric fix points with bilinear interpolation for corrections in-between. The distribution of the radiometry fixes is adaptive to each image and generally increases with image size, hence enabling optimal local adaptation even for very long image strips as typically captured by a line-scanner sensor.

The normalization approach is implemented in HxMap software. It has been successfully applied to large sets of heterogeneous imagery, including the adjustment of original sensor images prior to quality control and further processing as well as radiometric adjustment for ortho-image mosaic generation.
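A drastically reduced sketch of such an adjustment: one contrast/brightness (gain/offset) pair per image, solved by global least squares so that radiometric tie points agree across overlaps. The paper's model is hierarchical and location-dependent; the gauge constraint below is our own choice.

```python
import numpy as np

def adjust_gains_offsets(ties, n_images):
    """ties: list of (img_a, value_a, img_b, value_b) tie-point observations."""
    rows, rhs = [], []
    for ia, va, ib, vb in ties:      # want gain_a*va + off_a = gain_b*vb + off_b
        r = np.zeros(2 * n_images)
        r[2 * ia], r[2 * ia + 1] = va, 1.0
        r[2 * ib], r[2 * ib + 1] = -vb, -1.0
        rows.append(r); rhs.append(0.0)
    g = np.zeros(2 * n_images); g[0] = 1.0   # gauge: image 0 has gain 1 ...
    rows.append(g); rhs.append(1.0)
    g = np.zeros(2 * n_images); g[1] = 1.0   # ... and offset 0
    rows.append(g); rhs.append(0.0)
    sol, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return sol.reshape(n_images, 2)          # per-image (gain, offset)
```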
APA, Harvard, Vancouver, ISO, and other styles
44

HERNANDEZ, GONZALO, HANS J. HERRMANN, and ERIC GOLES. "EXTREMAL AUTOMATA FOR IMAGE SHARPENING." International Journal of Modern Physics C 05, no. 06 (December 1994): 923–31. http://dx.doi.org/10.1142/s0129183194001057.

Full text
Abstract:
We study numerically the parallel iteration of Extremal Rules. For four Extremal Rules, conceived as sharpening algorithms for image processing, we measured, on the square lattice with Von Neumann neighborhood and free boundary conditions, the typical transient length, the loss of information, and the damage-spreading response under random and smoothening random damage. The same qualitative behavior was found for all the rules, with no noticeable finite-size effect. They show fast logarithmic convergence towards the fixed points of the parallel update. The linear damage-spreading response has no discontinuity at zero damage for both kinds of damage. Three of these rules produce similar effects. We propose these rules as sharpening algorithms for image processing.
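The abstract does not spell out the four rules, so the sketch below implements one plausible extremal rule as an assumption: every cell is replaced, in parallel, by the most extreme value (farthest from mid-gray) in its Von Neumann neighborhood, which produces the sharpening behavior described.

```python
# One plausible extremal rule (an assumption, not necessarily one of the
# paper's four): each cell is replaced, in parallel, by the neighborhood
# value farthest from mid-gray. Edge replication approximates free boundaries.
import numpy as np

def extremal_step(img):
    p = np.pad(img.astype(int), 1, mode='edge')
    # Center plus the four Von Neumann neighbors, stacked along axis 0.
    stack = np.stack([p[1:-1, 1:-1],
                      p[:-2, 1:-1], p[2:, 1:-1],
                      p[1:-1, :-2], p[1:-1, 2:]])
    idx = np.abs(stack - 127.5).argmax(axis=0)      # most extreme value wins
    return np.take_along_axis(stack, idx[None], axis=0)[0].astype(np.uint8)

def iterate(img, max_steps=100):
    for step in range(max_steps):
        nxt = extremal_step(img)
        if np.array_equal(nxt, img):                # fixed point of the update
            return nxt, step
        img = nxt
    return img, max_steps

rng = np.random.default_rng(1)
sharp, steps = iterate(rng.integers(0, 256, (64, 64), dtype=np.uint8))
print("steps run:", steps)
```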
APA, Harvard, Vancouver, ISO, and other styles
45

Oszust, Mariusz. "No-Reference Image Quality Assessment with Local Gradient Orientations." Symmetry 11, no. 1 (January 16, 2019): 95. http://dx.doi.org/10.3390/sym11010095.

Full text
Abstract:
Image processing methods often introduce distortions, which affect the way an image is subjectively perceived by a human observer. To avoid inconvenient subjective tests in cases in which reference images are not available, it is desirable to develop an automatic no-reference image quality assessment (NR-IQA) technique. In this paper, a novel NR-IQA technique is proposed in which the distributions of local gradient orientations in image regions of different sizes are used to characterize an image. To evaluate the objective quality of an image, its luminance and chrominance channels are processed, as well as their high-order derivatives. Finally, statistics of the perceptual features used are mapped to subjective scores by the support vector regression (SVR) technique. An extensive experimental evaluation on six popular IQA benchmark datasets reveals that the proposed technique is highly correlated with subjective scores and outperforms related state-of-the-art hand-crafted and deep learning approaches.
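A minimal sketch of the core idea, assuming magnitude-weighted histograms of gradient orientations as the perceptual features (the bin count, the two scales, and the RBF kernel below are assumptions, not the paper's exact design):

```python
# Sketch: histograms of local gradient orientations as features, SVR as the
# quality regressor. Bin count, the two scales, and the RBF kernel are
# assumptions, not the paper's exact design; training data here is synthetic.
import numpy as np
from sklearn.svm import SVR

def orientation_histogram(img, bins=8):
    gy, gx = np.gradient(img.astype(float))
    theta = np.arctan2(gy, gx)                      # local gradient orientations
    mag = np.hypot(gx, gy)
    hist, _ = np.histogram(theta, bins=bins, range=(-np.pi, np.pi), weights=mag)
    return hist / (hist.sum() + 1e-12)              # magnitude-weighted, normalized

def features(img):
    half = img[::2, ::2]                            # crude second scale
    return np.concatenate([orientation_histogram(img), orientation_histogram(half)])

rng = np.random.default_rng(0)
imgs = [rng.random((64, 64)) for _ in range(20)]
scores = rng.random(20) * 100                       # stand-in subjective scores
model = SVR(kernel='rbf').fit([features(im) for im in imgs], scores)
print(model.predict([features(imgs[0])]))
```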
APA, Harvard, Vancouver, ISO, and other styles
46

Chan, Debora, Andrea Rey, Juliana Gambini, and Alejandro C. Frery. "Sampling from the $\mathcal{G}_I^0$ distribution." Monte Carlo Methods and Applications 24, no. 4 (December 1, 2018): 271–87. http://dx.doi.org/10.1515/mcma-2018-2023.

Full text
Abstract:
Synthetic Aperture Radar (SAR) images are widely used in several environmental applications because they provide information which cannot be obtained with other sensors. The $\mathcal{G}_I^0$ distribution is an important model for these images because of its flexibility (it provides a suitable way of modeling areas with different degrees of texture, reflectivity and signal-to-noise ratio) and tractability (it is closely related to the Snedecor F, Pareto Type II, and Gamma distributions). Simulated data are important for devising tools for SAR image processing, analysis and interpretation, among other applications. We compare four ways of sampling data that follow the $\mathcal{G}_I^0$ distribution, using several criteria for assessing the quality of the generated data and the processing time consumed. The experiments are performed by running codes in four different programming languages. The experimental results indicate that, although there is no overall best method across all the considered programming languages, it is possible to make specific recommendations for each one.
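One natural sampler can be sketched directly from the multiplicative model for $\mathcal{G}_I^0(\alpha,\gamma,L)$: the ratio of a unit-mean Gamma speckle term and a Gamma variate whose reciprocal models the backscatter. Whether this coincides with one of the paper's four methods is not asserted; the parameter values in the demo are arbitrary.

```python
# Sampling G_I^0(alpha, gamma, L) via the multiplicative model: Z = X / W,
# where X ~ Gamma(shape=L, rate=L) is unit-mean speckle and
# W ~ Gamma(shape=-alpha, rate=gamma), so 1/W is the inverse-gamma backscatter.
import numpy as np

def sample_gi0(alpha, gamma, L, size, rng=None):
    assert alpha < 0 and gamma > 0 and L >= 1
    rng = rng or np.random.default_rng()
    x = rng.gamma(shape=L, scale=1.0 / L, size=size)           # speckle
    w = rng.gamma(shape=-alpha, scale=1.0 / gamma, size=size)  # backscatter^-1
    return x / w

z = sample_gi0(alpha=-3.0, gamma=2.0, L=4, size=100_000,
               rng=np.random.default_rng(42))
print(z.mean())   # for alpha < -1 the mean is gamma / (-alpha - 1) = 1 here
```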
APA, Harvard, Vancouver, ISO, and other styles
47

Гаврилов, Дмитро Сергійович, Сергій Степанович Бучік, Юрій Михайлович Бабенко, Сергій Сергійович Шульгін, and Олександр Васильович Слободянюк. "A Method of Video Data Processing with the Possibility of Their Protection after Quantization" [Метод обробки відеоданих з можливістю їх захисту після квантування]. RADIOELECTRONIC AND COMPUTER SYSTEMS, no. 2 (June 2, 2021): 64–77. http://dx.doi.org/10.32620/reks.2021.2.06.

Full text
Abstract:
The subject of research in this article is video processing based on the JPEG platform for data transmission in an information and telecommunication network. The aim is to build a method for processing a video image with the possibility of protecting it at the quantization stage, with subsequent arithmetic coding, so that the structural and statistical regularity is preserved while the necessary levels of accessibility, reliability, and confidentiality of the transmitted video data are ensured. Task: to study known methods of selective video image processing and then formalize a procedure for processing a video image at the quantization stage, with statistical coding of significant blocks, on the basis of the JPEG platform. The methods used are an algorithm based on the JPEG platform, methods for selecting significant informative blocks, and arithmetic coding. The following results were obtained. A method for processing a video image with the possibility of its protection at the quantization stage, followed by arithmetic coding, has been developed. While preserving structural and statistical regularity, this method fulfills the stated requirements for accessible, reliable, and confidential transmission of video data. The required level of availability corresponds to a 30% reduction in video image volume compared to the original. The required level of reliability is confirmed by the estimate of the peak signal-to-noise ratio for an authorized user, which is dB. The required level of confidentiality is confirmed by the estimate of the peak signal-to-noise ratio under unauthorized access, which is dB. Conclusions. The scientific novelty of the results is as follows: for the first time, two methods of processing video images at the quantization stage have been proposed. The proposed technologies fulfill the assigned tasks of ensuring the required level of confidentiality at a given level of reliability. The method based on encryption tables has higher cryptographic stability than the method based on the key matrix, owing to its more complex mathematical apparatus, which in turn increases data processing time. To satisfy the data availability requirement, arithmetic coding of the informative blocks is proposed, which should be more efficient than code-table methods. Thus, the encryption-table method offers greater cryptographic stability, while the key-matrix method offers higher performance; at the same time, the use of arithmetic coding satisfies the availability requirement by reducing the initial volume.
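As a toy illustration of what protection at the quantization stage can look like (a generic key-seeded permutation of quantized DCT coefficients, not the authors' key-matrix or encryption-table method):

```python
# Toy sketch of quantization-stage protection in a JPEG-like chain: a
# key-seeded permutation of the quantized DCT coefficients. This is NOT the
# authors' key-matrix or encryption-table method, just the general idea.
import numpy as np
from scipy.fft import dctn, idctn

Q = np.array([[16, 11, 10, 16, 24, 40, 51, 61],       # standard JPEG
              [12, 12, 14, 19, 26, 58, 60, 55],       # luminance table
              [14, 13, 16, 24, 40, 57, 69, 56],
              [14, 17, 22, 29, 51, 87, 80, 62],
              [18, 22, 37, 56, 68, 109, 103, 77],
              [24, 35, 55, 64, 81, 104, 113, 92],
              [49, 64, 78, 87, 103, 121, 120, 101],
              [72, 92, 95, 98, 112, 100, 103, 99]])

def protect_block(block, key):
    """DCT -> quantize -> permute the 64 coefficients with a key-seeded RNG."""
    coeffs = dctn(block.astype(float) - 128.0, norm='ortho')
    q = np.round(coeffs / Q).astype(int).ravel()
    perm = np.random.default_rng(key).permutation(64)
    return q[perm]                   # the permuted stream goes to the entropy coder

def recover_block(q_perm, key):
    perm = np.random.default_rng(key).permutation(64)
    q = np.empty(64, dtype=int)
    q[perm] = q_perm                 # invert the shuffle, then dequantize
    return idctn(q.reshape(8, 8) * Q, norm='ortho') + 128.0

rng = np.random.default_rng(0)
block = rng.integers(0, 256, (8, 8))
enc = protect_block(block, key=2021)
print(np.abs(recover_block(enc, key=2021) - block).max())  # quantization error only
```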
APA, Harvard, Vancouver, ISO, and other styles
48

Mukherjee, Rashmi, Dhiraj Dhane Manohar, Dev Kumar Das, Arun Achar, Analava Mitra, and Chandan Chakraborty. "Automated Tissue Classification Framework for Reproducible Chronic Wound Assessment." BioMed Research International 2014 (2014): 1–9. http://dx.doi.org/10.1155/2014/851582.

Full text
Abstract:
The aim of this paper was to develop a computer-assisted tissue classification (granulation, necrotic, and slough) scheme for chronic wound (CW) evaluation using medical image processing and statistical machine learning techniques. The red-green-blue (RGB) wound images, captured with an ordinary digital camera, were first transformed into the HSI (hue, saturation, and intensity) color space, and the "S" component of the HSI color channels was selected as it provided higher contrast. Wound areas from 6 different types of CW were segmented from whole images using fuzzy divergence based thresholding by minimizing edge ambiguity. A set of color and textural features describing granulation, necrotic, and slough tissues in the segmented wound area were extracted using various mathematical techniques. Finally, statistical learning algorithms, namely Bayesian classification and the support vector machine (SVM), were trained and tested for wound tissue classification in different CW images. The performance of the wound area segmentation protocol was further validated against ground truth images labeled by clinical experts. It was observed that SVM with a 3rd-order polynomial kernel provided the highest accuracies, that is, 86.94%, 90.47%, and 75.53% for classifying granulation, slough, and necrotic tissues, respectively. The proposed automated tissue classification technique achieved the highest overall accuracy, that is, 87.61%, with the highest kappa statistic value (0.793).
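A sketch of the pipeline's overall shape, with stand-ins clearly labeled: HSV saturation approximates the paper's HSI "S" channel, Otsu thresholding replaces fuzzy divergence thresholding, and the six color features and training data are simplistic synthetic assumptions.

```python
# Pipeline-shaped sketch with labeled stand-ins: HSV saturation approximates
# the HSI "S" channel, Otsu thresholding replaces fuzzy divergence
# thresholding, and the features/data are simplistic synthetic assumptions.
import numpy as np
from skimage.color import rgb2hsv
from skimage.filters import threshold_otsu
from sklearn.svm import SVC

def wound_mask(rgb):
    s = rgb2hsv(rgb)[..., 1]              # saturation: the higher-contrast channel
    return s > threshold_otsu(s)

def region_features(rgb, mask):
    pix = rgb[mask]                       # pixels inside the segmented region
    return np.concatenate([pix.mean(axis=0), pix.std(axis=0)])

rng = np.random.default_rng(0)
X = rng.random((60, 6))                   # stand-in labeled feature vectors
y = rng.integers(0, 3, 60)                # 0 granulation, 1 slough, 2 necrotic
clf = SVC(kernel='poly', degree=3).fit(X, y)   # 3rd-order polynomial kernel

img = rng.random((32, 32, 3))
print(clf.predict([region_features(img, wound_mask(img))]))
```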
APA, Harvard, Vancouver, ISO, and other styles
49

Krampikowska, Aleksandra, and Grzegorz Świt. "Acoustic emission for diagnosing cable way steel support towers." MATEC Web of Conferences 284 (2019): 09002. http://dx.doi.org/10.1051/matecconf/201928409002.

Full text
Abstract:
The paper reports the results of a study on the possibility of using the acoustic emission method for diagnosing fatigue and corrosion damage in steel elements of cable way support towers. The sensitivity of the structure to the recorded destructive processes is assessed with a structural damage classification method that uses patterns created by statistical and mathematical processing of acoustic emission signals through image analysis and grouping methods.
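An illustrative sketch of the statistical-processing-plus-grouping step, where k-means stands in for the unspecified grouping method and the per-hit feature set is an assumption:

```python
# Illustrative AE feature extraction and grouping; k-means stands in for the
# unspecified grouping method and the per-hit feature set is an assumption.
import numpy as np
from sklearn.cluster import KMeans

def ae_features(hit, fs=1_000_000, threshold=0.1):
    """Per-hit features: peak amplitude, energy, duration, threshold crossings."""
    hit = np.asarray(hit, dtype=float)
    above = np.flatnonzero(np.abs(hit) > threshold)
    duration = (above[-1] - above[0]) / fs if above.size else 0.0
    counts = int(np.sum((hit[:-1] <= threshold) & (hit[1:] > threshold)))
    return [np.abs(hit).max(), float(np.sum(hit ** 2)), duration, counts]

rng = np.random.default_rng(0)
hits = [rng.normal(scale=s, size=2048) for s in rng.uniform(0.05, 0.5, 40)]
X = np.array([ae_features(h) for h in hits])
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print(np.bincount(labels))   # hits per damage-pattern group
```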
APA, Harvard, Vancouver, ISO, and other styles