To see the other types of publications on this topic, follow the link: Image processing Entropy (Information theory).

Journal articles on the topic 'Image processing Entropy (Information theory)'

Consult the top 50 journal articles for your research on the topic 'Image processing Entropy (Information theory).'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Jeon, Gwanggil. "Information Entropy Algorithms for Image, Video, and Signal Processing." Entropy 23, no. 8 (2021): 926. http://dx.doi.org/10.3390/e23080926.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Aljanabi, Mohammed Abdulameer, Zahir M. Hussain, and Song Feng Lu. "An Entropy-Histogram Approach for Image Similarity and Face Recognition." Mathematical Problems in Engineering 2018 (July 9, 2018): 1–18. http://dx.doi.org/10.1155/2018/9801308.

Full text
Abstract:
Image similarity and image recognition are modern and rapidly growing technologies because of their wide use in the field of digital image processing. It is possible to recognize the face image of a specific person by finding the similarity between images of the same person's face, and this is what we address in detail in this paper. We design two new measures for image similarity and image recognition simultaneously. The proposed measures are based mainly on a combination of information theory and the joint histogram. Information theory has a high capability to predict the relationship between image intensity values. The joint histogram is based mainly on selecting a set of local pixel features to construct a multidimensional histogram. The proposed approach incorporates the concepts of entropy and a modified 1D version of the 2D joint histogram of the two images under test. Two entropy measures were considered, Shannon and Renyi, giving rise to two joint histogram-based, information-theoretic similarity measures: SHS and RSM. The proposed methods have been tested against the powerful Zernike-moments approach with Euclidean and Minkowski distance metrics for image recognition, and against well-known statistical approaches for image similarity such as the structural similarity index measure (SSIM), the feature similarity index measure (FSIM), and the feature-based structural measure (FSM). A comparison with a recent information-theoretic measure (ISSIM) has also been considered. A measure of recognition confidence is introduced in this work based on the similarity distance between the best match and the second-best match in the face database during the face recognition process. Simulation results using the AT&T and FEI face databases show that the proposed approaches outperform existing image recognition methods in terms of recognition confidence, while results on the TID2008 and IVC image databases show that SHS and RSM outperform existing similarity methods in terms of similarity confidence.
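As a rough, minimal illustration of the entropy-plus-joint-histogram idea summarized in this abstract (not the authors' SHS/RSM measures), the following Python sketch computes Shannon and Renyi entropies of the normalized joint grey-level histogram of two images; the bin count, the alpha parameter, and the toy images are assumptions.

import numpy as np

def joint_histogram(img_a, img_b, bins=64):
    # 2D histogram of co-occurring grey levels, normalized to a joint distribution
    h, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(),
                             bins=bins, range=[[0, 255], [0, 255]])
    return h / h.sum()

def shannon_entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def renyi_entropy(p, alpha=2.0):
    p = p[p > 0]
    return np.log2(np.sum(p ** alpha)) / (1.0 - alpha)

# Toy usage: identical images give a concentrated joint histogram (low entropy),
# while unrelated images give a spread-out one (higher entropy).
rng = np.random.default_rng(0)
a = rng.integers(0, 256, (64, 64))
b = rng.integers(0, 256, (64, 64))
p_same, p_diff = joint_histogram(a, a), joint_histogram(a, b)
print(shannon_entropy(p_same), shannon_entropy(p_diff))
print(renyi_entropy(p_same), renyi_entropy(p_diff))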
APA, Harvard, Vancouver, ISO, and other styles
3

Mohammad-Djafari, Ali. "Entropy, Information Theory, Information Geometry and Bayesian Inference in Data, Signal and Image Processing and Inverse Problems." Entropy 17, no. 6 (2015): 3989–4027. http://dx.doi.org/10.3390/e17063989.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Miao, Yinghao, Jiaqi Wu, Yue Hou, Linbing Wang, Weixiao Yu, and Sudi Wang. "Study on Asphalt Pavement Surface Texture Degradation Using 3-D Image Processing Techniques and Entropy Theory." Entropy 21, no. 2 (2019): 208. http://dx.doi.org/10.3390/e21020208.

Full text
Abstract:
Surface texture is a very important factor affecting the anti-skid performance of pavements. In this paper, entropy theory is introduced to study the decay behavior of the three-dimensional macrotexture and microtexture of road surfaces in service based on the field test data collected over more than 2 years. Entropy is found to be feasible for evaluating the three-dimensional macrotexture and microtexture of an asphalt pavement surface. The complexity of the texture increases with the increase of entropy. Under the polishing action of the vehicle load, the entropy of the surface texture decreases gradually. The three-dimensional macrotexture decay characteristics of asphalt pavement surfaces are significantly different for different mixture designs. The macrotexture decay performance of asphalt pavement can be improved by designing appropriate mixtures. Compared with the traditional macrotexture parameter Mean Texture Depth (MTD) index, entropy contains more physical information and has a better correlation with the pavement anti-skid performance index. It has significant advantages in describing the relationship between macrotexture characteristics and the anti-skid performance of asphalt pavement.
APA, Harvard, Vancouver, ISO, and other styles
5

Moussa, Mourad, Hazar El Ouni, and Ali Douik. "Edge Detection Based on Fuzzy Logic and Hybrid Types of Shannon Entropy." Journal of Circuits, Systems and Computers 29, no. 14 (2020): 2050227. http://dx.doi.org/10.1142/s0218126620502278.

Full text
Abstract:
Edges are essentially the expression of local discontinuities in an image. Edge detection is one of the most commonly used operations in image processing and pattern recognition, since edges carry a wealth of internal information that supports a strong interpretation of an image. Robustness to noise and illumination while extracting appropriate features from an image is a great challenge in many computer vision applications. Indeed, this step helps reduce the amount of information to be handled and focuses attention on the objects present. Efficient and accurate edge detection increases the performance of many computer vision applications, including image segmentation, object-based image coding, and image retrieval. Contour detection locates sets of pixels where the intensity varies abruptly; such unstable image properties commonly point to important events occurring in the scene. In this paper, we first present a novel and robust method for edge detection based on joint and conditional entropy within the framework of Shannon theory; the second part of the paper is dedicated to deciding edge-pixel membership with an intelligent method based on fuzzy logic.
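The joint/conditional-entropy ingredient mentioned above can be sketched as follows (a simplified illustration, not the paper's fuzzy decision stage); using a one-pixel-shifted copy as the second variable, and the bin count, are assumptions for the example.

import numpy as np

def cond_entropy(x, y, bins=32):
    # H(Y|X) = H(X,Y) - H(X), estimated from the joint grey-level histogram
    pxy, _, _ = np.histogram2d(x, y, bins=bins, range=[[0, 255], [0, 255]])
    pxy = pxy / pxy.sum()
    px = pxy.sum(axis=1)
    h = lambda p: -np.sum(p[p > 0] * np.log2(p[p > 0]))
    return h(pxy) - h(px)

rng = np.random.default_rng(1)
smooth = np.tile(np.linspace(0, 255, 128), (128, 1))        # horizontal ramp
noisy = rng.integers(0, 256, (128, 128)).astype(float)      # pure noise
for img in (smooth, noisy):
    x, y = img[:, :-1].ravel(), img[:, 1:].ravel()           # pixel / right neighbour
    print(cond_entropy(x, y))   # small for the ramp, large for the noise image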
APA, Harvard, Vancouver, ISO, and other styles
6

Gao, Peichao, Hong Zhang, and Zhilin Li. "Boltzmann Entropy for the Spatial Information of Raster Data." Abstracts of the ICA 1 (July 15, 2019): 1. http://dx.doi.org/10.5194/ica-abs-1-86-2019.

Full text
Abstract:
Entropy is an important concept that originated in thermodynamics. It is the subject of the famous Second Law of Thermodynamics, which states that "the entropy of a closed system increases continuously and irrevocably toward a maximum" (Huettner 1976, 102) or "the disorder in the universe always increases" (Framer and Cook 2013, 21). Accordingly, it has been widely regarded as an ideal measure of disorder. Its computation can be theoretically performed according to the Boltzmann equation, which was proposed by the Austrian physicist Ludwig Boltzmann in 1872. In practice, however, the Boltzmann equation involves two problems that are difficult to solve, that is, the definition of the macrostate of a system and the determination of the number of possible microstates in the macrostate. As noted by the American sociologist Kenneth Bailey, "when the notion of entropy is extended beyond physics, researchers may not be certain how to specify and measure the macrostate/microstate relations" (Bailey 2009, 151). As a result, this entropy (also referred to as Boltzmann entropy and thermodynamic entropy) has remained largely at a conceptual level.

In practice, the widely used entropy is actually that proposed by the American mathematician, electrical engineer, and cryptographer Claude Elwood Shannon in 1948, hence the term Shannon entropy. Shannon entropy was proposed to quantify the statistical disorder of telegraph messages in the area of communications. The quantification result was interpreted as the information content of a telegraph message, hence also the term information entropy. This entropy has served as the cornerstone of information theory and was introduced to various fields including chemistry, biology, and geography. It has been widely utilized to quantify the information content of geographic data (or spatial data) in either a vector format (i.e., vector data) or a raster format (i.e., raster data). However, only the statistical information of spatial data can be quantified by using Shannon entropy. The spatial information is ignored by Shannon entropy; for example, a grey image and its corresponding error image share the same Shannon entropy.

Therefore, considerable efforts have been made to improve the suitability of Shannon entropy for spatial data, and a number of improved Shannon entropies have been put forward. Rather than further improving Shannon entropy, this study introduces a novel strategy, namely shifting back from Shannon entropy to Boltzmann entropy. There are two advantages of employing Boltzmann entropy. First, as previously mentioned, Boltzmann entropy is the ideal, standard measure of disorder or information. It is theoretically capable of quantifying not only the statistical information but also the spatial information of a data set. Second, Boltzmann entropy can serve as the bridge between spatial patterns and thermodynamic interpretations. In this sense, the Boltzmann entropy of spatial data may have wider applications. In this study, Boltzmann entropy is employed to quantify the spatial information of raster data, such as images, raster maps, digital elevation models, landscape mosaics, and landscape gradients. To this end, the macrostate of raster data is defined, and the number of all possible microstates in the macrostate is determined. To demonstrate the usefulness of Boltzmann entropy, it is applied to satellite remote sensing image processing, and a comparison is made between its performance and that of Shannon entropy.
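A minimal numerical check of the limitation noted above, namely that Shannon entropy of the grey-level histogram is blind to spatial arrangement (the test image is an arbitrary assumption, not from the paper):

import numpy as np

def histogram_entropy(img, bins=256):
    # Shannon entropy of the grey-level histogram (statistical information only)
    counts, _ = np.histogram(img, bins=bins, range=(0, 256))
    p = counts / counts.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

rng = np.random.default_rng(42)
structured = np.tile(np.arange(256, dtype=np.uint8), (256, 1))      # smooth gradient
shuffled = rng.permutation(structured.ravel()).reshape(256, 256)    # same pixels, no structure
print(histogram_entropy(structured), histogram_entropy(shuffled))   # identical values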
APA, Harvard, Vancouver, ISO, and other styles
7

ROSENBLATT, MARIEL, EDUARDO SERRANO, and ALEJANDRA FIGLIOLA. "AN ENTROPY BASED IN WAVELET LEADERS TO QUANTIFY THE LOCAL REGULARITY OF A SIGNAL AND ITS APPLICATION TO ANALIZE THE DOW JONES INDEX." International Journal of Wavelets, Multiresolution and Information Processing 10, no. 05 (2012): 1250048. http://dx.doi.org/10.1142/s0219691312500488.

Full text
Abstract:
Local regularity analysis is useful in many fields, such as financial analysis, fluid mechanics, PDE theory, and signal and image processing. Different quantifiers have been proposed to measure the local regularity of a function. In this paper we present a new quantifier of the local regularity of a signal: the pointwise wavelet leaders entropy. We define this new measure of regularity by combining the concept of entropy, coming from information theory and statistical mechanics, with the wavelet leaders coefficients. We also establish its inverse relation with one of the well-known regularity exponents, the pointwise Hölder exponent. Finally, we apply this methodology to the financial data series of the Dow Jones Industrial Average Index, recorded over the period 1928–2011, in order to compare the temporal evolution of the pointwise Hölder exponent and the pointwise wavelet leaders entropy. The analysis reveals that the temporal variation of these quantifiers reflects the evolution of the Dow Jones Industrial Average Index and identifies historical crisis events. We propose a new approach to analyze the local regularity variation of a signal and apply this procedure to a financial data series, attempting to contribute to the understanding of the dynamics of financial markets.
APA, Harvard, Vancouver, ISO, and other styles
8

Ramos, Glenda Quaresma, Robert Saraiva Matos, and Henrique Duarte da Fonseca Filho. "Advanced Microtexture Study of Anacardium occidentale L. Leaf Surface From the Amazon by Fractal Theory." Microscopy and Microanalysis 26, no. 5 (2020): 989–96. http://dx.doi.org/10.1017/s1431927620001798.

Full text
Abstract:
This work applies stereometric parameters and fractal theory to characterize the structural complexity of the 3D surface roughness of Anacardium occidentale L. leaf using atomic force microscopy (AFM) measurements. Surface roughness was studied by AFM in tapping mode, in air, on square areas of 6,400 and 10,000 μm2. The stereometric analyses using MountainsMap Premium and WSXM software provided detailed information on the 3D surface topography of the samples. These data showed that the morphology of the abaxial and adaxial side of the cashew leaf is different, which was also observed in relation to their microtextures. Fractal analysis showed that the adaxial and abaxial sides have strong microtexture homogeneity, but the adaxial side presented higher surface entropy. These results show that image processing associated with fractal theory can be an indispensable tool for identifying plant species by their leaves because this species has singularities on each side of the leaf.
APA, Harvard, Vancouver, ISO, and other styles
9

Lestienne, Rémy. "A Bayesian and Emergent View of the Brain." Kronoscope 14, no. 2 (2014): 180–93. http://dx.doi.org/10.1163/15685241-12341304.

Full text
Abstract:
Very simple psychophysiological visual tests suggest that the brain, instead of processing visual information in a passive way as was classically thought, in fact actively evaluates probabilities of the causes of visual data and continuously proposes to the mind the ones that are more likely to account for sensory inputs. In the past few years, Karl Friston, a researcher from University College of London, and his group have proposed a mechanism by which the brain successfully performs with great precision the inversion of probability densities necessary for this Bayesian computation. This mechanism would account for several anatomic structures of the cortex, explaining in particular the abundance of backwards interneuronal connections. The proposed picture of brain functioning is that of a dynamical process, far from the static image of a photographic plate. The result is an emergence, for the final picture of the world is a coherent vision where the more likely causes are proposed in a coherent manner. Although the theory accounts for the automatic, infraconscious side of the processing of information in the brain, it is in good accord with Roger Sperry’s theory of consciousness as a theory of strong emergence. It is too soon to evaluate the solidity of the law of “minimization of free energy” proposed by Friston not only as ruling the automatisms of the brain but as a general law of biology. This law is similar (although in contradistinction) to the second law of thermodynamics of increase of entropy (insofar as it explains the tendency of living beings for self-organization), and it is already looked at by some neuroscientists as a big step forward in deciphering the mysteries of the brain.
APA, Harvard, Vancouver, ISO, and other styles
10

Sbert, Mateu, Jordi Poch, Shuning Chen, and Víctor Elvira. "Stochastic Order and Generalized Weighted Mean Invariance." Entropy 23, no. 6 (2021): 662. http://dx.doi.org/10.3390/e23060662.

Full text
Abstract:
In this paper, we present order invariance theoretical results for weighted quasi-arithmetic means of a monotonic series of numbers. The quasi-arithmetic mean, or Kolmogorov–Nagumo mean, generalizes the classical mean and appears in many disciplines, from information theory to physics, from economics to traffic flow. Stochastic orders are defined on weights (or equivalently, discrete probability distributions). They were introduced to study risk in economics and decision theory, and recently have found utility in Monte Carlo techniques and in image processing. We show in this paper that, if two distributions of weights are ordered under first stochastic order, then for any monotonic series of numbers their weighted quasi-arithmetic means share the same order. This means, for instance, that the arithmetic and harmonic means for two different distributions of weights always have to be aligned if the weights are stochastically ordered, that is, either both means increase or both decrease. We explore the invariance properties when convex (concave) functions define both the quasi-arithmetic mean and the series of numbers, show its relationship with increasing concave order and increasing convex order, and observe the important role played by a newly defined mirror property of stochastic orders. We also give some applications to entropy and cross-entropy and present an example of the multiple importance sampling Monte Carlo technique that illustrates the usefulness and transversality of our approach. Invariance theorems are useful when a system is represented by a set of quasi-arithmetic means and we want to change the distribution of weights so that all means evolve in the same direction.
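A small sketch of the weighted quasi-arithmetic (Kolmogorov–Nagumo) mean discussed in this abstract, with the arithmetic and harmonic means as special cases; the specific weights and values are made up for illustration, with the second weight vector dominating the first in first stochastic order.

import numpy as np

def quasi_arithmetic_mean(w, x, phi, phi_inv):
    # M_phi(w, x) = phi^{-1}( sum_i w_i * phi(x_i) ), with weights summing to 1
    w = np.asarray(w, dtype=float)
    x = np.asarray(x, dtype=float)
    return phi_inv(np.sum(w * phi(x)))

x = np.array([1.0, 2.0, 4.0, 8.0])        # a monotonic series of numbers
w1 = np.array([0.4, 0.3, 0.2, 0.1])       # more weight on small values
w2 = np.array([0.1, 0.2, 0.3, 0.4])       # stochastically dominates w1

identity = lambda t: t
reciprocal = lambda t: 1.0 / t
for w in (w1, w2):
    arith = quasi_arithmetic_mean(w, x, identity, identity)
    harm = quasi_arithmetic_mean(w, x, reciprocal, reciprocal)
    print(arith, harm)   # both means increase together when moving from w1 to w2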
APA, Harvard, Vancouver, ISO, and other styles
11

Wu, Chengmao, and Zhuo Cao. "Noise distance driven fuzzy clustering based on adaptive weighted local information and entropy-like divergence kernel for robust image segmentation." Digital Signal Processing 111 (April 2021): 102963. http://dx.doi.org/10.1016/j.dsp.2021.102963.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Tsallis, Constantino. "Beyond Boltzmann–Gibbs–Shannon in Physics and Elsewhere." Entropy 21, no. 7 (2019): 696. http://dx.doi.org/10.3390/e21070696.

Full text
Abstract:
The pillars of contemporary theoretical physics are classical mechanics, Maxwell electromagnetism, relativity, quantum mechanics, and Boltzmann–Gibbs (BG) statistical mechanics, including its connection with thermodynamics. The BG theory describes amazingly well the thermal equilibrium of a plethora of so-called simple systems. However, BG statistical mechanics and its basic additive entropy S_BG started, in recent decades, to exhibit failures or inadequacies in an increasing number of complex systems. The emergence of such intriguing features became apparent in quantum systems as well, such as black holes and other area-law-like scenarios for the von Neumann entropy. In a different arena, the efficiency of the Shannon entropy—as the BG functional is currently called in engineering and communication theory—started to be perceived as not necessarily optimal in the processing of images (e.g., medical ones) and time series (e.g., economic ones). Such is the case in the presence of generic long-range space correlations, long memory, sub-exponential sensitivity to the initial conditions (hence vanishing largest Lyapunov exponents), and similar features. Finally, we witnessed, during the last two decades, an explosion of asymptotically scale-free complex networks. This wide range of important systems eventually gave support, since 1988, to the generalization of the BG theory. Nonadditive entropies generalizing the BG one and their consequences have been introduced and intensively studied worldwide. The present review focuses on these concepts and their predictions, verifications, and applications in physics and elsewhere. Some selected examples (in quantum information, high- and low-energy physics, low-dimensional nonlinear dynamical systems, earthquakes, turbulence, long-range interacting systems, and scale-free networks) illustrate successful applications. The grounding thermodynamical framework is briefly described as well.
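For reference, a brief numerical sketch of the nonadditive (Tsallis) entropy S_q that generalizes the Boltzmann–Gibbs–Shannon functional described above; the test distribution and the values of q are arbitrary.

import numpy as np

def tsallis_entropy(p, q):
    # S_q = (1 - sum_i p_i^q) / (q - 1); recovers the BG/Shannon form as q -> 1
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    if abs(q - 1.0) < 1e-12:
        return -np.sum(p * np.log(p))        # Shannon limit (natural log, k = 1)
    return (1.0 - np.sum(p ** q)) / (q - 1.0)

p = np.array([0.5, 0.25, 0.125, 0.125])
for q in (0.5, 0.999, 1.0, 2.0):
    print(q, tsallis_entropy(p, q))           # q = 0.999 is close to the Shannon value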
APA, Harvard, Vancouver, ISO, and other styles
13

Feixas, Miquel, Anton Bardera, Jaume Rigau, Qing Xu, and Mateu Sbert. "Information Theory Tools for Image Processing." Synthesis Lectures on Computer Graphics and Animation 6, no. 1 (2014): 1–164. http://dx.doi.org/10.2200/s00560ed1v01y201312cgr015.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Gibson, Jerry, and Hoontaek Oh. "Mutual Information Loss in Pyramidal Image Processing." Information 11, no. 6 (2020): 322. http://dx.doi.org/10.3390/info11060322.

Full text
Abstract:
Gaussian and Laplacian pyramids have long been important for image analysis and compression. More recently, multiresolution pyramids have become an important component of machine learning and deep learning for image analysis and image recognition. Constructing Gaussian and Laplacian pyramids consists of a series of filtering, decimation, and differencing operations, and the quality indicator is usually mean squared reconstruction error in comparison to the original image. We present a new characterization of the information loss in a Gaussian pyramid in terms of the change in mutual information. More specifically, we show that one half the log ratio of entropy powers between two stages in a Gaussian pyramid is equal to the difference in mutual information between these two stages. We show that this relationship holds for a wide variety of probability distributions and present several examples of analyzing Gaussian and Laplacian pyramids for different images.
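Using standard information-theoretic notation (an assumption for presentation, not the paper's own symbols), the relationship stated in this abstract can be written as

\[ Q(Z) = \frac{1}{2\pi e}\, e^{2h(Z)}, \qquad I(X;Y_j) - I(X;Y_{j+1}) = \frac{1}{2}\log\frac{Q(Y_j)}{Q(Y_{j+1})}, \]

where h(·) is the differential entropy, Q(·) the entropy power, X the original image signal, and Y_j, Y_{j+1} the signals at two successive pyramid stages.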
APA, Harvard, Vancouver, ISO, and other styles
15

Trebbia, P. "Application of Cross Entropy and Factorial Analysis to Image Processing." Proceedings, annual meeting, Electron Microscopy Society of America 48, no. 1 (1990): 470–71. http://dx.doi.org/10.1017/s0424820100181105.

Full text
Abstract:
Most of the time, the problem that image analysis has to face is the following: from a given data set obtained with independent measurements, how can one extract the signal (the "useful information"), which is mixed with a background (coming from the object itself or from the detector) and statistical noise? If pre-knowledge of the specimen (or of the experimental procedure) gives some reasonable hypothesis about an empirical mathematical description of the background and of the noise, then one can try some "fitting" method (maximum likelihood, for example) in order to solve the problem. But if no a-priori model is available, one is reduced to trying to understand the "meaning" of the images through an analysis of the information content of the data set. Two main statistical approaches can be of some help in that process. First, one can measure from the histogram of the data set some characteristic statistical parameters (mean and variance) and compare it with a Gaussian histogram of the same mean and variance which would be obtained under the same experimental conditions (the same number of detected electrons, for example) in a pure random process, that is, with a "neutral" specimen showing absolutely no contrast. From these two histograms, one may then estimate the relative cross-entropy, that is, the information value added by the real presence of a given object in the specimen chamber. Moreover, one can make a 2D plot of this information in order to localize the pixels in the data set whose intensities significantly differ from a pure random process. The second possibility is to make a variance analysis of the data set: if we have pre-knowledge that some of the images in this set do not contain the "contrast information" we are looking for, then there must be some quantitative difference between these images and the other ones. Among the various algorithms available from multivariate statistics, factorial analysis has proved to be suited to this kind of analysis.
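A compact sketch of the first approach described above: compare the data histogram with a Gaussian reference of the same mean and variance via relative (cross) entropy. The bin count and the synthetic count data are assumptions, not taken from the paper.

import numpy as np
from scipy.stats import norm

def relative_entropy(p, q):
    # Kullback-Leibler divergence D(p || q) in bits, over matching histogram bins
    mask = (p > 0) & (q > 0)
    return np.sum(p[mask] * np.log2(p[mask] / q[mask]))

rng = np.random.default_rng(3)
data = rng.poisson(lam=20, size=10000).astype(float)    # e.g. detected counts per pixel
edges = np.histogram_bin_edges(data, bins=40)
p, _ = np.histogram(data, bins=edges)
p = p / p.sum()

# Gaussian reference with the same mean and variance, integrated over the same bins
cdf = norm.cdf(edges, loc=data.mean(), scale=data.std())
q = np.diff(cdf)
q = q / q.sum()

print(relative_entropy(p, q))   # extra information relative to the "neutral" Gaussian model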
APA, Harvard, Vancouver, ISO, and other styles
16

Luo, Kaiqing, Manling Lin, Pengcheng Wang, Siwei Zhou, Dan Yin, and Haolan Zhang. "Improved ORB-SLAM2 Algorithm Based on Information Entropy and Image Sharpening Adjustment." Mathematical Problems in Engineering 2020 (September 23, 2020): 1–13. http://dx.doi.org/10.1155/2020/4724310.

Full text
Abstract:
Simultaneous Localization and Mapping (SLAM) has become a research hotspot in the field of robotics in recent years. However, most visual SLAM systems are based on static assumptions that ignore motion effects. If image sequences are not rich in texture information or the camera rotates through a large angle, the SLAM system will fail to localize and map. To solve these problems, this paper proposes an improved ORB-SLAM2 algorithm based on information entropy and sharpening processing. The information entropy of each segmented image block is calculated, an entropy threshold is determined by an adaptive image-entropy-threshold algorithm, and the image blocks whose entropy is below the threshold are then sharpened. The experimental results show that, compared with the ORB-SLAM2 system, the relative trajectory error decreases by 36.1% and the absolute trajectory error decreases by 45.1%. Although these indicators are greatly improved, the processing time is not greatly increased. To some extent, the algorithm solves the problem of localization and mapping failure caused by large-angle camera rotation and insufficient image texture information.
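A simplified sketch of the block-entropy idea described above, using a fixed threshold and a basic unsharp mask instead of the paper's adaptive threshold and ORB-SLAM2 pipeline; all parameter values are assumptions.

import numpy as np
from scipy.ndimage import gaussian_filter

def block_entropy(block, bins=32):
    counts, _ = np.histogram(block, bins=bins, range=(0, 256))
    p = counts / counts.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def sharpen_low_entropy_blocks(img, block=32, threshold=3.0, amount=1.5):
    out = img.astype(float).copy()
    for r in range(0, img.shape[0], block):
        for c in range(0, img.shape[1], block):
            tile = out[r:r + block, c:c + block]
            if block_entropy(tile) < threshold:
                blurred = gaussian_filter(tile, sigma=1.0)
                tile[:] = np.clip(tile + amount * (tile - blurred), 0, 255)  # unsharp mask
    return out.astype(np.uint8)

rng = np.random.default_rng(7)
img = rng.integers(90, 110, (128, 128)).astype(np.uint8)   # low-texture test image
sharpened = sharpen_low_entropy_blocks(img)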
APA, Harvard, Vancouver, ISO, and other styles
17

Jeon, Gwanggil, and Abdellah Chehri. "Entropy-Based Algorithms for Signal Processing." Entropy 22, no. 6 (2020): 621. http://dx.doi.org/10.3390/e22060621.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Roy Frieden, B. "Information and estimation in image processing." Proceedings, annual meeting, Electron Microscopy Society of America 45 (August 1987): 14–17. http://dx.doi.org/10.1017/s0424820100125142.

Full text
Abstract:
Despite the skill and determination of electro-optical system designers, the images acquired using their best designs often suffer from blur and noise. The aim of an “image enhancer” such as myself is to improve these poor images, usually by digital means, such that they better resemble the true, “optical object,” input to the system. This problem is notoriously “ill-posed,” i.e. any direct approach at inversion of the image data suffers strongly from the presence of even a small amount of noise in the data. In fact, the fluctuations engendered in neighboring output values tend to be strongly negative-correlated, so that the output spatially oscillates up and down, with large amplitude, about the true object. What can be done about this situation? As we shall see, various concepts taken from statistical communication theory have proven to be of real use in attacking this problem. We offer below a brief summary of these concepts.
APA, Harvard, Vancouver, ISO, and other styles
19

Lu, Tao, Jiaming Wang, Huabing Zhou, Junjun Jiang, Jiayi Ma, and Zhongyuan Wang. "Rectangular-Normalized Superpixel Entropy Index for Image Quality Assessment." Entropy 20, no. 12 (2018): 947. http://dx.doi.org/10.3390/e20120947.

Full text
Abstract:
Image quality assessment (IQA) is a fundamental problem in image processing that aims to measure the objective quality of a distorted image. Traditional full-reference (FR) IQA methods use fixed-size sliding windows to obtain structure information but ignore variable spatial configuration information. In order to better measure multi-scale objects, we propose a novel IQA method, named RSEI, based on the perspective of the variable receptive field and information entropy. First, we find that a consistent relationship exists between information fidelity and individual human visual perception. Thus, we mimic the human visual system (HVS) to semantically divide the image into multiple patches via rectangular-normalized superpixel segmentation. Then the weight of each image patch is adaptively calculated from its information volume. We verify the effectiveness of RSEI by applying it to data from the TID2008 database and to denoising algorithms. Experiments show that RSEI outperforms some state-of-the-art IQA algorithms, including visual information fidelity (VIF) and weighted average deep image quality measure (WaDIQaM).
APA, Harvard, Vancouver, ISO, and other styles
20

HAO, Bao Ming, Hai Feng Xu, and Huan Yin Guo. "Fabric Defect Detection Based on Cross-Entropy." Advanced Materials Research 760-762 (September 2013): 1233–36. http://dx.doi.org/10.4028/www.scientific.net/amr.760-762.1233.

Full text
Abstract:
The core of fabric defect detection is the collection and processing of fabric images. A scheme for fabric defect detection based on cross-entropy is proposed in this paper. The cross-entropy value reflects, on average, the information difference between the template image and the real-time image, so the cross-entropy criterion can be used for defect detection and identification. Results confirm the usefulness of this scheme for fabric defect detection.
APA, Harvard, Vancouver, ISO, and other styles
21

Beaudry, Normand J., and Renato Renner. "An intuitive proof of the data processing inequality." Quantum Information and Computation 12, no. 5&6 (2012): 432–41. http://dx.doi.org/10.26421/qic12.5-6-4.

Full text
Abstract:
The data processing inequality (DPI) is a fundamental feature of information theory. Informally it states that you cannot increase the information content of a quantum system by acting on it with a local physical operation. When the smooth min-entropy is used as the relevant information measure, then the DPI follows immediately from the definition of the entropy. The DPI for the von Neumann entropy is then obtained by specializing the DPI for the smooth min-entropy by using the quantum asymptotic equipartition property (QAEP). We provide a short proof of the QAEP and therefore obtain a self-contained proof of the DPI for the von Neumann entropy.
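For context, the classical (non-quantum) form of the data processing inequality reads, for any Markov chain \(X \to Y \to Z\) (i.e., Z obtained from Y by local processing),

\[ I(X;Z) \le I(X;Y), \]

so no processing of Y can increase the information it carries about X; the paper's contribution concerns the analogous statement for quantum states, the smooth min-entropy, and the von Neumann entropy.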
APA, Harvard, Vancouver, ISO, and other styles
22

Chan, D., J. Gambini, and A. C. Frery. "SPECKLE NOISE REDUCTION IN SAR IMAGES USING INFORMATION THEORY." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-3/W12-2020 (November 5, 2020): 141–46. http://dx.doi.org/10.5194/isprs-archives-xlii-3-w12-2020-141-2020.

Full text
Abstract:
In this work, a new nonlocal means filter for single-look speckled data using the Shannon and Rényi entropies is proposed. The measure of similarity between a central window and patches of the image is based on a statistical test for comparing if two samples have the same entropy and hence have the same distribution. The results are encouraging, as the filtered image has better signal-to-noise ratio, it preserves the mean, and the edges are not severely blurred.
APA, Harvard, Vancouver, ISO, and other styles
23

Liu, Zhi-Qiang. "Bayesian Paradigms in Image Processing." International Journal of Pattern Recognition and Artificial Intelligence 11, no. 01 (1997): 3–33. http://dx.doi.org/10.1142/s0218001497000020.

Full text
Abstract:
A large number of image and spatial information processing problems involve the estimation of intrinsic image information from observed images, for instance, image restoration, image registration, image partition, depth estimation, shape reconstruction, and motion estimation. These are inverse problems and generally ill-posed. Such estimation problems can be readily formulated by Bayesian models which infer the desired image information from the measured data. Bayesian paradigms have played a very important role in spatial data analysis for over three decades and have found many successful applications. In this paper, we discuss several aspects of Bayesian paradigms: uncertainty present in the observed image, prior distribution modeling, Bayesian-based estimation techniques in image processing, particularly the maximum a posteriori estimator and the Kalman filtering theory, robustness, and Markov random fields and applications.
APA, Harvard, Vancouver, ISO, and other styles
24

Ge, Sen, and Da Gui Huang. "Part Recognition Based on Maximum Mutual Information." Key Engineering Materials 407-408 (February 2009): 234–38. http://dx.doi.org/10.4028/www.scientific.net/kem.407-408.234.

Full text
Abstract:
A new approach to the problem of part recognition based on maximum mutual information is proposed. The method applies entropy to measure image features, combined with color information and local shape information, and uses mutual information as a new matching criterion between images for image recognition. This method solves the problem that histogram algorithms cannot represent spatial information. The method is not only translation invariant but also avoids image segmentation, which may lead to complex calculation, so it can be realized easily. The results show that the proposed approach is accurate, stable, and reliable in the processing of machine part image recognition.
APA, Harvard, Vancouver, ISO, and other styles
25

Boutekkouk, Fateh. "Digital Color Image Processing Using Intuitionistic Fuzzy Hypergraphs." International Journal of Computer Vision and Image Processing 11, no. 3 (2021): 21–40. http://dx.doi.org/10.4018/ijcvip.2021070102.

Full text
Abstract:
Hypergraphs are considered a useful mathematical tool for digital image processing and analysis since they can represent digital images as complex relationships between pixels or blocks of pixels. The notion of hypergraphs has been extended in fuzzy theory, leading to the concept of fuzzy hypergraphs, and then in intuitionistic fuzzy theory, leading to the concept of intuitionistic fuzzy hypergraphs (IFHG). The latter is very suitable for modeling digital images with uncertain or imprecise knowledge. This paper deals with denoising, segmentation, and edge detection in a color image initially represented in RGB space using intuitionistic fuzzy hypergraphs. First, the RGB image is transformed to HLS space, resulting in three separate components. Then each component is intuitionistically fuzzified based on an entropy measure, from which an intuitionistic fuzzy hypergraph is generated automatically. The generated hypergraphs are then used for denoising, segmentation, and edge detection.
APA, Harvard, Vancouver, ISO, and other styles
26

Urban, Jan, Renata Rychtáriková, Petr Macháček, Dalibor Štys, Pavla Urbanová, and Petr Císař. "OPTIMIZATION OF COMPUTATIONAL BURDEN OF THE POINT INFORMATION GAIN." Acta Polytechnica 59, no. 6 (2019): 593–600. http://dx.doi.org/10.14311/ap.2019.59.0593.

Full text
Abstract:
We developed a method of image preprocessing based on information entropy, namely, on the information contribution made by each individual pixel to the whole image or to a part of it (the Point Information Gain; PIG). The idea behind the PIG calculation is that the image background remains informatively poor, whereas objects carry relevant information. In one calculation, this method preserves details, highlights edges, and decreases random noise. This paper describes the optimization and implementation of the PIG calculation on graphics processing units (GPUs) to overcome its high computational burden.
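One common formalization of the pixel-wise contribution described above is sketched below (a simplified, whole-image variant; the cited works also use Rényi generalizations and local neighbourhoods, which are omitted, and the toy image is an assumption).

import numpy as np

def shannon(counts):
    p = counts[counts > 0] / counts.sum()
    return -np.sum(p * np.log2(p))

def point_information_gain(img, bins=256):
    # PIG of a pixel: change in histogram entropy when that single pixel is removed
    img = np.asarray(img)
    counts, _ = np.histogram(img, bins=bins, range=(0, bins))
    h_whole = shannon(counts)
    pig = np.empty(img.shape)
    for value in range(bins):
        if counts[value] == 0:
            continue                      # no pixel carries this grey level
        reduced = counts.copy()
        reduced[value] -= 1               # histogram without one pixel of this value
        pig[img == value] = h_whole - shannon(reduced)
    return pig

rng = np.random.default_rng(5)
img = rng.integers(0, 8, (64, 64))        # tiny grey-level range for speed
pig = point_information_gain(img, bins=8)
print(float(pig.min()), float(pig.max()))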
APA, Harvard, Vancouver, ISO, and other styles
27

Chahrour, Nour, William Castaings, and Eric Barthélemy. "Image-based river discharge estimation by merging heterogeneous data with information entropy theory." Flow Measurement and Instrumentation 81 (October 2021): 102039. http://dx.doi.org/10.1016/j.flowmeasinst.2021.102039.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

., Vishali, Harpal Singh, and Priyanka Kaushal. "Relevance of Matrices and Image Processing." CGC International Journal of Contemporary Technology and Research 2, no. 2 (2020): 126–30. http://dx.doi.org/10.46860/cgcijctr.2020.06.26.126.

Full text
Abstract:
An image is formed from small units of information, namely pixels, which are stored in the form of an array or a matrix. An image is converted into digital form and operations are carried out on it so that an improved image can be obtained or specific information can be recovered from it; this procedure is known as image processing. Different tasks in image processing involve different methods and operations. Matrix theory is of great importance to the operations applied in image processing. This manuscript focuses on the relationship between matrix theory and image processing in various applications of image processing. In order to understand higher-dimensional matrices in image processing, some applications are considered to give good insight.
APA, Harvard, Vancouver, ISO, and other styles
29

Pedraza, Anibal, Oscar Deniz, and Gloria Bueno. "Approaching Adversarial Example Classification with Chaos Theory." Entropy 22, no. 11 (2020): 1201. http://dx.doi.org/10.3390/e22111201.

Full text
Abstract:
Adversarial examples are one of the most intriguing topics in modern deep learning. Imperceptible perturbations to the input can fool robust models. In relation to this problem, attack and defense methods are being developed almost on a daily basis. In parallel, efforts are being made to simply point out when an input image is an adversarial example. This can help prevent potential issues, as the failure cases are easily recognizable by humans. The proposal in this work is to study how chaos theory methods can help distinguish adversarial examples from regular images. Our work is based on the assumption that deep networks behave as chaotic systems, and adversarial examples are the main manifestation of this (in the sense that a slight input variation produces a totally different output). In our experiments, we show that the Lyapunov exponents (an established measure of chaoticity), which have recently been proposed for classification of adversarial examples, are not robust to image processing transformations that alter image entropy. Furthermore, we show that entropy can complement Lyapunov exponents in such a way that the discriminating power is significantly enhanced. The proposed method achieves 65% to 100% accuracy in detecting adversarial examples over a wide range of attacks (for example CW, PGD, Spatial, and HopSkip) for the MNIST dataset, with similar results when entropy-changing image processing methods (such as equalization, speckle, and Gaussian noise) are applied. This is also corroborated with two other datasets, Fashion-MNIST and CIFAR 19. These results indicate that classifiers can enhance their robustness against the adversarial phenomenon when applied in a wide variety of conditions that potentially match real-world cases and also other threatening scenarios.
APA, Harvard, Vancouver, ISO, and other styles
30

Jang, Wonyoung, and Sun-Young Lee. "Partial image encryption using format-preserving encryption in image processing systems for Internet of things environment." International Journal of Distributed Sensor Networks 16, no. 3 (2020): 155014772091477. http://dx.doi.org/10.1177/1550147720914779.

Full text
Abstract:
Concomitant with advances in technology, the number of systems and devices that utilize image data has increased. Nowadays, image processing devices incorporated into systems, such as the Internet of things, drones, and closed-circuit television, can collect images of people and automatically share them with networks. Consequently, the threat of invasion of privacy by image leakage has increased exponentially. However, traditional image-security methods, such as privacy masking and image encryption, have several disadvantages, including storage space wastage associated with data padding, inability to decode, inability to recognize images without decoding, and exposure of private information after decryption. This article proposes a method for partially encrypting private information in images using FF1 and FF3-1. The proposed method encrypts private information without increasing the data size, solving the problem of wasted storage space. Furthermore, using the proposed method, specific sections of encrypted images can be decrypted and recognized before decryption of the entire information, which addresses the problems besetting traditional privacy masking and image encryption methods. The results of histogram analysis, correlation analysis, number of pixels change rate, unified average change intensity, information entropy analysis, and NIST SP 800-22 verify the security and overall efficacy of the proposed method.
APA, Harvard, Vancouver, ISO, and other styles
31

PAL, SANKAR K. "FUZZY IMAGE PROCESSING AND RECOGNITION: UNCERTAINTY HANDLING AND APPLICATIONS." International Journal of Image and Graphics 01, no. 02 (2001): 169–95. http://dx.doi.org/10.1142/s0219467801000128.

Full text
Abstract:
Image processing and analysis in a fuzzy set theoretic framework is addressed. The various uncertainties involved in these problems and the relevance of fuzzy set theory in handling them are explained. Different image ambiguity measures based on fuzzy entropy and the fuzzy geometry of image subsets are mentioned. A discussion is made on the flexibility in choosing membership functions. Illustrations of commonly used fuzzy image processing operations such as enhancement, edge detection, segmentation, skeleton extraction, and feature extraction are then provided, along with their significance and characteristics. Their applications to some real-life problems, e.g., motion frame analysis, remotely sensed image analysis, and modeling face images, are finally described. An extensive bibliography is also provided.
APA, Harvard, Vancouver, ISO, and other styles
32

Yang, Bin, Dingyi Gan, Yongchuan Tang, and Yan Lei. "Incomplete Information Management Using an Improved Belief Entropy in Dempster-Shafer Evidence Theory." Entropy 22, no. 9 (2020): 993. http://dx.doi.org/10.3390/e22090993.

Full text
Abstract:
Quantifying uncertainty is a hot topic for uncertain information processing in the framework of evidence theory, but there is limited research on belief entropy under the open-world assumption. In this paper, an uncertainty measurement method based on Deng entropy, named Open Deng Entropy (ODE), is proposed. Under the open-world assumption, the frame of discernment (FOD) may be incomplete, and ODE can reasonably and effectively quantify uncertain incomplete information. On the basis of Deng entropy, ODE adopts the mass value of the empty set, the cardinality of the FOD, and the natural constant e to construct a new uncertainty factor for modeling the uncertainty in the FOD. A numerical example shows that, under the closed-world assumption, ODE degenerates to Deng entropy. An ODE-based information fusion method for sensor data fusion in uncertain environments is also proposed. By applying it to a sensor data fusion experiment, the rationality and effectiveness of ODE and its application in uncertain information fusion are verified.
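For background, a small sketch of the Deng entropy that the proposed ODE builds on (the open-world correction involving the empty-set mass and the FOD cardinality is described in the abstract but not reproduced here; the example mass assignment is made up).

import numpy as np

def deng_entropy(bba):
    # bba: dict mapping focal elements (frozensets) to mass values summing to 1
    # E_d = -sum_A m(A) * log2( m(A) / (2^|A| - 1) )
    total = 0.0
    for focal, mass in bba.items():
        if mass > 0:
            total -= mass * np.log2(mass / (2 ** len(focal) - 1))
    return total

bba = {frozenset({"a"}): 0.4,
       frozenset({"b"}): 0.2,
       frozenset({"a", "b", "c"}): 0.4}   # mass on a multi-element set raises the entropy
print(deng_entropy(bba))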
APA, Harvard, Vancouver, ISO, and other styles
33

Ahrari, A. H., M. Kiavarz, M. Hasanlou, and M. Marofi. "THERMAL AND VISIBLE SATELLITE IMAGE FUSION USING WAVELET IN REMOTE SENSING AND SATELLITE IMAGE PROCESSING." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-4/W4 (September 26, 2017): 11–15. http://dx.doi.org/10.5194/isprs-archives-xlii-4-w4-11-2017.

Full text
Abstract:
The multimodal remote sensing approach is based on merging different data from different portions of the electromagnetic spectrum, which improves accuracy in satellite image processing and interpretation. Visible and thermal infrared bands independently contain valuable spatial and spectral information. Visible bands provide rich spatial information, while thermal bands provide radiometric and spectral information different from the visible ones. However, low spatial resolution is the most important limitation of thermal infrared bands. Using satellite image fusion, it is possible to merge them into a single thermal image that contains high spectral and spatial information at the same time. The aim of this study is a quantitative and qualitative performance assessment of thermal and visible image fusion with the wavelet transform and different filters. In this research, the wavelet (Haar) algorithm and different decomposition filters (mean, linear, ma, min, and rand) were applied to the thermal and panchromatic bands of the Landsat 8 satellite as a shortwave and longwave fusion method. Finally, quality assessment was carried out with quantitative and qualitative approaches. Quantitative parameters such as entropy, standard deviation, cross correlation, Q factor, and mutual information were used. For thermal and visible image fusion accuracy assessment, all parameters (quantitative and qualitative) must be analysed with respect to each other. Among all the relevant statistical factors, correlation gave the most meaningful result and the closest agreement with the qualitative assessment. Results showed that the mean and linear filters produce better fused images than the other filters in the Haar algorithm. The linear and mean filters have the same performance, and there is no difference between their qualitative and quantitative results.
APA, Harvard, Vancouver, ISO, and other styles
34

Shanker, N. R., and S. S. Ramakrishnan. "Enhancement of Multispectral Ikonos Satellite Image Using Quantum Information Processing." Fundamenta Informaticae 101, no. 4 (2010): 305–20. http://dx.doi.org/10.3233/fi-2010-290.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Jiao, Wentan, Wenqing Chen, and Jing Zhang. "An Improved Cuckoo Search Algorithm for Multithreshold Image Segmentation." Security and Communication Networks 2021 (August 2, 2021): 1–10. http://dx.doi.org/10.1155/2021/6036410.

Full text
Abstract:
Image segmentation is an important part of image processing. To address the disadvantages of multithreshold image segmentation, such as long running time and poor quality, an improved cuckoo search (ICS) strategy for multithreshold image segmentation is proposed. First, the image segmentation model based on the maximum entropy threshold is described. Second, the cuckoo algorithm is improved by using a chaotically initialized population to improve the diversity of solutions, optimizing the step-size factor to improve the possibility of obtaining the optimal solution, and using probability to reduce the complexity of the algorithm. Finally, the maximum entropy threshold function in image segmentation is used as the individual fitness function of the cuckoo search algorithm. The simulation experiments show that the algorithm achieves a good segmentation effect under four different thresholding conditions.
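The maximum-entropy threshold criterion mentioned above can be sketched as follows, using an exhaustive search over grey levels in place of the paper's improved cuckoo search (a Kapur-style single-threshold criterion; the synthetic bimodal image and all parameters are assumptions).

import numpy as np

def kapur_threshold(img, bins=256):
    # Choose t maximizing H(background) + H(foreground) of the normalized histogram
    hist, _ = np.histogram(img, bins=bins, range=(0, bins))
    p = hist / hist.sum()
    best_t, best_h = 0, -np.inf
    for t in range(1, bins):
        p0, p1 = p[:t].sum(), p[t:].sum()
        if p0 == 0 or p1 == 0:
            continue
        q0, q1 = p[:t] / p0, p[t:] / p1
        h = -np.sum(q0[q0 > 0] * np.log(q0[q0 > 0])) \
            - np.sum(q1[q1 > 0] * np.log(q1[q1 > 0]))
        if h > best_h:
            best_t, best_h = t, h
    return best_t

rng = np.random.default_rng(2)
img = np.concatenate([rng.normal(60, 10, 5000), rng.normal(180, 12, 5000)])
img = np.clip(img, 0, 255).astype(np.uint8)
print(kapur_threshold(img))   # should land between the two intensity modes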
APA, Harvard, Vancouver, ISO, and other styles
36

Flegner, Patrik, and Ján Kačur. "EVALUATION OF SENSOR SIGNAL PROCESSING METHODS IN TERMS OF INFORMATION THEORY." Acta Polytechnica 58, no. 6 (2018): 339–45. http://dx.doi.org/10.14311/ap.2018.58.0339.

Full text
Abstract:
The paper deals with the examination of basic methods for the evaluation of sensor signals in terms of the information content of the given method and the technical means used. In this respect, methods based on classical analog systems, digital systems processing the signal in the time domain, hybrid systems, and digital systems evaluating the signal in the frequency domain are compared. A significant increase in entropy in the individual systems is demonstrated in the case of a more complex signal evaluation. For each measuring system, the experimental setups, results, and discussion are described in the paper. The issue described in the article is particularly topical in connection with the development of modern technologies used in processing and the subsequent use of information. The main purpose of the article is to show that the information content of the signal increases when the signal is processed in a more complex manner.
APA, Harvard, Vancouver, ISO, and other styles
37

Lavanya, R., G. K. Rajini, and G. Vidhya Sagar. "Retinal vessel feature extraction from fundus image using image processing techniques." International Journal of Engineering & Technology 7, no. 2 (2018): 687. http://dx.doi.org/10.14419/ijet.v7i2.8892.

Full text
Abstract:
Retinal vessel detection in retinal images plays a crucial role in the medical field for the proper diagnosis and treatment of various diseases such as diabetic retinopathy and hypertensive retinopathy. This paper deals with image processing techniques for automatic analysis of blood vessel detection in fundus retinal images using the MATLAB tool. The approach uses intensity information, local phase-based enhancement filter techniques, and morphological operators to provide better accuracy. Objective: The effect of diabetes on the eye is called diabetic retinopathy. At the early stages of the disease, blood vessels in the retina become weakened and leak, forming small hemorrhages. As the disease progresses, blood vessels may become blocked, sometimes leading to permanent vision loss. The goal is to help clinicians diagnose diabetic retinopathy in retinal images through early detection of abnormalities with automated tools. Methods: Fundus photography is an imaging technology used to capture retinal images of diabetic patients through a fundus camera. Adaptive thresholding is used as a pre-processing technique to increase contrast, and filters are applied to enhance image quality. Morphological processing is used to detect the shape of blood vessels, as they are nonlinear in nature. Results: Image features such as mean, standard deviation, and entropy for textural analysis of the image, along with Gray Level Co-occurrence Matrix features such as contrast and energy, are calculated for the detected vessels. Conclusion: In diabetic patients, the eyes are affected severely compared to other organs. Early detection of the vessel structure in retinal images with computer-assisted tools may assist clinicians in proper diagnosis.
APA, Harvard, Vancouver, ISO, and other styles
38

Cieplinski, Leszek. "Entropy-Constrained Multiresolution Vector Quantisation for Image Coding." Fundamenta Informaticae 34, no. 4 (1998): 389–96. http://dx.doi.org/10.3233/fi-1998-34403.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Małyszko, Dariusz, and Jarosław Stepaniuk. "Adaptive Rough Entropy Clustering Algorithms in Image Segmentation." Fundamenta Informaticae 98, no. 2-3 (2010): 199–231. http://dx.doi.org/10.3233/fi-2010-224.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

CHENG, H. D., YANHUI GUO, and YINGTAO ZHANG. "A NOVEL APPROACH TO IMAGE THRESHOLDING BASED ON 2D HOMOGENEITY HISTOGRAM AND MAXIMUM FUZZY ENTROPY." New Mathematics and Natural Computation 07, no. 01 (2011): 105–33. http://dx.doi.org/10.1142/s1793005711001834.

Full text
Abstract:
Image thresholding is an important topic for image processing, pattern recognition, and computer vision. Fuzzy set theory has been successfully applied to many areas, and it is generally believed that image processing bears some fuzziness in nature. In this paper, we employ the newly proposed 2D homogeneity histogram (homogram) and the maximum fuzzy entropy principle to perform thresholding. We have conducted experiments on a variety of images. The experimental results demonstrate that the proposed approach can select the thresholds automatically and effectively. In particular, it can process not only "clean" images but also images with different kinds of noise, and even images with multiple kinds of noise, without knowing the noise type, which is the most difficult task for image thresholding. It will be useful for applications in computer vision and image processing.
APA, Harvard, Vancouver, ISO, and other styles
41

Lima, Matheus Sant’Ana. "Information theory inspired optimization algorithm for efficient service orchestration in distributed systems." PLOS ONE 16, no. 1 (2021): e0242285. http://dx.doi.org/10.1371/journal.pone.0242285.

Full text
Abstract:
Distributed Systems architectures are becoming the standard computational model for processing and transportation of information, especially for Cloud Computing environments. The increase in demand for application processing and data management from enterprise and end-user workloads continues to move from a single-node client-server architecture to a distributed multitier design where data processing and transmission are segregated. Software development must consider the orchestration required to provision its core components in order to deploy services efficiently across many independent, loosely coupled (physically and virtually interconnected) data centers spread geographically across the globe. This network routing challenge can be modeled as a variation of the Travelling Salesman Problem (TSP). This paper proposes a new optimization algorithm for optimum route selection using Algorithmic Information Theory. The Kelly criterion for a Shannon-Bernoulli process is used to generate a reliable quantitative algorithm that finds a near-optimal solution tour. The algorithm is then verified by comparing the results with benchmark heuristic solutions in 3 test cases. A statistical analysis is designed to measure the significance of the results between the algorithms, and the entropy function can be derived from the distribution. The test results show an improvement in solution quality, producing routes with smaller length and time requirements. The quality of the results proves the flexibility of the proposed algorithm for problems with different complexities without relying on nature-inspired models such as Genetic Algorithms, Ant Colony, Cross Entropy, Neural Networks, 2-opt, and Simulated Annealing. The proposed algorithm can be used by applications to deploy services across large clusters of nodes by making better decisions in the route design. The findings in this paper unify critical areas in Computer Science, Mathematics, and Statistics that many researchers have not explored and provide a new interpretation that advances the understanding of the role of entropy in decision problems encoded in Turing Machines.
APA, Harvard, Vancouver, ISO, and other styles
42

Teku, Sandhya Kumari, Koteswara Rao Sanagapallea, and Santi Prabha Inty. "A two-stage processing approach for contrast intensified image fusion." World Journal of Engineering 17, no. 1 (2020): 68–77. http://dx.doi.org/10.1108/wje-07-2019-0190.

Full text
Abstract:
Purpose: Integrating complementary information with high-quality visual perception is essential in infrared and visible image fusion. Contrast-enhanced fusion, required for target detection in military, navigation, and surveillance applications where visible images are captured in low-light conditions, is a challenging task. This paper aims to focus on the enhancement of poorly illuminated low-light images through decomposition prior to fusion, to provide high visual quality. Design/methodology/approach: A two-step process is implemented to improve the visual quality. First, the low-light visible image is decomposed into dark and bright image components. The decomposition is accomplished based on the selection of a threshold using Renyi's entropy maximization. The decomposed dark and bright images are intensified with the stochastic resonance (SR) model. Second, a texture-information-based weighted average scheme for low-frequency coefficients and a select-maximum rule for high-frequency coefficients are used in the discrete wavelet transform (DWT) domain. Findings: Simulations in MATLAB were carried out on various test images. The qualitative and quantitative evaluations of the proposed method show improvement in edge-based and information-based metrics compared to several existing fusion techniques. Originality/value: A high-contrast, edge-preserved, and brightness-improved image with good visual quality is obtained by the processing steps considered in this work.
APA, Harvard, Vancouver, ISO, and other styles
43

Doukovska, Lyubka, Venko Petkov, Emil Mihailov, and Svetla Vassileva. "Image Processing for Technological Diagnostics of Metallurgical Facilities." Cybernetics and Information Technologies 12, no. 4 (2012): 66–76. http://dx.doi.org/10.2478/cait-2012-0031.

Full text
Abstract:
The paper presents an overview of image-processing techniques. The set of basic theoretical instruments includes methods of mathematical analysis, linear algebra, probability theory and mathematical statistics, the theory of digital processing of one-dimensional and multidimensional signals, wavelet transforms, and information theory. The paper describes a methodology that aims to detect and diagnose faults using thermographic approaches and digital image processing techniques.
APA, Harvard, Vancouver, ISO, and other styles
44

Ortega, Pedro A., and Daniel A. Braun. "Thermodynamics as a theory of decision-making with information-processing costs." Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences 469, no. 2153 (2013): 20120683. http://dx.doi.org/10.1098/rspa.2012.0683.

Full text
Abstract:
Perfectly rational decision-makers maximize expected utility, but crucially ignore the resource costs incurred when determining optimal actions. Here, we propose a thermodynamically inspired formalization of bounded rational decision-making where information processing is modelled as state changes in thermodynamic systems that can be quantified by differences in free energy. By optimizing a free energy, bounded rational decision-makers trade off expected utility gains and information-processing costs measured by the relative entropy. As a result, the bounded rational decision-making problem can be rephrased in terms of well-known variational principles from statistical physics. In the limit when computational costs are ignored, the maximum expected utility principle is recovered. We discuss links to existing decision-making frameworks and applications to human decision-making experiments that are at odds with expected utility theory. Since most of the mathematical machinery can be borrowed from statistical physics, the main contribution is to re-interpret the formalism of thermodynamic free-energy differences in terms of bounded rational decision-making and to discuss its relationship to human decision-making experiments.
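The trade-off described here has the familiar closed form p*(a) proportional to p0(a) exp(beta U(a)) when the free energy E_p[U] - (1/beta) KL(p || p0) is maximised over a discrete action set; the short Python sketch below illustrates this with made-up utilities, a uniform prior and a few inverse temperatures (all illustrative values, not taken from the paper).

    import numpy as np

    def bounded_rational_policy(utilities, prior, beta):
        """Optimal policy for the free-energy trade-off
        F(p) = E_p[U] - (1/beta) * KL(p || prior):
        p*(a) is proportional to prior(a) * exp(beta * U(a))."""
        w = prior * np.exp(beta * np.asarray(utilities, dtype=float))
        return w / w.sum()

    U = np.array([1.0, 0.5, 0.0])      # utilities of three actions (illustrative)
    p0 = np.ones(3) / 3                # uniform prior policy
    for beta in (0.0, 1.0, 100.0):     # large beta recovers maximum expected utility
        print(beta, bounded_rational_policy(U, p0, beta).round(3))
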
APA, Harvard, Vancouver, ISO, and other styles
45

Liu, S., H. Li, X. Wang, L. Guo, and R. Wang. "STUDY ON MOSAIC AND UNIFORM COLOR METHOD OF SATELLITE IMAGE FUSION IN LARGE AREA." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-3 (April 30, 2018): 1099–102. http://dx.doi.org/10.5194/isprs-archives-xlii-3-1099-2018.

Full text
Abstract:
Owing to the improvement in satellite radiometric resolution, the color differences between multi-temporal satellite remote sensing images and the large volume of satellite image data, mosaicking and color balancing of satellite images remain important problems in image processing. First, using the bundle uniform color method, the least-squares mosaic method of GXL and the dodging function, a uniform transition of color and brightness can be achieved across large-area, multi-temporal satellite images. Second, Color Mapping software is used to convert 16-bit mosaic images to 8-bit mosaic images, applying a uniform color method with low-resolution reference images. Finally, qualitative and quantitative analytical methods are used to analyse and evaluate the satellite images after mosaicking and color balancing. The tests show that the correlation between mosaic images before and after coloring is higher than 95%, the image information entropy increases and texture features are enhanced, as verified by quantitative indexes such as the correlation coefficient and information entropy. Satellite image mosaicking and color processing over a large area have thus been implemented effectively.
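The two quantitative indexes mentioned, the correlation coefficient and the information entropy, can be computed as in the generic Python sketch below (assuming 8-bit single-band images and synthetic data; this is not the authors' evaluation code).

    import numpy as np

    def image_entropy(img, bins=256):
        # Shannon entropy (bits) of the grey-level histogram.
        hist, _ = np.histogram(img, bins=bins, range=(0, bins))
        p = hist.astype(float) / hist.sum()
        p = p[p > 0]
        return float(-np.sum(p * np.log2(p)))

    def correlation_coefficient(a, b):
        # Pearson correlation between two images of the same size.
        return float(np.corrcoef(a.ravel(), b.ravel())[0, 1])

    rng = np.random.default_rng(1)
    before = rng.integers(0, 256, size=(64, 64))
    after = np.clip(before + rng.integers(-5, 6, size=(64, 64)), 0, 255)
    print(image_entropy(after), correlation_coefficient(before, after))
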
APA, Harvard, Vancouver, ISO, and other styles
46

Wang, Lei, Jun Lu, and Xian Qing Ling. "A Hybrid Filtering Method Based on Triangle-Module Fusion." Advanced Materials Research 268-270 (July 2011): 1239–44. http://dx.doi.org/10.4028/www.scientific.net/amr.268-270.1239.

Full text
Abstract:
Edges are basic features of an image and are easily damaged during image processing. This paper proposes an edge-preserving method for image filtering that improves the ability to protect edge information. The proposed method first defines two information measures based on fuzzy entropy and the image gradient. The two measures are then fused by a triangle-module operator to determine the image edges. Finally, a modified filter eliminates noise while retaining the detected edge points. Compared with AMAWM, the experimental results achieve better PSNR and AG (average gradient) values, illustrating that more edge information is preserved after the filtering operation.
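A rough Python sketch of the edge-decision step is given below; the membership function behind the fuzzy-entropy measure is a simplified stand-in, and the triangle-module operator shown (an algebraic product) is an assumed placeholder rather than the exact operator used in the paper.

    import numpy as np

    def gradient_measure(img):
        # Normalised gradient magnitude as a simple edge-likelihood measure.
        gy, gx = np.gradient(img.astype(float))
        g = np.hypot(gx, gy)
        return g / (g.max() + 1e-12)

    def fuzzy_entropy_measure(img):
        # Membership from normalised intensity (a stand-in), then per-pixel
        # fuzzy entropy -m*log(m) - (1-m)*log(1-m), scaled to [0, 1].
        m = (img - img.min()) / (img.max() - img.min() + 1e-12)
        m = np.clip(m, 1e-6, 1 - 1e-6)
        h = -(m * np.log(m) + (1 - m) * np.log(1 - m))
        return h / np.log(2.0)

    def triangle_module(a, b):
        # Placeholder t-norm (algebraic product); the paper's operator may differ.
        return a * b

    def edge_mask(img, thresh=0.25):
        fused = triangle_module(fuzzy_entropy_measure(img), gradient_measure(img))
        return fused > thresh  # pixels kept untouched by the noise filter
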
APA, Harvard, Vancouver, ISO, and other styles
47

Chuang, Yu Chiang, and Shu Kai S. Fan. "An Image Registration Method Based upon Information Theorem on Overlapped Region." Applied Mechanics and Materials 58-60 (June 2011): 1985–89. http://dx.doi.org/10.4028/www.scientific.net/amm.58-60.1985.

Full text
Abstract:
Digital images and video are applied widely in many practical applications because of their simple acquisition. Image registration is an important image-processing step for integrating information from multiple images. For image registration, it is intuitive to orient images by matching corresponding pixels that are ideally considered identical over the overlapping region. Based on this idea, this article proposes an image registration method that applies information theory to the corresponding intensity data. An entropy-based objective function is developed from the histogram of intensity differences to evaluate the similarity between images. Intensity differences represent the differences between corresponding pixels of the reference and sensed images over the overlapped region. The sensed image is aligned to the reference image by minimizing the proposed objective function, iteratively updating the parameters of the projective transformation during the optimization process. Experimental results obtained on several test image sets illustrate the effectiveness and feasibility of the proposed image registration method.
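The objective can be sketched compactly in Python as the Shannon entropy of the histogram of intensity differences over the overlap; lower values indicate a more sharply peaked difference histogram and hence better alignment. The projective-transform update loop is omitted, and the bin count and equal-sized overlap crops are illustrative assumptions.

    import numpy as np

    def difference_entropy(reference, sensed, bins=256):
        """Entropy of the histogram of intensity differences on the overlap.
        Lower values mean the two crops agree more closely."""
        diff = reference.astype(float) - sensed.astype(float)
        hist, _ = np.histogram(diff, bins=bins)
        p = hist.astype(float) / hist.sum()
        p = p[p > 0]
        return float(-np.sum(p * np.log2(p)))

    # An optimiser would perturb the projective-transform parameters, re-sample
    # the sensed image, and keep the parameters that minimise difference_entropy.
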
APA, Harvard, Vancouver, ISO, and other styles
48

Pérez-Amat García, Ricardo. "Towards a Semantic Theory of Information." tripleC: Communication, Capitalism & Critique. Open Access Journal for a Global Sustainable Information Society 7, no. 2 (2009): 158–71. http://dx.doi.org/10.31269/triplec.v7i2.108.

Full text
Abstract:
Information can be understood as that which reduces uncertainty, whatever its origin. In the field of human communication, information is only meaningful if it is part of a finished or intentional action. Meaning should be gathered from the empirical perspective of the use of language. If we study the processing of signification through transmission in the normal use of language, we will see that it takes place by communicating a set of prototype categories, the core or central facts, which defines meaning as an empirical hypothesis. But if there are central facts showing the use of words, then other, more or less peripheral, facts should also exist, whose knowledge is necessary in order to communicate in contexts far away from the “denotative conceptual norm”. Hence meaning can be represented by a fuzzy subset of the partition set of the universe of discourse. This concept of meaning may be integrated into a formal model of a semantic source, and information may be measured by non-probabilistic entropy.
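One classical example of such a non-probabilistic entropy is the De Luca-Termini measure of a fuzzy subset, sketched below in Python; this particular functional form is an illustration, since the abstract does not fix one.

    import numpy as np

    def fuzzy_entropy(memberships):
        """De Luca-Termini entropy of a fuzzy subset, given membership degrees in [0, 1]."""
        m = np.clip(np.asarray(memberships, dtype=float), 1e-12, 1 - 1e-12)
        return float(-np.sum(m * np.log(m) + (1 - m) * np.log(1 - m)))

    # A word meaning modelled as a fuzzy subset over four partition cells (made-up degrees):
    print(fuzzy_entropy([0.9, 0.7, 0.2, 0.05]))
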
APA, Harvard, Vancouver, ISO, and other styles
49

Pérez-Amat García, Ricardo. "Towards a Semantic Theory of Information." tripleC: Communication, Capitalism & Critique. Open Access Journal for a Global Sustainable Information Society 7, no. 2 (2009): 158–71. http://dx.doi.org/10.31269/vol7iss2pp158-171.

Full text
Abstract:
Information can be understood as that which reduces uncertainty, whatever its origin. In the field of human communication, information is only meaningful if it is part of a finished or intentional action. Meaning should be gathered from the empirical perspective of the use of language. If we study the processing of signification through transmission in the normal use of language, we will see that it takes place by communicating a set of prototype categories, the core or central facts, which defines meaning as an empirical hypothesis. But if there are central facts showing the use of words, then other, more or less peripheral, facts should also exist, whose knowledge is necessary in order to communicate in contexts far away from the “denotative conceptual norm”. Hence meaning can be represented by a fuzzy subset of the partition set of the universe of discourse. This concept of meaning may be integrated into a formal model of a semantic source, and information may be measured by non-probabilistic entropy.
APA, Harvard, Vancouver, ISO, and other styles
50

Kane, Thomas B., Patrick McAndrew, and Andrew M. Wallace. "MODEL-BASED OBJECT RECOGNITION USING PROBABILISTIC LOGIC AND MAXIMUM ENTROPY." International Journal of Pattern Recognition and Artificial Intelligence 5, no. 3 (1991): 425–37. http://dx.doi.org/10.1142/s0218001491000247.

Full text
Abstract:
In the visual context, a reasoning system should be capable of inferring a scene description using evidence derived from data-driven processing of the iconic image data. This evidence may consist of a set of curvilinear boundaries, which are obtained by grouping local edge data into extended features. Using linear primitives, a framework is described which represents the information contained in pre-formed models of possible objects in the scene, and in the segmented scenes themselves. A method based on maximum entropy is developed which assigns measures of likelihood for the presence of objects in the two-dimensional image. This method is applied to and evaluated on real and simulated image data, and the effectiveness of the approach is discussed.
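As a toy Python illustration of the maximum-entropy assignment (not the authors' formulation), the sketch below finds the maximum-entropy distribution over a few object hypotheses subject to a single expected-evidence constraint, using the exponential-family form p_i proportional to exp(lambda * e_i) and a bisection search for lambda; the evidence scores and target value are made up.

    import numpy as np

    def maxent_distribution(evidence, target_mean, lo=-50.0, hi=50.0, iters=100):
        """Maximum-entropy distribution p over hypotheses subject to
        sum_i p_i * evidence_i = target_mean (solved by bisection on lambda)."""
        e = np.asarray(evidence, dtype=float)

        def mean_for(lam):
            w = np.exp(lam * (e - e.max()))   # shift exponent for numerical stability
            p = w / w.sum()
            return p, float(np.dot(p, e))

        p = np.ones_like(e) / e.size
        for _ in range(iters):
            mid = 0.5 * (lo + hi)
            p, m = mean_for(mid)
            if m < target_mean:
                lo = mid
            else:
                hi = mid
        return p

    # Evidence scores for three candidate objects; require an expected score of 0.6.
    print(maxent_distribution([0.2, 0.5, 0.9], 0.6).round(3))
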
APA, Harvard, Vancouver, ISO, and other styles
