To view the other types of publications on this topic, follow this link: Transformation of image.

Journal articles on the topic "Transformation of image"

Create a source citation in APA, MLA, Chicago, Harvard, and other citation styles

Select a type of source:

Consult the top 50 journal articles for your research on the topic "Transformation of image".

Next to every work in the bibliography, the option "Add to bibliography" is available. Use it, and your bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication in PDF format and read the online annotation of the work if the relevant parameters are provided in the metadata.

Browse journal articles in a wide variety of specialized fields and compile your bibliography correctly.

1

Kim, J., T. Kim, D. Shin, and S. H. Kim. "ROBUST MOSAICKING OF UAV IMAGES WITH NARROW OVERLAPS." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLI-B1 (June 6, 2016): 879–83. http://dx.doi.org/10.5194/isprs-archives-xli-b1-879-2016.

The full text of the source
Annotation:
This paper considers fast and robust mosaicking of UAV images under circumstances where adjacent UAV images have very narrow overlaps. Image transformation for image mosaicking consists of two estimations: relative transformations and global transformations. For estimating relative transformations between adjacent images, projective transformation is widely considered. For estimating global transformations, the panoramic constraint is widely used. While perspective transformation is a general model for 2D-to-2D transformation, it may not be optimal under weak stereo geometry such as images with narrow overlaps. While the panoramic constraint works for reliable conversion of global transformations for panoramic image generation, it is not applicable to UAV images in linear motion. For these reasons, a robust approach is investigated to generate a high-quality mosaicked image from narrowly overlapping UAV images. For relative transformations, several transformation models were considered to ensure robust estimation of the relative transformation relationship: perspective transformation, affine transformation, coplanar relative orientation, and relative orientation with reduced adjustment parameters. A performance evaluation for each transformation model was carried out. The experimental results showed that affine transformation and adjusted coplanar relative orientation were superior to the others in terms of stability and accuracy. For global transformation, we set an initial approximation by converting each relative transformation to a common transformation with respect to a reference image. In future work, we will investigate constrained relative orientation for enhancing the geometric accuracy of image mosaicking, and bundle adjustment of each relative transformation model for optimal global transformation.
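The relative transformation step described above can be illustrated with a least-squares fit of a 2D affine model to matched tiepoints. This is a generic NumPy sketch, not the authors' implementation; the tiepoint coordinates are made up for illustration:

```python
import numpy as np

def estimate_affine(src, dst):
    """Least-squares 2D affine transform mapping src -> dst.

    src, dst: (N, 2) arrays of matched tiepoint coordinates, N >= 3.
    Returns a 2x3 matrix A with rows [a, b, tx] and [c, d, ty] such that
    dst ~= src @ A[:, :2].T + A[:, 2].
    """
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    n = len(src)
    # Design matrix [x, y, 1] per point; both output coordinates are
    # solved in one lstsq call (dst has two columns).
    M = np.hstack([src, np.ones((n, 1))])
    coef, *_ = np.linalg.lstsq(M, dst, rcond=None)  # shape (3, 2)
    return coef.T

# Synthetic tiepoints related by a pure translation of (10, -5):
src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
dst = src + np.array([10.0, -5.0])
A = estimate_affine(src, dst)
```

With clean correspondences the recovered matrix is the identity linear part plus the translation column.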
APA, Harvard, Vancouver, ISO, and other citation styles
3

Sempio, J. N. H., R. K. D. Aranas, B. P. Lim, B. J. Magallon, M. E. A. Tupas, and I. A. Ventura. "ASSESSMENT OF DIFFERENT IMAGE TRANSFORMATION METHODS ON DIWATA-1 SMI IMAGES USING STRUCTURAL SIMILARITY MEASURE." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-4/W19 (December 23, 2019): 393–400. http://dx.doi.org/10.5194/isprs-archives-xlii-4-w19-393-2019.

Annotation:
Abstract. This paper aims to provide a qualitative assessment of different image transformation parameters as applied to images taken by the spaceborne multispectral imager (SMI) sensor installed on Diwata-1, the Philippines' first Earth observation microsatellite, with the aim of determining the order of transformation that is sufficient for operationalization purposes. Images of the Palawan area were subjected to different image transformations by manual georeferencing using QGIS 3, and cloud masks were generated and applied to remove the effects of clouds. The resulting images were then subjected to structural similarity (SSIM) tests against resampled and cloud-masked Landsat 8 images of the same area to generate SSIM indices, which were then used as a quantitative means to assess the best-performing transformation. The results of this study point to all transformed images having good SSIM ratings with their Landsat 8 counterparts, indicating that features shown in a Diwata-1 SMI image are structurally similar to the same features in resampled Landsat 8 data. This implies that, for Diwata-1 data processing operationalization purposes, higher-order transformations, with the effort needed to implement them, offer little advantage over their lower-order counterparts.
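The SSIM comparison used above can be sketched in its simplest single-window form (the classic formula of Wang et al.); library implementations such as scikit-image compute it over a sliding window instead. A minimal NumPy version, not tied to the paper's toolchain:

```python
import numpy as np

def global_ssim(x, y, data_range=255.0):
    """Single-window SSIM over whole images.

    Uses the standard constants C1 = (0.01 L)^2 and C2 = (0.03 L)^2,
    where L is the dynamic range of the pixel values.
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

Identical images score exactly 1.0; dissimilar images score lower.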
4

Jabir, Adnan. "Image Geometrical Analogies." Journal of Al-Rafidain University College For Sciences (Print ISSN: 1681-6870, Online ISSN: 2790-2293), no. 2 (October 26, 2021): 113–30. http://dx.doi.org/10.55562/jrucs.v23i2.484.

Annotation:
Geometric transformation (G.T.) of images is a critical operation in commercial television, film production and advertisement design. All geometric transformation operations are performed by moving pixel values from their original spatial coordinates to new coordinates in the destination image. The traditional algorithms for geometric transformation are time-consuming and not accurate. With very few exceptions, all geometric transformations result in some output pixel locations being missed because no input pixels were transformed there. This paper presents an easy-to-implement and very efficient algorithm for image geometric transformation. The idea behind the proposed algorithm comes from current work on transferring properties between images, where various types of properties (i.e., transformation filters) are learned from one source image and applied to another target image. The results demonstrate the efficiency and effectiveness of the proposed algorithm.
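The "missed output pixels" problem mentioned in the annotation is conventionally avoided by inverse mapping: iterate over destination pixels and sample the source, so every output location is visited. A generic NumPy rotation sketch illustrating that idea (not the paper's property-transfer algorithm):

```python
import math
import numpy as np

def rotate_inverse_map(img, angle_deg):
    """Rotate a 2D image about its center, filling each OUTPUT pixel by
    sampling the source (inverse mapping, nearest-neighbor), so no
    destination pixel is left unfilled."""
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    a = math.radians(angle_deg)
    cos_a, sin_a = math.cos(a), math.sin(a)
    out = np.zeros_like(img)
    ys, xs = np.mgrid[0:h, 0:w]
    # Inverse rotation of every destination coordinate:
    sx = cos_a * (xs - cx) + sin_a * (ys - cy) + cx
    sy = -sin_a * (xs - cx) + cos_a * (ys - cy) + cy
    sxi = np.rint(sx).astype(int)
    syi = np.rint(sy).astype(int)
    valid = (sxi >= 0) & (sxi < w) & (syi >= 0) & (syi < h)
    out[ys[valid], xs[valid]] = img[syi[valid], sxi[valid]]
    return out
```

A forward mapping of the same rotation would scatter source pixels and typically leave holes; the inverse formulation sidesteps that by construction.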
5

Garg, Ankit, Ashish Negi, and Geeta Chauhan. "Analysis of Iterated Affine Transformation Function and Linear Mapping for Content Preservation." International Journal of Engineering & Technology 7, no. 4.19 (2018): 50–57. http://dx.doi.org/10.14419/ijet.v7i4.19.22014.

Annotation:
In image scaling, the contents of an image can be distorted and need to be preserved using linear mapping. Geometric transformations can preserve structural properties, i.e., parallelism, collinearity and orientation. It is highly desirable to preserve the structural properties of image contents because the human visual system is very sensitive to distortion of objects. In this paper, image scaling is performed using an iterative affine transformation, and the results show that a linear mapping function applied on the affine space preserves affine properties under affine transformation. A number of scaling operations are applied to the image using the iterative affine transformation, and for each iteration linear mapping is performed to preserve object structure. The analysis presented in this paper shows that, in image scaling, preservation of image content is possible under iterative affine transformation and linear mapping. Image artifacts can be minimized using a saliency-based antialiasing algorithm after the affine transformation.
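The structural properties named above, parallelism in particular, are preserved by any affine map x ↦ Ax + t. A small NumPy check (illustrative only; the matrix A and the segments are arbitrary choices, not values from the paper):

```python
import numpy as np

def apply_affine(A, t, pts):
    """Apply x -> A @ x + t to an (N, 2) array of points."""
    return pts @ A.T + t

A = np.array([[1.4, 0.3], [0.2, 0.8]])   # arbitrary nonsingular linear part
t = np.array([5.0, -2.0])                # arbitrary translation

p = np.array([[0.0, 0.0], [2.0, 1.0]])   # segment 1, direction (2, 1)
q = np.array([[1.0, 3.0], [3.0, 4.0]])   # segment 2, same direction (2, 1)

d1 = np.diff(apply_affine(A, t, p), axis=0)[0]
d2 = np.diff(apply_affine(A, t, q), axis=0)[0]
# 2D cross product of the transformed directions: zero iff still parallel.
cross = d1[0] * d2[1] - d1[1] * d2[0]
```

Both directions map to A @ (2, 1), so the cross product vanishes: parallel lines stay parallel under any affine transformation.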
6

Kim, Jae-In, Hyun-cheol Kim, and Taejung Kim. "Robust Mosaicking of Lightweight UAV Images Using Hybrid Image Transformation Modeling." Remote Sensing 12, no. 6 (2020): 1002. http://dx.doi.org/10.3390/rs12061002.

Annotation:
This paper proposes a robust feature-based mosaicking method that can handle images obtained by lightweight unmanned aerial vehicles (UAVs). The imaging geometry of small UAVs can be characterized by unstable flight attitudes and low flight altitudes. These can reduce mosaicking performance by causing insufficient overlaps, tilted images, and biased tiepoint distributions. To solve these problems in the mosaicking process, we introduce the tiepoint area ratio (TAR) as a geometric stability indicator and orthogonality as an image deformation indicator. The proposed method estimates pairwise transformations with optimal transformation models derived by geometric stability analysis between adjacent images. It then estimates global transformations from optimal pairwise transformations that maximize geometric stability between adjacent images and minimize mosaic deformation. The valid criterion for the TAR in selecting an optimal transformation model was found to be about 0.3 from experiments with two independent image datasets. The results of a performance evaluation showed that the problems caused by the imaging geometry characteristics of small UAVs could actually occur in image datasets and showed that the proposed method could reliably produce image mosaics for image datasets obtained in both general and extreme imaging environments.
7

Liu, Hongbing, Gengyi Liu, Xuewen Ma, and Daohua Liu. "Training dictionary by granular computing with L∞-norm for patch granule–based image denoising." Journal of Algorithms & Computational Technology 12, no. 2 (2018): 136–46. http://dx.doi.org/10.1177/1748301818761131.

Annotation:
Considering objects at different granularities reflects how people commonly perform recognition; granular computing embodies the transformation between different granularity spaces. We present an image denoising algorithm using a dictionary trained by granular computing with the L∞-norm, which realizes three transformations: (1) the transformation from image space to patch granule space, (2) the transformation between granule spaces with different granularities, and (3) the transformation from patch granule space back to image space. We demonstrate that granular computing with the L∞-norm achieves a peak signal-to-noise ratio (PSNR) comparable to BM3D and patch-group-prior-based denoising for eight natural images.
8

MAZURETS, O., T. SKRYPNYK, and A. IZOTOV. "FACET METHOD OF IMAGE TRANSFORMATION BY MEANS OF NEURAL NETWORK RECOGNITION." Herald of Khmelnytskyi National University. Technical sciences 281, no. 1 (2020): 147–53. https://doi.org/10.31891/2307-5732-2020-281-1-147-153.

Annotation:
The facet image conversion method is a software-based resizing of the input image and is intended for use in the process of image recognition. Based on the developed facet method for image transformation, an application was created for neural network image recognition after processing by the developed method. To investigate the efficiency of the facet image conversion method, the image recognition results were compared before and after the facet image convolution. The developed facet image convolution information technology uses the facet image conversion method and allows the image to be recognized before and after scaling using a perceptron neural network. There are two main components of the information technology: facet image convolution and neural network image recognition. In the first stage, the image is processed by the facet method. First, an image dimension analysis is performed and a square is determined depending on the size of the input image; then the dimension for the facet convolution is adjusted if necessary, where necessity is determined by the ability to divide the image into squares. The next step is to establish how noisy the image is. After that, the affiliation of each pixel to the original image is determined recursively, and the intermediate result proceeds to the next stage of image recognition by the neural network. Research has shown that the facet image conversion method converts images in such a way as to increase the efficiency of further recognition. Thus, compared with the recognition success on raw images, the efficiency for noisy images increases on average from 36.17% to 94.53%, and for outlined and segmented images the recognition efficiency increases on average from 52.93% to 88.16%.
9

Sarid, Orly, and Ephrat Huss. "Image formation and image transformation." Arts in Psychotherapy 38, no. 4 (2011): 252–55. http://dx.doi.org/10.1016/j.aip.2011.07.001.

10

Liu, Jincheng. "Keywords-based conditional image transformation." Applied and Computational Engineering 57, no. 1 (2024): 56–65. http://dx.doi.org/10.54254/2755-2721/57/20241310.

Annotation:
In recent years, Generative Adversarial Networks (GANs) and their variants, such as pix2pix, have occupied a significant position in the field of image generation. Despite the impressive performance of the pix2pix model in image-to-image transformation tasks, its reliance on a large amount of paired training data and computational resources has posed a crucial constraint to its broader application. To address these issues, this paper introduces a novel algorithm, Keywords-Based Conditional Image Transformation (KB-CIT). KB-CIT dynamically extracts keywords from the input grayscale images to acquire and generate training data, thus avoiding the need for a large amount of paired data and significantly improving the efficiency of image transformation. Experimental results demonstrate that KB-CIT performs remarkably well in image colorization tasks and can generate high-quality colored images even with limited training data. This algorithm not only simplifies the data collection process but also exhibits significant advantages in terms of computational resource requirements, data utilization efficiency, and personalized real-time training of the model, thereby providing new possibilities for the widespread application of the pix2pix model.
11

Smagina, O. A. "The Image of Transformation and Cognitive Flexibility." Reflexio 15, no. 2 (2023): 76–86. http://dx.doi.org/10.25205/2658-4506-2022-15-2-76-86.

Annotation:
The article suggests viewing the image of transformation as an idea that affects cognitive flexibility (the ability to overcome responses or thinking that have become habitual and to adapt to new situations). We may notice that cultural stereotypes of thinking are partly predetermined by mythology. Myths represent our mind's view of mental capabilities, the way our consciousness interprets unconscious processes. The following describes the images of transformation in ancient Greek and Hindu mythologies as two points of view on psychic processes, letting us notice the possibilities and limitations of transformation processes offered by Western and Eastern myths. Transformation is considered a basic task in Jungian theory. Today, the entire world's mythology has become accessible due to globalization. The purpose of this work is to consider different versions of the transformation image, making it possible to enrich the emotionally significant, archetypal symbols that are used in practice, as well as to consider which symbols and mythological images most accurately represent modern consciousness and today's perception of reality. The signs underlying the image of transformation in Eastern myths are compared with modern empirical data on physiology and the psyche. Based on this comparison, it is suggested that modern knowledge on cognitive flexibility is closer to Eastern mythology.
12

G., Sindhu Madhuri, and Indra Gandhi M. P. "New Image Registration Techniques: Development and Comparative Analysis." International Journal of Emerging Research in Management and Technology 6, no. 7 (2018): 146. http://dx.doi.org/10.23956/ijermt.v6i7.204.

Annotation:
The design and development of new image registration techniques using complex mathematical transformation functions are attempted in this research work, as there is a requirement for measuring the performance of image registration complexity. The new image registration techniques are developed with the complex mathematical transformations of the Radon and Slant functions due to their importance, and the rotation and translation geometric functions are considered for better insight into the complex image registration process. The newly developed image registration techniques are evaluated and analyzed with the openly available images of Lena, Cameraman and VegCrop. The accuracy of the newly developed techniques is measured with the popular metrics of RMSE, PSNR and Entropy, and the results obtained after a successful image registration process are compared and presented. It is observed from the results that the new image registration techniques using the Radon and Slant transformation functions with rotation and translation are superior and useful in the digital image processing domain. Finally, an effort is made to develop new image registration techniques that can extract the intelligence embedded in images with complex transformation functions, and an attempt is made to measure their performance as well.
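The RMSE and PSNR metrics used for the accuracy measurement follow their standard definitions; a minimal NumPy sketch (a peak value of 255 for 8-bit images is assumed, not stated by the paper):

```python
import numpy as np

def rmse(a, b):
    """Root-mean-square error between two equally shaped images."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(np.sqrt(np.mean((a - b) ** 2)))

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB; infinite for identical images."""
    e = rmse(a, b)
    return float("inf") if e == 0 else 20.0 * np.log10(peak / e)
```

For example, two 8-bit images differing everywhere by 16 grey levels have RMSE 16 and PSNR of roughly 24 dB.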
13

Iqbal, Saima, Wilayat Khan, Abdulrahman Alothaim, Aamir Qamar, Adi Alhudhaif, and Shtwai Alsubai. "Proving Reliability of Image Processing Techniques in Digital Forensics Applications." Security and Communication Networks 2022 (March 31, 2022): 1–17. http://dx.doi.org/10.1155/2022/1322264.

Annotation:
Binary images have found their place in many applications, such as digital forensics involving legal documents, authentication of images, digital books, contracts, and text recognition. Modern digital forensics applications involve binary image processing as part of data hiding techniques for ownership protection, copyright control, and authentication of digital media. Whether in image forensics, health, or other fields, such transformations are often implemented in high-level languages without formal foundations. The lack of a formal foundation calls the reliability of the image processing techniques into question, and hence the forensic results lose their legal significance. Furthermore, counter-forensics can impede or mislead the forensic analysis of digital images. To ensure that any image transformation meets high standards of safety and reliability, more rigorous methods should be applied to image processing applications. To verify the reliability of these transformations, we propose to use formal methods based on theorem proving, which can fulfil high standards of safety. To formally investigate binary image processing, in this paper a reversible formal model of binary images is defined in the proof assistant Coq. Multiple image transformation methods are formalized and their reliability properties are proved. To analyse real-life RGB images, a prototype translator is developed that reads RGB images and translates them into Coq definitions. As the formal definitions and proof scripts can be validated automatically by the computer, this raises the reliability and legal significance of image forensic applications.
14

Wang, Nannan, Jie Li, Dacheng Tao, Xuelong Li, and Xinbo Gao. "Heterogeneous image transformation." Pattern Recognition Letters 34, no. 1 (2013): 77–84. http://dx.doi.org/10.1016/j.patrec.2012.04.005.

15

Dixit, Karnika, and Mr Kamlesh Lakhwani. "A NOVEL METHOD OF COLOR IMAGE ENHANCEMENT BY COLOR SPACE TRANSFORMATION FOLLOWED BY GAMMA/LOGARITHMIC TRANSFORMATION." INTERNATIONAL JOURNAL OF COMPUTERS & TECHNOLOGY 8, no. 1 (2013): 707–11. http://dx.doi.org/10.24297/ijct.v8i1.3430.

Annotation:
Visual enhancement of images plays a very important role in the field of medical imaging. Enhanced medical images are more suitable for analysis and proper diagnosis. We present a novel method for the enhancement of color medical images in this paper. We transform the color space of the image from RGB to HSI, followed by the application of logarithmic and gamma transformations on the saturation and intensity components, respectively. Hence we obtain a visually enhanced version of the original image. The excellent color medical image enhancement results we have obtained are presented in this paper.
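The gamma and logarithmic point transformations applied to the channels have the standard forms s = r^γ and s = c·log(1 + r). A hedged sketch on channels normalized to [0, 1]; the scaling constant and the γ value are illustrative choices, not the paper's settings:

```python
import numpy as np

def gamma_transform(channel, gamma):
    """Power-law mapping s = r**gamma on a channel in [0, 1];
    gamma < 1 brightens mid-tones, gamma > 1 darkens them."""
    return np.clip(channel, 0.0, 1.0) ** gamma

def log_transform(channel):
    """Logarithmic mapping s = log(1 + r) / log(2), scaled so that the
    endpoints 0 and 1 are preserved; it expands dark values."""
    return np.log1p(np.clip(channel, 0.0, 1.0)) / np.log(2.0)

intensity = np.linspace(0.0, 1.0, 5)
brightened = gamma_transform(intensity, 0.5)
expanded = log_transform(intensity)
```

In the paper's pipeline these mappings would be applied to the intensity and saturation planes of the HSI representation before converting back to RGB.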
16

Golub, Yu I. "COMPRESSION OF HIGH DYNAMIC RANGE OF SAR IMAGES." System Analysis and Applied Information Science, no. 1 (June 12, 2018): 51–57. http://dx.doi.org/10.21122/2309-4923-2018-1-51-57.

Annotation:
The paper presents the results of our experiments on compression of high-dynamic-range SAR images, where the range is 16-bit. The objectives of the study were to compare known approaches to compressing high-dynamic-range images, to select optimal parameters for the compression algorithms, and to select a no-reference measure for image quality assessment after compression. Tone-mapping transformations such as gamma correction, the Ashikhmin operator and the mu-transformation, as well as no-reference image quality assessment measures, were tested. The results of the experiments are presented in the article. It was concluded that further research and analysis of various functions and approaches to compressing the dynamic range of SAR images is necessary, since the approaches included in the article do not give stable and positive results on all SAR images. It was also concluded that after transforming 16-bit images it is very difficult to estimate which image is better, and it is necessary to use a no-reference image quality assessment measure.
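The mu-transformation mentioned above is commonly realized as mu-law companding; a sketch mapping a 16-bit image to 8 bits (the value mu = 255 is a conventional choice, not necessarily the paper's parameter):

```python
import numpy as np

def mu_law_compress(img16, mu=255.0):
    """Compress a 16-bit image to 8 bits with the mu-law curve
    y = log(1 + mu * x) / log(1 + mu), where x is normalized to [0, 1].

    The curve is monotone and allocates more output levels to dark
    input values, which is the point of tone mapping high dynamic range.
    """
    x = np.asarray(img16, dtype=np.float64) / 65535.0
    y = np.log1p(mu * x) / np.log1p(mu)
    return np.rint(y * 255.0).astype(np.uint8)
```

The endpoints map exactly: input 0 stays 0 and input 65535 becomes 255, while mid-range values are lifted relative to a linear rescaling.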
17

Xu, Pengcheng, Qingnan Fan, Fei Kou, et al. "Textualize Visual Prompt for Image Editing via Diffusion Bridge." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 20 (2025): 21779–87. https://doi.org/10.1609/aaai.v39i20.35483.

Annotation:
Visual prompts, pairs of before-and-after edited images, can convey indescribable imagery transformations and have prospered in image editing. However, current visual prompt methods rely on a pretrained text-guided image-to-image generative model that requires a triplet of text, before, and after images for retraining over a text-to-image model. Such triplet crafting and retraining processes limit the scalability and generalization of editing. In this paper, we present a framework based on any single text-to-image model, without reliance on an explicit image-to-image model, thus enhancing generalizability and scalability. Specifically, by leveraging the probability-flow ordinary differential equation, we construct a diffusion bridge to transfer the distribution between before-and-after images under text guidance. By optimizing the text via the bridge, the framework adaptively textualizes the editing transformation conveyed by visual prompts into text embeddings without other models. Meanwhile, we introduce differential attention control during optimization, which disentangles the text embedding from the invariance of the before-and-after images, making it capture solely the delicate transformation and generalize to editing various images. Experiments on real images validate competitive results on generalization, contextual coherence, and high fidelity for delicate editing with just one image pair as the visual prompt.
18

WANG, XIUYING, and DAVID DAGAN FENG. "AUTOMATIC ELASTIC MEDICAL IMAGE REGISTRATION BASED ON IMAGE INTENSITY." International Journal of Image and Graphics 05, no. 02 (2005): 351–69. http://dx.doi.org/10.1142/s0219467805001793.

Annotation:
An automatic elastic medical image registration approach based on image intensity is proposed. The algorithm is divided into two steps. In Step 1, global affine registration is first used to establish an initial guess, and the resulting images can be assumed to have only small local elastic deformations. The mapped images are then used as inputs in Step 2, during which the study image is modeled as an elastic sheet by being divided into sub-images. By moving each individual sub-image within the reference image, the local displacement vectors are found, and the global elastic transformation is achieved by assimilating all of the local transformations into a continuous transformation. The algorithm has been validated on simulated data, noisy data and clinical tomographic data. Both experiments and theoretical analysis have demonstrated that the proposed algorithm has superior computational performance and can register images automatically with improved accuracy.
19

Hou, Dongdong, Weiming Zhang, and Nenghai Yu. "Image camouflage by reversible image transformation." Journal of Visual Communication and Image Representation 40 (October 2016): 225–36. http://dx.doi.org/10.1016/j.jvcir.2016.06.018.

20

Huang, Hung-Tsai, Zi-Cai Li, Yimin Wei, and Ching Yee Suen. "Improved Splitting-Integrating Methods for Image Geometric Transformations: Error Analysis and Applications." Mathematics 13, no. 11 (2025): 1773. https://doi.org/10.3390/math13111773.

Annotation:
Geometric image transformations are fundamental to image processing, computer vision and graphics, with critical applications to pattern recognition and facial identification. The splitting-integrating method (SIM) is well suited to the inverse transformation T−1 of digital images and patterns, but it encounters difficulties in nonlinear solutions for the forward transformation T. We propose improved techniques that entirely bypass nonlinear solutions for T, simplify numerical algorithms and reduce computational costs. Another significant advantage is the greater flexibility for general and complicated transformations T. In this paper, we apply the improved techniques to the harmonic, Poisson and blending models, which transform the original shapes of images and patterns into arbitrary target shapes. These models are, essentially, the Dirichlet boundary value problems of elliptic equations. In this paper, we choose the simple finite difference method (FDM) to seek their approximate transformations. We focus significantly on analyzing errors of image greyness. Under the improved techniques, we derive the greyness errors of images under T. We obtain the optimal convergence rates O(H2)+O(H/N2) for the piecewise bilinear interpolations (μ=1) and smooth images, where H(≪1) denotes the mesh resolution of an optical scanner, and N is the division number of a pixel split into N2 sub-pixels. Beyond smooth images, we address practical challenges posed by discontinuous images. We also derive the error bounds O(Hβ)+O(Hβ/N2), β∈(0,1) as μ=1. For piecewise continuous images with interior and exterior greyness jumps, we have O(H)+O(H/N2). Compared with the error analysis in our previous study, where the image greyness is often assumed to be smooth enough, this error analysis is significant for geometric image transformations. 
Hence, the improved algorithms supported by rigorous error analysis of image greyness may enhance their wide applications in pattern recognition, facial identification and artificial intelligence (AI).
21

Fida, A. D., A. V. Gaidel, N. S. Demin, N. Yu Ilyasova, and E. A. Zamytskiy. "Automated combination of optical coherence tomography images and fundus images." Computer Optics 5, no. 45 (2021): 721–27. http://dx.doi.org/10.18287/2412-6179-co-892.

Annotation:
We discuss approaches to combining multimodal multidimensional images, namely, three-dimensional optical coherence tomography (OCT) data and two-dimensional color images of the fundus. Registration of these two modalities can help to adjust the position of the obtained OCT images on the retina. Some existing approaches to matching fundus images are based on finding key points that are considered invariant to affine transformations and are common to the two images. However, errors in the identification of such points can lead to registration errors. There are also methods for iterative adjustment of conversion parameters, but they are based on some manual settings. In this paper, we propose a method based on a full or partial search of possible combinations of the OCT image transformation to find the best approximation of the true transformation. The best approximation is determined using a measure of comparison of preprocessed image pixels. Further, the obtained transformations are compared with the available true transformations to assess the quality of the algorithm. The structure of the work includes: pre-processing of OCT and fundus images with the extraction of blood vessels, random search or grid search over possible transformation parameters (shift, rotation and scaling), and evaluation of the quality of the algorithm.
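The exhaustive search over transformation parameters can be sketched for the simplest case of integer shifts only, scoring candidates with a sum-of-squared-differences measure (the authors also search rotation and scaling and compare preprocessed vessel images; this is a reduced illustration):

```python
import numpy as np

def grid_search_shift(ref, moving, max_shift=5):
    """Exhaustively search for the integer (dy, dx) shift that best
    aligns `moving` to `ref` under a sum-of-squared-differences score.

    np.roll wraps around at the borders, which is acceptable for this
    small illustrative search but not for real registration.
    """
    best, best_score = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            candidate = np.roll(np.roll(moving, dy, axis=0), dx, axis=1)
            score = np.sum((ref.astype(float) - candidate.astype(float)) ** 2)
            if score < best_score:
                best, best_score = (dy, dx), score
    return best

# Synthetic test case with a known shift:
rng = np.random.default_rng(0)
ref = rng.random((32, 32))
moving = np.roll(np.roll(ref, -3, axis=0), 2, axis=1)
```

The search recovers the shift that undoes the known displacement; extending the loop to rotation and scale gives the full (and much more expensive) grid search described in the annotation.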
23

Mat Jizat, Jessnor Arif, Ahmad Fakhri Ab. Nasir, Anwar P.P Abdul Majeed, and Edmund Yuen. "Effect of Image Compression using Fast Fourier Transformation and Discrete Wavelet Transformation on Transfer Learning Wafer Defect Image Classification." MEKATRONIKA 2, no. 1 (2020): 16–22. http://dx.doi.org/10.15282/mekatronika.v2i1.6704.

Annotation:
Automated inspection machines for wafer defects usually capture thousands of images at large scale to preserve the detail of defect features. However, most transfer learning architectures require smaller input images. Thus, proper compression is required to preserve the defect features whilst maintaining an acceptable classification accuracy. This paper reports on the effect of image compression using Fast Fourier Transformation and Discrete Wavelet Transformation on transfer learning wafer defect image classification. A total of 500 images in 5 classes (4 defect classes and 1 non-defect class) were split in a 60:20:20 ratio for training, validation and testing using InceptionV3 and a Logistic Regression classifier. The input images were compressed using Fast Fourier Transformation and Discrete Wavelet Transformation with 4-level decomposition and the Daubechies 4 wavelet family. The images were compressed by 50%, 75%, 90%, 95%, and 99%. The Fast Fourier Transformation compression shows an increase from 89% to 94% in classification accuracy up to 95% compression, while Discrete Wavelet Transformation shows consistent classification accuracy throughout, albeit with diminishing image quality. From the experiment, it can be concluded that FFT and DWT image compression can be reliable methods for grayscale image classification: image memory space dropped 56.1% while classification accuracy increased by 5.6% with 95% FFT compression, and memory space dropped 55.6% while classification accuracy increased by 2.2% with 50% DWT compression.
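The FFT-based compression used in the paper can be imitated in a few lines: transform, keep only the largest-magnitude coefficients, and invert. This is a sketch; the paper's exact thresholding and storage format are not specified here:

```python
import numpy as np

def fft_compress(img, keep=0.05):
    """Zero all but the largest-magnitude `keep` fraction of the 2-D FFT
    coefficients (keep=0.05 corresponds to 95% compression), then invert."""
    F = np.fft.fft2(img)
    mags = np.sort(np.abs(F).ravel())
    thresh = mags[int((1.0 - keep) * (mags.size - 1))]
    F[np.abs(F) < thresh] = 0.0
    return np.real(np.fft.ifft2(F))

img = np.add.outer(np.arange(32.0), np.arange(32.0)) / 62.0   # smooth test ramp
approx = fft_compress(img, keep=0.25)
```

A DWT variant would follow the same keep-the-largest-coefficients pattern, but with a wavelet decomposition (e.g. Daubechies 4) in place of the Fourier transform.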
24

Tan, Daning, Yu Liu, Gang Li, Libo Yao, Shun Sun, and You He. "Serial GANs: A Feature-Preserving Heterogeneous Remote Sensing Image Transformation Model." Remote Sensing 13, no. 19 (2021): 3968. http://dx.doi.org/10.3390/rs13193968.

Annotation:
In recent years, the interpretation of SAR images has been significantly improved with the development of deep learning technology, and using conditional generative adversarial nets (CGANs) for SAR-to-optical transformation, also known as image translation, has become popular. Most of the existing image translation methods based on conditional generative adversarial nets are modified based on CycleGAN and pix2pix, focusing on style transformation in practice. In addition, SAR images and optical images are characterized by heterogeneous features and large spectral differences, leading to problems such as incomplete image details and spectral distortion in the heterogeneous transformation of SAR images in urban or semiurban areas and with complex terrain. Aiming to solve the problems of SAR-to-optical transformation, Serial GANs, a feature-preserving heterogeneous remote sensing image transformation model, is proposed in this paper for the first time. This model uses the Serial Despeckling GAN and Colorization GAN to complete the SAR-to-optical transformation. Despeckling GAN transforms the SAR images into optical gray images, retaining the texture details and semantic information. Colorization GAN transforms the optical gray images obtained in the first step into optical color images and keeps the structural features unchanged. The model proposed in this paper provides a new idea for heterogeneous image transformation. Through decoupling network design, structural detail information and spectral information are relatively independent in the process of heterogeneous transformation, thereby enhancing the detail information of the generated optical images and reducing its spectral distortion. 
Using SEN-2 satellite images as the reference, this paper compares the degree of similarity between the images generated by different models and the reference, and the results revealed that the proposed model has obvious advantages in feature reconstruction and the economical volume of the parameters. It also showed that Serial GANs have great potential in decoupling image transformation.
25

Awan, Hafiz Shakeel Ahmad, and Muhammad Tariq Mahmood. "Deep Dynamic Weights for Underwater Image Restoration." Journal of Marine Science and Engineering 12, no. 7 (2024): 1208. http://dx.doi.org/10.3390/jmse12071208.

Annotation:
Underwater imaging presents unique challenges, notably color distortions and reduced contrast due to light attenuation and scattering. Most underwater image enhancement methods first use linear transformations for color compensation and then enhance the image. We observed that linear transformation for color compensation is not suitable for certain images. For such images, non-linear mapping is a better choice. This paper introduces a unique underwater image restoration approach leveraging a streamlined convolutional neural network (CNN) for dynamic weight learning for linear and non-linear mapping. In the first phase, a classifier is applied that classifies the input images as Type I or Type II. In the second phase, we use the Deep Line Model (DLM) for Type-I images and the Deep Curve Model (DCM) for Type-II images. For mapping an input image to an output image, the DLM creatively combines color compensation and contrast adjustment in a single step and uses deep lines for transformation, whereas the DCM employs higher-order curves. Both models utilize lightweight neural networks that learn per-pixel dynamic weights based on the input image’s characteristics. Comprehensive evaluations on benchmark datasets using metrics like peak signal-to-noise ratio (PSNR) and root mean square error (RMSE) affirm our method’s effectiveness in accurately restoring underwater images, outperforming existing techniques.
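The contrast between the two mapping families can be illustrated with a hedged sketch. The paper's exact DLM and DCM parametrisations are not reproduced here; the curve model below iterates a familiar quadratic enhancement curve, and in the actual method the weights (w, b, alpha) would be predicted per pixel by the lightweight CNN:

```python
import numpy as np

def line_enhance(x, w, b):
    """'Deep line' style mapping: out = w*x + b per pixel, clipped to [0, 1]
    (w and b stand in for the per-pixel weights the CNN would learn)."""
    return np.clip(w * x + b, 0.0, 1.0)

def curve_enhance(x, alpha, iters=4):
    """'Deep curve' style mapping: iterate the quadratic curve
    x <- x + alpha * x * (1 - x); repeated application yields a
    higher-order curve that lifts mid-tones while fixing 0 and 1."""
    for _ in range(iters):
        x = x + alpha * x * (1.0 - x)
    return np.clip(x, 0.0, 1.0)

x = np.linspace(0.0, 1.0, 11)
linear = line_enhance(x, 1.2, 0.05)
curved = curve_enhance(x, alpha=0.5)
```

The classifier in the first phase would decide, per image, which of the two mappings to apply.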
26

Perju, Veaceslav, and Vladislav Cojuhari. "CENTRAL AND LOGARITHMIC CENTRAL IMAGE CHORD TRANSFORMATIONS FOR INVARIANT OBJECT RECOGNITION." Journal of Engineering Science XXVIII (1) (March 15, 2021): 38–46. https://doi.org/10.52326/jes.utm.2021.28(1).03.

Annotation:
Pattern descriptors invariant to rotation, scaling, and translation represent an important direction in the elaboration of real-time object recognition systems. In this article, new kinds of object descriptors based on the chord transformation are presented. New methods of image representation are described: the Central and Logarithmic Central Image Chord Transformations (CICT and LCICT). It is shown that the CICT operation makes it possible to achieve invariance to object rotation. When the LCICT transformation is implemented, invariance to changes in the rotation and scale of the object is achieved. The possibilities of implementing the CICT and LCICT operations are discussed, and the algorithms of these operations for contour images are presented. The possibilities of integrated implementation of CICT and LCICT operations are considered, and a generalized CICT operation for a full (halftone) image is defined. The structures of coherent optical processors that implement the basic and integral image chord transformation operations are presented.
27

Li, Dongyang, Lin Yang, Hongguang Zhang, Xiaolei Wang, Linru Ma, and Junchao Xiao. "Image-Based Insider Threat Detection via Geometric Transformation." Security and Communication Networks 2021 (September 13, 2021): 1–18. http://dx.doi.org/10.1155/2021/1777536.

Annotation:
Insider threat detection has been a challenging task for decades; existing approaches generally employ traditional generative unsupervised learning methods to produce a normal user behavior model and detect significant deviations as anomalies. However, such approaches are insufficient in precision and computational complexity. In this paper, we propose a novel insider threat detection method, Image-based Insider Threat Detector via Geometric Transformation (IGT), which converts unsupervised anomaly detection into a supervised image classification task, so that performance can be boosted via computer vision techniques. IGT uses a novel image-based feature representation of user behavior obtained by transforming audit logs into grayscale images. By applying multiple geometric transformations to these behavior grayscale images, IGT constructs a self-labelled dataset and then trains a behavior classifier to detect anomalies in a self-supervised manner. The motivation behind the proposed method is that images converted from normal behavior data may contain unique latent features which remain unchanged after geometric transformation, while malicious ones cannot. Experimental results on the CERT dataset show that IGT outperforms classical autoencoder-based unsupervised insider threat detection approaches, improving the instance-based and user-based Area under the Receiver Operating Characteristic Curve (AUROC) by 4% and 2%, respectively.
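The self-labelled dataset construction can be sketched as follows. This is an illustrative fragment; the paper's exact bank of geometric transformations is not specified here, so the familiar eight compositions of 90-degree rotations and horizontal flips are used:

```python
import numpy as np

def self_labelled(images):
    """Build a self-supervised training set: apply a fixed bank of
    geometric transformations (here the 8 compositions of 90-degree
    rotations and horizontal flips) to every behavior image, labelling
    each result with the index of the transformation applied."""
    transforms = [lambda im, k=k, f=f: np.rot90(im[:, ::-1] if f else im, k)
                  for f in (False, True) for k in range(4)]
    X, y = [], []
    for im in images:
        for label, t in enumerate(transforms):
            X.append(t(im))
            y.append(label)
    return X, y

im = np.arange(9).reshape(3, 3)
X, y = self_labelled([im])
```

A classifier trained to predict `y` from `X` then scores test images by how confidently their transformations are recognised; normal images yield confident predictions, anomalous ones do not.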
28

Martinho, Laura, José Pio, and Felipe Oliveira. "Deep Learning-Driven Parameter Adaptation for Underwater Image Restoration." Revista Eletrônica de Iniciação Científica em Computação 22, no. 1 (2024): 81–90. http://dx.doi.org/10.5753/reic.2024.4671.

Annotation:
In this paper we propose a learning-based approach to enhance underwater image quality by optimizing parameters and applying intensity transformations. Our methodology involves training a CNN regression model on diverse underwater images to learn enhancement parameters, followed by applying intensity transformation techniques. To evaluate our approach, we conducted experiments using well-known underwater image datasets from the literature, comprising real-world subaquatic images, and we propose a novel underwater image dataset composed of 276 images from turbid Amazonian rivers. The results demonstrate that our approach achieves an impressive accuracy rate on three different underwater image datasets. This high level of accuracy showcases the robustness and efficiency of our proposed method in restoring underwater images.
29

Xu, Yihao. "CNN-based image style transformation--Using VGG19." Applied and Computational Engineering 39, no. 1 (2024): 130–36. http://dx.doi.org/10.54254/2755-2721/39/20230589.

Annotation:
Neural Style Transfer is a widely used approach in the field of computer vision which aims to generate visual effects by integrating the information contained in one image into another. This paper presents an implementation of neural style transfer using TensorFlow and the VGG19 model. The proposed method involves loading and preprocessing the content and style images, extracting features from both images using the VGG19 model, and computing Gram matrices to capture the style information. A StyleContentModel class is introduced to encapsulate the style and content extraction process. Optimization is performed using the Adam optimizer, where gradients are applied to iteratively update the generated image. The number of epochs and steps per epoch can be adjusted to control the optimization process and achieve the desired results. Experiments show that the method is effective in generating an image that integrates the content of one image with the style of the other. The generated images exhibit visually appealing characteristics and showcase the potential of neural style transfer as a creative tool in image synthesis. Future work may involve exploring different variations of the style transfer algorithm, optimizing hyperparameters, and evaluating performance on a wider range of image datasets. Additionally, the integration of other deep learning architectures and techniques could further enhance the capabilities of neural style transfer.
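The Gram-matrix computation at the heart of the style representation is compact enough to show directly; a NumPy sketch for a single (H, W, C) activation map:

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix used to capture style: `features` is an (H, W, C)
    activation map; the result is the (C, C) matrix of channel-wise
    inner products, normalised by the number of spatial positions."""
    h, w, c = features.shape
    F = features.reshape(h * w, c)
    return F.T @ F / (h * w)

feat = np.random.default_rng(1).random((8, 8, 3))
G = gram_matrix(feat)
```

Because the spatial positions are summed out, the Gram matrix records which channel pairs co-activate (texture/style) while discarding where they activate (content), which is why style and content losses can be optimised jointly.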
30

Pandurangan, Durai, R. Saravana Kumar, Lukas Gebremariam, L. Arulmurugan, and S. Tamilselvan. "Combined Gray Level Transformation Technique for Low Light Color Image Enhancement." Journal of Computational and Theoretical Nanoscience 18, no. 4 (2021): 1221–26. http://dx.doi.org/10.1166/jctn.2021.9392.

Annotation:
Insufficient and poor lighting conditions affect the quality of videos and images captured by camcorders. Low-quality images decrease the performance of computer vision systems in smart traffic, video surveillance, and other imaging applications. In this paper, a combined gray level transformation technique is proposed to enhance poorly illuminated images. The technique combines log transformation, power law transformation and adaptive histogram equalization to improve the low-light illumination image estimated using the HSI color model. Finally, the enhanced illumination image is blended with the original reflectance image to obtain an enhanced color image. The paper shows that the proposed algorithm enhances various weakly illuminated images better, and with reduced computation time, compared with previous image processing techniques.
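The individual gray-level operations combined by this technique are standard; a minimal sketch for greyness values scaled to [0, 1] (plain global histogram equalization stands in for the adaptive variant used in the paper):

```python
import numpy as np

def log_transform(r):
    """Log transformation s = log(1 + r) / log(2) for r in [0, 1];
    expands dark tones and compresses bright ones."""
    return np.log1p(r) / np.log(2.0)

def gamma_transform(r, gamma=0.5):
    """Power-law transformation s = r**gamma; gamma < 1 brightens."""
    return r ** gamma

def hist_equalize(img, bins=256):
    """Global histogram equalisation of a [0, 1] greyscale image:
    map each pixel through the empirical CDF of its intensity."""
    hist, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    cdf = np.cumsum(hist) / img.size
    idx = np.clip((img * bins).astype(int), 0, bins - 1)
    return cdf[idx]
```

In the paper's pipeline these operations are applied to the illumination component of the HSI decomposition, and the result is blended back with the reflectance image.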
31

Wu, Dan. "Reversible Data Hiding for Encrypted Image Based on Arnold Transformation." MATEC Web of Conferences 173 (2018): 03088. http://dx.doi.org/10.1051/matecconf/201817303088.

Annotation:
A reversible data hiding scheme for encrypted images based on the Arnold transformation is proposed. In this scheme, the original image is divided into four sub-images by sampling, the sub-images are scrambled by the Arnold transformation using two secret keys, and the scrambled sub-images are then recombined into an encrypted image. Subsequently, additional data is embedded into the encrypted image by modifying the difference between two adjacent pixels. Given an encrypted image containing additional data, the receiver can obtain a decrypted image using the decryption key. Moreover, with the aid of both the decryption key and the information hiding key, the receiver can extract the hidden information and recover the original image without any error. Experimental results show that the proposed scheme achieves a high payload with good image quality.
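The Arnold (cat map) scrambling step can be sketched directly. For an N x N image the map sends pixel (x, y) to ((x + y) mod N, (x + 2y) mod N); because the map is periodic, iterating it eventually restores the image, which is what makes the scheme reversible. This sketch omits the paper's sub-image sampling and two-key parametrisation:

```python
import numpy as np

def arnold_scramble(img, rounds=1):
    """Scramble an N x N image with the Arnold cat map
    (x, y) -> ((x + y) mod N, (x + 2y) mod N), applied `rounds` times."""
    n = img.shape[0]
    out = img
    for _ in range(rounds):
        scr = np.empty_like(out)
        for x in range(n):
            for y in range(n):
                scr[(x + y) % n, (x + 2 * y) % n] = out[x, y]
        out = scr
    return out

img = np.arange(16).reshape(4, 4)
scrambled = arnold_scramble(img)            # pixel positions shuffled
restored = arnold_scramble(img, rounds=3)   # the map has period 3 for N = 4
```

The number of scrambling rounds can serve as part of the key, since descrambling requires completing the map's period.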
32

Raghawendra, Bhimarao Naik, and N.Kunchur Pavan. "Image Fusion Based on Wavelet Transformation." International Journal of Engineering and Advanced Technology (IJEAT) 9, no. 5 (2020): 473–77. https://doi.org/10.35940/ijeat.D9161.069520.

Annotation:
This article is based on a MATLAB simulation of image fusion; the aim is to design and develop a MATLAB-based image processing application for fusing two images of the same scene acquired through different modalities. The application uses Discrete Wavelet Transform (DWT) and Pulse Coupled Neural Network (PCNN) techniques, and the results obtained with the two techniques are compared.
33

Barthel, Kai Uwe. "Entropy Constrained Fractal Image Coding." Fractals 05, supp01 (1997): 17–26. http://dx.doi.org/10.1142/s0218348x97000607.

Annotation:
In this paper we present an entropy constrained fractal coding scheme. In order to get high compression rates, previous fractal coders used hierarchical coding schemes with variable range block sizes. Our scheme uses constant range block sizes, but the complexity of the fractal transformations is adapted to the image contents. The entropy of the fractal code can be significantly reduced by introducing geometrical codebooks of variable size and a variable order luminance transformation. We propose a luminance transformation consisting of a unification of fractal and transform coding. With this transformation both inter- and intra- block redundancy of an image can be exploited to get higher coding gain. The coding results obtained with our new scheme are superior compared to conventional fractal and transform coding schemes.
34

Hocevar, Erwin, and Walter G. Kropatsch. "Inventing the Formula of the Trees: A Solution of the Representation of Self Similar Objects." Fractals 05, supp01 (1997): 51–64. http://dx.doi.org/10.1142/s0218348x97000632.

Annotation:
Iterated Function Systems (IFS) seem best suited to representing objects in nature, because many of them are self-similar. An IFS is a set of affine, contractive transformations. The union (the so-called collage) of the subimages generated by transforming the whole image reproduces the image again: the self-similar attractor of these transformations, which can be described by a binary image. For a fast and compact representation of such images, it would be desirable to calculate the transformations (the IFS codes) directly from the image, that is, to solve the inverse IFS problem. The solution presented in this paper directly uses the features of the self-similar image. Subsets of the entire image and of the subimage to be calculated are identified by computing the set difference between the pixels of the original and a rotated copy. The rotation and scale factor of the transformation can be computed by mapping these two subsets onto each other, if the translation part (the fixed point) is predefined. The calculation of the transformation has to be repeated for each subimage. It is proved that with this method the IFS codes can be calculated for non-convex, undistorted, self-similar images as long as the fixed point is known. An efficient algorithm for the identification of these fixed points within the image is introduced, and different ways to achieve these solutions are presented. In the conclusion, the class of images that can be coded by this method is defined, the results are summarized, the advantages and disadvantages of the method are evaluated, and possible extensions of the method are discussed.
35

Prakash, S. Om. "IMAGE STEGANOGRAPHY USING MID POINT TRANSFORMATION TECHNIQUE." INTERNATIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT 08, no. 04 (2024): 1–5. http://dx.doi.org/10.55041/ijsrem31632.

Annotation:
The project titled "Image Steganography using Mid-Point Transformation Technique" aims to explore and implement a novel approach to concealing information within digital images while preserving their visual integrity. Steganography is an age-old technique for covert communication, and this project leverages the mid-point transformation method to embed data seamlessly into images. The mid-point transformation technique involves the subtle alteration of pixel values based on the midpoint of neighboring pixels. This process ensures that the changes made to the image are imperceptible to the human eye, allowing for effective data hiding without compromising the overall visual quality. The project will focus on the development of an algorithm to encode and decode hidden information within images using the mid-point transformation technique. Implementation will be carried out using a programming language suitable for image processing, and the project aims to provide a user-friendly interface for ease of use. Key objectives include understanding the theoretical foundations of steganography, implementing the mid-point transformation algorithm, evaluating the effectiveness of the technique in terms of data capacity and visual impact, and comparing the results with existing steganographic methods. Key Words: Image Steganography, Mid Point Transformation, Steganography Techniques, Digital Image Security, Data Hiding, Information Security.
36

D, Rajeshwari, Dr Shrinivasa Naika C. L., and Dr Mohamed Rafi. "LUNG SCANS SEGMENTATION USING MARKER-CONTROLLED WATERSHED TRANSFORMATION." International Journal of Engineering Applied Sciences and Technology 7, no. 3 (2022): 152–56. http://dx.doi.org/10.33564/ijeast.2022.v07i03.024.

Annotation:
Image segmentation is the process of partitioning a digital image into multiple segments, known as sets of pixels. It is typically used to locate objects and boundaries (lines, curves, etc.) in images, and assigns a label to every pixel such that pixels with the same label share certain visual characteristics. The watershed transform partitions an image into its constitutive regions; it is easily adapted to different types of images and allows complex objects to be distinguished. The marker-controlled watershed transformation improves on the basic watershed transform by setting tags (markers) in the image. A tag can be a point, a line or an area; what matters is its location rather than its shape. Each tag represents one region of the final partition, and selecting the tags is a key factor in the resulting segmentation.
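The marker-controlled idea can be illustrated as priority flooding from the tags: pixels are absorbed into marker regions in order of increasing greyness, so region boundaries settle on greyness ridges. A toy NumPy implementation, not the paper's (real implementations additionally track watershed lines):

```python
import heapq
import numpy as np

def marker_watershed(img, markers):
    """Marker-controlled watershed as priority flooding: regions grow
    out from the nonzero marker labels in order of increasing greyness,
    so each pixel joins the basin it can reach over the lowest barrier."""
    labels = markers.copy()
    heap = [(img[i, j], i, j) for i, j in zip(*np.nonzero(markers))]
    heapq.heapify(heap)
    while heap:
        _, i, j = heapq.heappop(heap)
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if (0 <= ni < img.shape[0] and 0 <= nj < img.shape[1]
                    and labels[ni, nj] == 0):
                labels[ni, nj] = labels[i, j]    # absorb into the same basin
                heapq.heappush(heap, (img[ni, nj], ni, nj))
    return labels

# two markers separated by a bright ridge produce two basins
img = np.array([[0, 0, 5, 0, 0]] * 3, dtype=float)
markers = np.zeros((3, 5), dtype=int)
markers[1, 0], markers[1, 4] = 1, 2
lab = marker_watershed(img, markers)
```

Placing one marker per object of interest (and one for the background) is what prevents the over-segmentation that the plain watershed transform is known for.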
37

Deng, Xiaozheng, Shasha Mao, Jinyuan Yang, et al. "Multi-Class Double-Transformation Network for SAR Image Registration." Remote Sensing 15, no. 11 (2023): 2927. http://dx.doi.org/10.3390/rs15112927.

Annotation:
In SAR image registration, most existing methods treat registration as a two-class classification problem in order to construct paired training samples for the deep model. However, it is difficult to obtain a large number of given matched points directly from SAR images as training samples. Based on this, we propose a multi-class double-transformation network for SAR image registration based on the Swin Transformer. Unlike existing methods, the proposed method directly treats each key point as an independent category, constructing a multi-classification model for SAR image registration. Then, based on the key points from the reference and sensed images, respectively, a double-transformation network with two branches is designed to search for matched-point pairs. In particular, to weaken the inherent diversity between two SAR images, key points from one image are transformed to the other image, and the transformed image is used as the basic image to capture sub-images corresponding to all key points as the training and testing samples. Moreover, a precise-matching module is designed to increase the reliability of the obtained matched points by eliminating inconsistent matched-point pairs given by the two branches. Finally, a series of experiments illustrates that the proposed method achieves higher registration performance than existing methods.
38

Watcharawipha, Anirut, Nipon Theera-Umpon, and Sansanee Auephanwiriyakul. "Space Independent Image Registration Using Curve-Based Method with Combination of Multiple Deformable Vector Fields." Symmetry 11, no. 10 (2019): 1210. http://dx.doi.org/10.3390/sym11101210.

Annotation:
This paper proposes a novel curve-based or edge-based image registration technique that utilizes the curve transformation function and Gaussian function. It enables deformable image registration between images in different spaces, e.g., different color spaces or different medical image modalities. In particular, piecewise polynomial fitting is used to fit a curve and convert it to the global cubic B-spline control points. The transformation between the curves in the reference and source images is performed by using these control points. The image area is segmented with respect to the reference curve for the moving pixels. The Gaussian function, which is symmetric about the coordinates of the points of the reference curve, was used to improve the continuity in the intra- and inter-segmented areas. The overall result on curve transformation by means of the Hausdorff distance was 5.820 ± 1.127 pixels on average on several 512 × 512 synthetic images. The proposed method was compared with an ImageJ plugin, namely bUnwarpJ, and a software suite for deformable image registration and adaptive radiotherapy research, namely DIRART, to evaluate the image registration performance. The experimental results show that the proposed method yielded better image registration performance than its counterparts. On average, the proposed method reduced the root mean square error from 2970.66 before registration to 1677.94 after registration and increased the normalized cross-correlation coefficient from 91.87% before registration to 97.40% after registration.
39

Minati, Mishra. "IMAGE ENCRYPTION USING FIBONACCI-LUCAS TRANSFORMATION." International Journal on Cryptography and Information Security (IJCIS) 2, no. 3 (2020): 131–41. https://doi.org/10.5281/zenodo.3775563.

Annotation:
Secret communication techniques have been in great demand for the last 3000 years, owing to the need for information security and confidentiality at various levels of communication, such as when communicating confidential personal data, patients' medical data, countries' defence and intelligence information, examination data, etc. With advancements in image processing research, image encryption and steganographic techniques have gained popularity over other forms of hidden communication during the last few decades, and a number of image encryption models have been suggested by various researchers from time to time. In this paper, we suggest a new image encryption model based on the Fibonacci and Lucas series.
40

Zhang, Yu Jun, Mei Xiang, and Ying Tian. "An Efficient Ear Recognition Method from Two-Dimensional Images." Advanced Materials Research 1049-1050 (October 2014): 1531–35. http://dx.doi.org/10.4028/www.scientific.net/amr.1049-1050.1531.

Annotation:
An efficient ear recognition method using weighted wavelet transformation and bi-directional principal component analysis (BDPCA) was proposed. First, each ear image was decomposed into four sub-images by wavelet transformation: a low-frequency image, a vertical detail image, a horizontal detail image and a high-frequency image. The low-frequency image was then decomposed into four further sub-images, which were weighted by different coefficients and reconstructed into a single image. On this basis, features were extracted by the BDPCA method, and k-nearest neighbor classification was used for recognition. Experimental results show that the method achieves a high recognition rate and a short training time.
41

Fernández, Claudio Ignacio, Ata Haddadi, Brigitte Leblon, Jinfei Wang, and Keri Wang. "Comparison between Three Registration Methods in the Case of Non-Georeferenced Close Range of Multispectral Images." Remote Sensing 13, no. 3 (2021): 396. http://dx.doi.org/10.3390/rs13030396.

Annotation:
Cucumber powdery mildew, which is caused by Podosphaera xanthii, is a major disease with a significant economic impact on greenhouse cucumber production. It is necessary to develop a non-invasive, fast detection system for this disease. Such a system will use multispectral imagery acquired at close range with a camera attached to a mobile cart's mechanical extension. This study evaluated three image registration methods applied to non-georeferenced multispectral images acquired at close range over greenhouse cucumber plants with a MicaSense® RedEdge camera. Matching points were detected using Speeded-Up Robust Features (SURF), and outlier matching points were removed using the M-estimator Sample Consensus (MSAC) algorithm. Three geometric transformations (affine, similarity, and projective) were considered in the registration process. For each transformation, we mapped the matching points of the blue, green, red, and NIR band images into the red-edge band space and computed the root mean square error (RMSE, in pixels) to estimate the accuracy of each registration. Our results achieved an RMSE of less than 1 pixel with the similarity and affine transformations and of less than 2 pixels with the projective transformation, whatever the band image. We determined that the best registration method was the affine transformation, because its RMSE is less than 1 pixel and its RMSEs have a Gaussian distribution for all of the bands except the blue band.
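The per-transformation accuracy assessment reduces to fitting the transformation to the matched points and reporting the RMSE in pixels. A sketch for the affine case, assuming SURF/MSAC has already produced inlier matches:

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine transform mapping src -> dst point sets
    (each an (N, 2) array of pixel coordinates); returns a 2x3 matrix."""
    n = src.shape[0]
    A = np.hstack([src, np.ones((n, 1))])          # homogeneous coordinates
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return M.T

def rmse(src, dst, M):
    """Registration accuracy as root-mean-square error in pixels."""
    pred = np.hstack([src, np.ones((src.shape[0], 1))]) @ M.T
    return float(np.sqrt(np.mean(np.sum((pred - dst) ** 2, axis=1))))

# recover a known affine transformation from noiseless matches
rng = np.random.default_rng(3)
src = rng.random((10, 2)) * 100.0
M_true = np.array([[1.1, 0.2, 3.0], [-0.1, 0.9, -2.0]])
dst = np.hstack([src, np.ones((10, 1))]) @ M_true.T
M_est = fit_affine(src, dst)
```

A similarity fit would constrain the 2x2 part to a scaled rotation, and a projective fit would estimate a full 3x3 homography; the RMSE comparison across the three models is what the study uses to pick the best one.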
42

Saliby, Joe G. "Design and Implementation of Digital Image Transformation Algorithms." International Journal of Trend in Scientific Research and Development 3, no. 3 (2019): 623–31. https://doi.org/10.31142/ijtsrd22918.

Annotation:
In computer science, Digital Image Processing or DIP is the use of computer hardware and software to perform image processing and computations on digital images. Generally, digital image processing requires the use of complex algorithms, and hence can be more sophisticated from a performance perspective at doing simple tasks. Many applications exist for digital image processing, one of which is Digital Image Transformation. Basically, Digital Image Transformation or DIT is an algorithmic and mathematical function that converts one set of digital objects into another set after performing some operations. Some techniques used in DIT are image filtering; brightness, contrast, hue, and saturation adjustment; blending and dilation; histogram equalization; discrete cosine transform; discrete Fourier transform; and edge detection, among others. This paper proposes a set of digital image transformation algorithms that deal with converting digital images from one domain to another. The algorithms to be implemented are grayscale transformation, contrast and brightness adjustment, hue and saturation adjustment, histogram equalization, blurring and sharpening adjustment, blending and fading transformation, erosion and dilation transformation, and finally edge detection and extraction. As future work, some of the proposed algorithms are to be investigated with parallel processing, paving the way to make their execution faster and more scalable.
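Two of the listed transformations, sketched for [0, 1] greyscale and boolean images respectively (illustrative fragments, not the paper's implementations):

```python
import numpy as np

def adjust_brightness_contrast(img, contrast=1.0, brightness=0.0):
    """Pointwise transform s = contrast * r + brightness, clipped to [0, 1]."""
    return np.clip(contrast * img + brightness, 0.0, 1.0)

def dilate(binary):
    """One step of binary dilation with a 4-neighbour (plus-shaped)
    structuring element: a pixel becomes True if it or any of its
    4-neighbours is True."""
    out = binary.copy()
    out[1:, :] |= binary[:-1, :]
    out[:-1, :] |= binary[1:, :]
    out[:, 1:] |= binary[:, :-1]
    out[:, :-1] |= binary[:, 1:]
    return out

b = np.zeros((5, 5), dtype=bool)
b[2, 2] = True
d = dilate(b)        # the single pixel grows into a plus shape
```

Erosion is the dual operation (replace `|=` with `&=` against the neighbours), and most of the other listed transformations are similarly pointwise or local-window operations, which is what makes them natural candidates for the parallel processing mentioned as future work.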
APA, Harvard, Vancouver, ISO, and other citation styles
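Several of the point operations this abstract lists (grayscale conversion, linear contrast/brightness adjustment, histogram equalization) can be sketched in a few lines. The following is a minimal pure-Python illustration, not the paper's implementation; the BT.601 luminance weights are a common convention, not something the abstract specifies.

```python
def to_grayscale(pixel):
    """Luminance-weighted grayscale value for one (R, G, B) pixel (BT.601 weights)."""
    r, g, b = pixel
    return round(0.299 * r + 0.587 * g + 0.114 * b)

def adjust_contrast_brightness(value, alpha=1.0, beta=0.0):
    """Linear point transform out = alpha * in + beta, clipped to [0, 255]."""
    return max(0, min(255, round(alpha * value + beta)))

def equalize_histogram(values):
    """Histogram equalization for a flat list of 8-bit grayscale values."""
    hist = [0] * 256
    for v in values:
        hist[v] += 1
    # Cumulative distribution function of the intensity histogram.
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total)
    cdf_min = next(c for c in cdf if c > 0)
    n = len(values)
    if n == cdf_min:
        return values[:]  # constant image: nothing to equalize
    # Map each value so the output CDF is approximately linear over [0, 255].
    return [round((cdf[v] - cdf_min) / (n - cdf_min) * 255) for v in values]
```

Equalization stretches the cumulative histogram to span the full 8-bit range, which is why a low-contrast image gains contrast without any per-pixel parameter tuning.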
43

Mann Parminder Singh, Navjot. "Medial Axis Transformation based Skeletonization of Image Patterns using Image Processing Techniques." International Journal of Science and Research (IJSR) 1, no. 3 (2012): 220–23. http://dx.doi.org/10.21275/ijsr12120344.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
44

Grimes, David B., and Rajesh P. N. Rao. "Bilinear Sparse Coding for Invariant Vision." Neural Computation 17, no. 1 (2005): 47–73. http://dx.doi.org/10.1162/0899766052530893.

Full text of the source
Annotation:
Recent algorithms for sparse coding and independent component analysis (ICA) have demonstrated how localized features can be learned from natural images. However, these approaches do not take image transformations into account. We describe an unsupervised algorithm for learning both localized features and their transformations directly from images using a sparse bilinear generative model. We show that from an arbitrary set of natural images, the algorithm produces oriented basis filters that can simultaneously represent features in an image and their transformations. The learned generative model can be used to translate features to different locations, thereby reducing the need to learn the same feature at multiple locations, a limitation of previous approaches to sparse coding and ICA. Our results suggest that by explicitly modeling the interaction between local image features and their transformations, the sparse bilinear approach can provide a basis for achieving transformation-invariant vision.
APA, Harvard, Vancouver, ISO, and other citation styles
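A sparse bilinear generative model of the kind described in this abstract has the general form below (generic notation, not necessarily the authors' exact symbols): an image is generated from learned basis vectors weighted by two coefficient vectors, one coding features and one coding their transformations.

```latex
I \;=\; \sum_{i=1}^{m} \sum_{j=1}^{n} w_{ij}\, x_i\, y_j
```

Here $w_{ij}$ are the learned basis vectors, $x_i$ are (sparse) feature coefficients, and $y_j$ are transformation coefficients. Fixing $\mathbf{y}$ and varying $\mathbf{x}$ selects which features are present, while fixing $\mathbf{x}$ and varying $\mathbf{y}$ moves (e.g., translates) those features, which is what allows a single learned feature to be reused at multiple locations.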
45

Бродская, Юлия Алексеевна, and Светлана Ивановна Яковлева. "KONIGSBERG-KALININGRAD IMAGE TRANSFORMATION." Вестник Тверского государственного университета. Серия: География и геоэкология, no. 1(33) (March 23, 2021): 82–92. http://dx.doi.org/10.26456/2226-7719-2021-1-82-92.

Full text of the source
Annotation:
The aim of the research is to analyze the transformation of the urban image using the example of Königsberg-Kaliningrad. The novelty of the research lies in the application of the urban planning scheme of K. Lynch (1960) to the analysis of the multi-temporal spatial structure of the large old German city of Königsberg and of post-war (modern) Kaliningrad.
APA, Harvard, Vancouver, ISO, and other citation styles
46

Sánchez-Morales, Maria-Eugenia, José-Trinidad Guillen-Bonilla, Héctor Guillen-Bonilla, Alex Guillen-Bonilla, Jorge Aguilar-Santiago, and Maricela Jiménez-Rodríguez. "Vectorial Image Representation for Image Classification." Journal of Imaging 10, no. 2 (2024): 48. http://dx.doi.org/10.3390/jimaging10020048.

Full text of the source
Annotation:
This paper proposes the transformation S → C→, where S is a digital gray-level image and C→ is a vector expressed through the textural space. The proposed transformation is termed Vectorial Image Representation on the Texture Space (VIR-TS), given that the digital image S is represented by the textural vector C→. This vector C→ contains all of the local texture characteristics of the image of interest, and the texture unit T→ has a vectorial character, since it is defined through the solution of a homogeneous equation system. For the application of this transformation, a new classifier for multiple classes is proposed in the texture space, where the vector C→ is employed as a feature vector. To verify its efficiency, it was experimentally deployed for the recognition of digital images of tree bark, obtaining effective performance. In these experiments, the parametric value λ employed to solve the homogeneous equation system does not affect the results of the image classification. The VIR-TS transform has potential applications in specific tasks, such as locating missing persons and the analysis and classification of diagnostic and medical images.
APA, Harvard, Vancouver, ISO, and other citation styles
47

Slobodyanyuk, N. L. "ANNA AKHMATOVA’S CYBER IMAGE IN THE PERCEPTION OF THE MODERN READER." Vestnik of the Kyrgyz-Russian Slavic University 25, no. 2 (2025): 151–58. https://doi.org/10.36979/1694-500x-2025-25-2-151-158.

Full text of the source
Annotation:
The article considers the factors transforming the functioning of the information and communication space of modern society, their reflection in the modern literary process, and the fixation of changes in the content of culture in media art texts in the digital space. Phenomena of traditional culture, appearing in a new, digital context, undergo significant transformations. The article studies the projections of both cultural constructs and the cultural environment into cyberspace, where they take on the character of content not limited by a fixed form. The object of the study is the image of the writer, which is open to new connotations. The image of the writer is influenced by the digital environment itself, which is eclectic by nature and structured not according to the principles of traditional culture, which as a rule has fixed niches. The image of the writer, his work as a whole, or elements of his work fall into a content space that cannot be called authentic. There is a transformation of cultural codes, which in turn are themselves transformed into cyber images. A cyber image is a more malleable semantic formation than an image existing in the context of the traditional cultural field. The image of Anna Andreyevna Akhmatova can serve as an illustration of the transformation of the traditional image of a writer into a cyber image.
APA, Harvard, Vancouver, ISO, and other citation styles
48

Foroughi Sabzevar, Mohsen, Masoud Gheisari, and James Lo. "Development and Assessment of a Sensor-Based Orientation and Positioning Approach for Decreasing Variation in Camera Viewpoints and Image Transformations at Construction Sites." Applied Sciences 10, no. 7 (2020): 2305. http://dx.doi.org/10.3390/app10072305.

Full text of the source
Annotation:
Image matching techniques offer valuable opportunities for the construction industry. Image matching, a fundamental process in computer vision, is required for different purposes such as object and scene recognition, video data mining, and reconstruction of three-dimensional (3D) objects. During the image matching process, two images that are captured from a scene at random (i.e., from different positions and orientations) are compared using image matching algorithms in order to identify their similarity. However, this process is complex and error prone, because pictures captured from a scene at random vary in viewpoint. As a result, main features in the images, such as the position, orientation, and scale of objects, are transformed, and the matching algorithms sometimes cannot correctly identify the similarity between the images. Logically, if these features remain unchanged during the picture capturing process, then image transformations are reduced, similarity increases, and consequently the chances of the algorithms successfully completing the image matching process increase. One way to improve these chances is to hold the camera at a fixed viewpoint. However, in messy, dusty, and temporary locations such as construction sites, holding the camera at a fixed viewpoint is not always feasible. Is there any way to repeat and retrieve the camera's viewpoints during different captures at locations such as construction sites? This study developed and evaluated an orientation and positioning approach that decreased the variation in camera viewpoints and image transformation on construction sites. The results showed that images captured using this approach exhibited less image transformation than images captured without it.
APA, Harvard, Vancouver, ISO, and other citation styles
49

Wang, Hexiang, Fengqi Liu, Qianyu Zhou, Ran Yi, Xin Tan, and Lizhuang Ma. "Continuous Piecewise-Affine Based Motion Model for Image Animation." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 6 (2024): 5427–35. http://dx.doi.org/10.1609/aaai.v38i6.28351.

Full text of the source
Annotation:
Image animation aims to bring static images to life according to driving videos and create engaging visual content that can be used for various purposes such as animation, entertainment, and education. Recent unsupervised methods utilize affine and thin-plate spline transformations based on keypoints to transfer the motion in driving frames to the source image. However, limited by the expressive power of the transformations used, these methods always produce poor results when the gap between the motion in the driving frame and the source image is large. To address this issue, we propose to model motion from the source image to the driving frame in highly-expressive diffeomorphism spaces. Firstly, we introduce Continuous Piecewise-Affine based (CPAB) transformation to model the motion and present a well-designed inference algorithm to generate CPAB transformation from control keypoints. Secondly, we propose a SAM-guided keypoint semantic loss to further constrain the keypoint extraction process and improve the semantic consistency between the corresponding keypoints on the source and driving images. Finally, we design a structure alignment loss to align the structure-related features extracted from driving and generated images, thus helping the generator generate results that are more consistent with the driving action. Extensive experiments on four datasets demonstrate the effectiveness of our method against state-of-the-art competitors quantitatively and qualitatively. Code will be publicly available at: https://github.com/DevilPG/AAAI2024-CPABMM.
APA, Harvard, Vancouver, ISO, and other citation styles
50

Kalthom Adam H. Ibrahim, Mohammed Abdallah Almaleeh, Moaawia Mohamed Ahmed, and Dalia Mahmoud Adam. "Images Processing for Segmentation Neisseria Bacteria Cells." World Journal of Advanced Research and Reviews 12, no. 3 (2021): 573–79. http://dx.doi.org/10.30574/wjarr.2021.12.3.0672.

Full text of the source
Annotation:
This paper introduces the segmentation of Neisseria bacterial meningitis images. Image segmentation is the operation of identifying homogeneous regions in a digital image. A basic idea behind segmentation is thresholding, which can be classified into single thresholding and multiple thresholding. To perform image segmentation, transformation and morphological operation processes are used to segment the images; in addition, image transformation, edge detection, a filling operation, structuring-element design, and arithmetic operation techniques are used to implement the segmentation. Image segmentation represents a significant step in extracting image features and diagnosing disease with computer software applications.
APA, Harvard, Vancouver, ISO, and other citation styles
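The pipeline this abstract describes (single-threshold segmentation followed by morphological operations) can be sketched as follows. This is a generic pure-Python illustration with a 3×3 square structuring element, not the paper's implementation; the threshold value and element shape are illustrative assumptions.

```python
def threshold(img, t):
    """Single-threshold segmentation: 1 for foreground (value >= t), else 0."""
    return [[1 if v >= t else 0 for v in row] for row in img]

def dilate(mask):
    """Binary dilation with a 3x3 square structuring element."""
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # A pixel turns on if any pixel in its 3x3 neighborhood is on.
            if any(mask[y + dy][x + dx]
                   for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                   if 0 <= y + dy < h and 0 <= x + dx < w):
                out[y][x] = 1
    return out

def erode(mask):
    """Binary erosion with a 3x3 square element; the border counts as background."""
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # A pixel stays on only if its entire 3x3 neighborhood is on.
            if all(mask[y + dy][x + dx]
                   for dy in (-1, 0, 1) for dx in (-1, 0, 1)):
                out[y][x] = 1
    return out
```

Composing erosion with dilation (morphological opening) removes isolated noise pixels from a thresholded mask while preserving the shape of larger cell regions, which is the usual motivation for pairing these operations in segmentation.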