Academic literature on the topic 'Domain of images'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Domain of images.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Domain of images"

1

Nuss, Martin C., and Rick L. Morrison. "Time-domain images." Optics Letters 20, no. 7 (April 1, 1995): 740. http://dx.doi.org/10.1364/ol.20.000740.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Vasconcelos, Ivan, Paul Sava, and Huub Douma. "Nonlinear extended images via image-domain interferometry." GEOPHYSICS 75, no. 6 (November 2010): SA105–SA115. http://dx.doi.org/10.1190/1.3494083.

Full text
Abstract:
Wave-equation, finite-frequency imaging and inversion still face many challenges in addressing the inversion of highly complex velocity models as well as in dealing with nonlinear imaging (e.g., migration of multiples, amplitude-preserving migration). Extended images (EIs) are particularly important for designing image-domain objective functions aimed at addressing standing issues in seismic imaging, such as two-way migration velocity inversion or imaging/inversion using multiples. General one- and two-way representations for scattered wavefields can describe and analyze EIs obtained in wave-equation imaging. We have developed a formulation that explicitly connects the wavefield correlations done in seismic imaging with the theory and practice of seismic interferometry. In light of this connection, we define EIs as locally scattered fields reconstructed by model-dependent, image-domain interferometry. Because they incorporate the same one- and two-way scattering representations used for seismic interferometry, the reciprocity-based EIs can in principle account for all possible nonlinear effects in the imaging process, i.e., migration of multiples and amplitude corrections. In this case, the practice of two-way imaging departs considerably from the one-way approach. We have studied the differences between these approaches in the context of nonlinear imaging, analyzing the differences in the wavefield extrapolation steps as well as in imposing the extended imaging conditions. When invoking single-scattering effects and ignoring amplitude effects in generating EIs, the one- and two-way approaches become essentially the same as those used in today’s migration practice, with the straightforward addition of space and time lags in the correlation-based imaging condition. Our formal description of the EIs and the insight that they are scattered fields in the image domain may be useful in further development of imaging and inversion methods in the context of linear, migration-based velocity inversion or in more sophisticated image-domain nonlinear inverse scattering approaches.
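As an illustration of the correlation-based extended imaging condition with space lags mentioned in this abstract, the following minimal numpy sketch computes a horizontal space-lag extended image E(x, λ) = Σ_t S(t, x−λ) R(t, x+λ) for a single depth level; the wavefields, grid, and lag range are hypothetical placeholders rather than the authors' implementation.

```python
import numpy as np

def spacelag_extended_image(src_wavefield, rec_wavefield, max_lag):
    """Zero-time, horizontal space-lag imaging condition for one depth level.

    src_wavefield, rec_wavefield : arrays of shape (nt, nx)
        Source- and receiver-side wavefields extrapolated to the same depth.
    max_lag : int
        Maximum horizontal lag (in grid points) to evaluate.

    Returns an array of shape (nx, 2 * max_lag + 1): the extended image
    E(x, lambda) = sum_t S(t, x - lambda) * R(t, x + lambda).
    """
    nt, nx = src_wavefield.shape
    lags = range(-max_lag, max_lag + 1)
    ext_image = np.zeros((nx, len(lags)))
    for j, lam in enumerate(lags):
        for x in range(nx):
            xs, xr = x - lam, x + lam
            if 0 <= xs < nx and 0 <= xr < nx:
                ext_image[x, j] = np.sum(src_wavefield[:, xs] * rec_wavefield[:, xr])
    return ext_image

# Hypothetical usage with random wavefields standing in for extrapolated data.
rng = np.random.default_rng(0)
S = rng.standard_normal((500, 200))   # nt=500 time samples, nx=200 positions
R = rng.standard_normal((500, 200))
E = spacelag_extended_image(S, R, max_lag=10)
print(E.shape)  # (200, 21)
```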
APA, Harvard, Vancouver, ISO, and other styles
3

Wang, Ximei, Liang Li, Weirui Ye, Mingsheng Long, and Jianmin Wang. "Transferable Attention for Domain Adaptation." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 5345–52. http://dx.doi.org/10.1609/aaai.v33i01.33015345.

Full text
Abstract:
Recent work in domain adaptation bridges different domains by adversarially learning a domain-invariant representation that cannot be distinguished by a domain discriminator. Existing methods of adversarial domain adaptation mainly align the global images across the source and target domains. However, it is obvious that not all regions of an image are transferable, while forcefully aligning the untransferable regions may lead to negative transfer. Furthermore, some of the images are significantly dissimilar across domains, resulting in weak image-level transferability. To this end, we present Transferable Attention for Domain Adaptation (TADA), focusing our adaptation model on transferable regions or images. We implement two types of complementary transferable attention: transferable local attention generated by multiple region-level domain discriminators to highlight transferable regions, and transferable global attention generated by a single image-level domain discriminator to highlight transferable images. Extensive experiments validate that our proposed models exceed state-of-the-art results on standard domain adaptation datasets.
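One generic way to turn domain-discriminator outputs into the kind of transferability attention described above is to give higher weight to regions whose domain the discriminator cannot decide. The numpy sketch below uses a simple entropy-based weighting as a hedged illustration; it is not necessarily TADA's exact formulation.

```python
import numpy as np

def transferability_weights(domain_probs, eps=1e-8):
    """Entropy-based transferability weights for image regions.

    domain_probs : array of shape (n_regions,)
        Output of a region-level domain discriminator: the predicted
        probability that each region comes from the source domain.

    Regions the discriminator cannot classify (p ~ 0.5) have high binary
    entropy and are treated as more transferable, so they receive larger
    weights. This is a generic illustration, not the exact weighting in TADA.
    """
    p = np.clip(domain_probs, eps, 1.0 - eps)
    entropy = -(p * np.log(p) + (1.0 - p) * np.log(1.0 - p))  # binary entropy
    return 1.0 + entropy / np.log(2.0)  # normalized so weights lie in [1, 2]

print(transferability_weights(np.array([0.05, 0.5, 0.95])))
```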
APA, Harvard, Vancouver, ISO, and other styles
4

Divel, Sarah E., and Norbert J. Pelc. "Accurate Image Domain Noise Insertion in CT Images." IEEE Transactions on Medical Imaging 39, no. 6 (June 2020): 1906–16. http://dx.doi.org/10.1109/tmi.2019.2961837.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Li, Linhao, Zhiqiang Zhou, Bo Wang, Lingjuan Miao, Zhe An, and Xiaowu Xiao. "Domain Adaptive Ship Detection in Optical Remote Sensing Images." Remote Sensing 13, no. 16 (August 10, 2021): 3168. http://dx.doi.org/10.3390/rs13163168.

Full text
Abstract:
With the successful application of the convolutional neural network (CNN), significant progress has been made by CNN-based ship detection methods. However, they often face considerable difficulties when applied to a new domain where the imaging condition changes significantly. Although training with the two domains together can solve this problem to some extent, the large domain shift will lead to sub-optimal feature representations, and thus weaken the generalization ability on both domains. In this paper, a domain adaptive ship detection method is proposed to better detect ships between different domains. Specifically, the proposed method minimizes the domain discrepancies via both image-level adaption and instance-level adaption. In image-level adaption, we use multiple receptive field integration and channel domain attention to enhance the feature’s resistance to scale and environmental changes, respectively. Moreover, a novel boundary regression module is proposed in instance-level adaption to correct the localization deviation of the ship proposals caused by the domain shift. Compared with conventional regression approaches, the proposed boundary regression module is able to make more accurate predictions via the effective extreme point features. The two adaption components are implemented by learning the corresponding domain classifiers respectively in an adversarial training way, thereby obtaining a robust model suitable for both of the two domains. Experiments on both supervised and unsupervised domain adaption scenarios are conducted to verify the effectiveness of the proposed method.
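The adversarial training of the image-level and instance-level domain classifiers mentioned in this abstract is commonly implemented with a gradient reversal layer, which is the identity in the forward pass and flips the gradient sign in the backward pass. The PyTorch sketch below shows that standard building block; it is a generic illustration, not the authors' code, and the feature dimensions are hypothetical.

```python
import torch
from torch.autograd import Function

class GradReverse(Function):
    """Identity in the forward pass; multiplies gradients by -alpha backward."""

    @staticmethod
    def forward(ctx, x, alpha):
        ctx.alpha = alpha
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse the gradient flowing back into the feature extractor so it
        # learns domain-invariant features while the domain classifier tries
        # to tell the domains apart.
        return -ctx.alpha * grad_output, None

def grad_reverse(x, alpha=1.0):
    return GradReverse.apply(x, alpha)

# Hypothetical usage: features -> reversed gradient -> domain classifier.
feats = torch.randn(4, 256, requires_grad=True)
domain_logits = torch.nn.Linear(256, 2)(grad_reverse(feats, alpha=0.5))
domain_logits.sum().backward()
print(feats.grad.shape)  # torch.Size([4, 256])
```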
APA, Harvard, Vancouver, ISO, and other styles
6

Hayder, Israa M., Hussain A. Younis, and Hameed Abdul-Kareem Younis. "Digital Image Enhancement Gray Scale Images In Frequency Domain." Journal of Physics: Conference Series 1279 (July 2019): 012072. http://dx.doi.org/10.1088/1742-6596/1279/1/012072.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

De, Kanjar, and V. Masilamani. "Image Sharpness Measure for Blurred Images in Frequency Domain." Procedia Engineering 64 (2013): 149–58. http://dx.doi.org/10.1016/j.proeng.2013.09.086.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Fuchida, Takayasu, Sadayuki Murashima, and Hirofumi Nakamura. "Domain search using shrunken images for fractal image compression." Japan Journal of Industrial and Applied Mathematics 22, no. 2 (June 2005): 205–22. http://dx.doi.org/10.1007/bf03167438.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Bernstein, Gary M., and Daniel Gruen. "Resampling Images in Fourier Domain." Publications of the Astronomical Society of the Pacific 126, no. 937 (March 2014): 287–95. http://dx.doi.org/10.1086/675812.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Buzzelli, Marco. "Recent Advances in Saliency Estimation for Omnidirectional Images, Image Groups, and Video Sequences." Applied Sciences 10, no. 15 (July 27, 2020): 5143. http://dx.doi.org/10.3390/app10155143.

Full text
Abstract:
We present a review of methods for automatic estimation of visual saliency: the perceptual property that makes specific elements in a scene stand out and grab the attention of the viewer. We focus on domains that are especially recent and relevant, as they make saliency estimation particularly useful and/or effective: omnidirectional images, image groups for co-saliency, and video sequences. For each domain, we perform a selection of recent methods, we highlight their commonalities and differences, and describe their unique approaches. We also report and analyze the datasets involved in the development of such methods, in order to reveal additional peculiarities of each domain, such as the representation used for the ground truth saliency information (scanpaths, saliency maps, or salient object regions). We define domain-specific evaluation measures, and provide quantitative comparisons on the basis of common datasets and evaluation criteria, highlighting the different impact of existing approaches on each domain. We conclude by synthesizing the emerging directions for research in the specialized literature, which include novel representations for omnidirectional images, inter- and intra- image saliency decomposition for co-saliency, and saliency shift for video saliency estimation.
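To make the evaluation side of such surveys concrete, the sketch below computes Normalized Scanpath Saliency (NSS), one widely used fixation-based saliency metric; it is a generic example rather than one of the domain-specific measures defined in the paper, and the prediction and fixation maps are synthetic placeholders.

```python
import numpy as np

def nss(saliency_map, fixation_map):
    """Normalized Scanpath Saliency.

    saliency_map : 2D float array, predicted saliency.
    fixation_map : 2D boolean array, True at human fixation locations.

    The saliency map is z-scored and averaged over fixated pixels; higher
    values mean the prediction puts more mass where viewers actually looked.
    """
    s = (saliency_map - saliency_map.mean()) / (saliency_map.std() + 1e-8)
    return s[fixation_map].mean()

rng = np.random.default_rng(1)
pred = rng.random((60, 80))
fix = np.zeros((60, 80), dtype=bool)
fix[rng.integers(0, 60, 20), rng.integers(0, 80, 20)] = True
print(nss(pred, fix))
```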
APA, Harvard, Vancouver, ISO, and other styles
More sources

Dissertations / Theses on the topic "Domain of images"

1

Thornström, Johan. "Domain Adaptation of Unreal Images for Image Classification." Thesis, Linköpings universitet, Datorseende, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-165758.

Full text
Abstract:
Deep learning has been intensively researched in computer vision tasks like image classification. Collecting and labeling images that these neural networks are trained on is labor-intensive, which is why alternative methods of collecting images are of interest. Virtual environments allow rendering images and automatic labeling, which could speed up the process of generating training data and reduce costs. This thesis studies the problem of transfer learning in image classification when the classifier has been trained on rendered images using a game engine and tested on real images. The goal is to render images using a game engine to create a classifier that can separate images depicting people wearing civilian clothing or camouflage. The thesis also studies how domain adaptation techniques using generative adversarial networks could be used to improve the performance of the classifier. Experiments show that it is possible to generate images that can be used for training a classifier capable of separating the two classes. However, the experiments with domain adaptation were unsuccessful. It is instead recommended to improve the quality of the rendered images in terms of features used in the target domain to achieve better results.
APA, Harvard, Vancouver, ISO, and other styles
2

Manamasa, Krishna Himaja. "Domain adaptation from 3D synthetic images to real images." Thesis, Blekinge Tekniska Högskola, Institutionen för datavetenskap, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-19303.

Full text
Abstract:
Background. Domain adaptation is described as a model learning from a source data distribution and performing well on the target data. This concept of domain adaptation is applied to assembly-line production tasks to perform automatic quality inspection. Objectives. The aim of this master thesis is to apply this concept of 3D domain adaptation from synthetic images to real images. It is an attempt to bridge the gap between different domains (synthetic and real point cloud images) by implementing deep learning models that learn from synthetic 3D point clouds (CAD model images) and perform well on the actual 3D point clouds (3D camera images). Methods. Through the course of this thesis project, various methods for understanding the data and analyzing it to bridge the gap between CAD and CAM and make them similar are looked into. Literature review and controlled experiments are the research methodologies followed during implementation. In this project, we experiment with four different deep learning models on the generated data and compare their performance to determine which model performs best for the data. Results. The results are explained through metrics, i.e., accuracy and training time, which were the outcomes of each of the deep learning models after the experiment. These metrics are illustrated in the form of graphs for comparative analysis between the models on which the data is trained and tested. PointDAN showed better results with higher accuracy compared to the other three models. Conclusions. The results attained show that domain adaptation from synthetic images to real images is possible with the data generated. The PointDAN deep learning model, which focuses on local and global feature alignment with single-view point data, shows better results with our data.
APA, Harvard, Vancouver, ISO, and other styles
3

VALE, EDUARDO ESTEVES. "ENHANCEMENT OF IMAGES IN THE TRANSFORM DOMAIN." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2006. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=8237@1.

Full text
Abstract:
This dissertation is aimed at the development of new enhancement techniques applied in the transform domain. The study of two-dimensional transforms motivated the development of techniques based on these mathematical tools. A comparative analysis between enhancement methods in the spatial domain and in the transform domain revealed the advantages of using transforms. A new enhancement technique in the Discrete Cosine Transform (DCT) domain is proposed and analysed. The results showed that this new proposal is less affected by noise and enhances the image more than other techniques reported in the literature. In addition, a strategy to eliminate the darkening effect of enhancement by alpha-rooting is considered. A new enhancement technique in the Discrete Wavelet Transform (DWT) domain is also presented. Simulation results showed that the enhanced images have better visual quality than those reported in the literature and are less affected by noise. Moreover, the choice of the enhancement parameter is simplified.
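Alpha-rooting, the classic DCT-domain enhancement technique whose darkening effect this dissertation addresses, rescales each transform coefficient by a power of its magnitude relative to the DC term. The sketch below is the textbook form of alpha-rooting in Python with scipy, offered as a hedged illustration rather than the modified method proposed in the thesis.

```python
import numpy as np
from scipy.fft import dctn, idctn

def alpha_rooting(image, alpha=0.85):
    """Classic alpha-rooting enhancement in the DCT domain.

    Each coefficient C(u, v) is replaced by C * (|C| / |DC|)^(alpha - 1), which
    boosts high-frequency detail relative to low frequencies for alpha < 1
    while leaving the DC term (overall brightness) unchanged.
    """
    coeffs = dctn(image.astype(float), norm="ortho")
    dc = coeffs[0, 0]
    mags = np.abs(coeffs)
    enhanced = np.sign(coeffs) * np.abs(dc) * (mags / (np.abs(dc) + 1e-12)) ** alpha
    enhanced[0, 0] = dc  # keep the mean intensity unchanged
    out = idctn(enhanced, norm="ortho")
    return np.clip(out, 0, 255)

# Hypothetical usage on a random grayscale image.
rng = np.random.default_rng(2)
img = rng.integers(0, 256, size=(64, 64)).astype(float)
print(alpha_rooting(img, alpha=0.9).shape)  # (64, 64)
```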
APA, Harvard, Vancouver, ISO, and other styles
4

Grahn, Fredrik, and Kristian Nilsson. "Object Detection in Domain Specific Stereo-Analysed Satellite Images." Thesis, Linköpings universitet, Datorseende, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-159917.

Full text
Abstract:
Given satellite images with accompanying pixel classifications and elevation data, we propose different solutions to object detection. The first method uses hierarchical clustering for segmentation and then employs different methods of classification. One of these classification methods used domain knowledge to classify objects while the other used Support Vector Machines. Additionally, a combination of three Support Vector Machines was used in a hierarchical structure, which outperformed the regular Support Vector Machine method in most of the evaluation metrics. The second approach is more conventional, with different types of Convolutional Neural Networks. A segmentation network was used as well as a few detection networks and different fusions between these. The Convolutional Neural Network approach proved to be the better of the two in terms of precision and recall, but the clustering approach was not far behind. This work was done using a relatively small amount of data, which potentially could have impacted the results of the Machine Learning models in a negative way.
APA, Harvard, Vancouver, ISO, and other styles
5

Soukal, David. "Advanced steganographic and steganalytic methods in the spatial domain." Diss., Online access via UMI:, 2006.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Naraharisetti, Sahasan. "Region aware DCT domain invisible robust blind watermarking for color images." [Denton, Tex.]: University of North Texas, 2008. http://digital.library.unt.edu/permalink/meta-dc-9748.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Naraharisetti, Sahasan. "Region aware DCT domain invisible robust blind watermarking for color images." Thesis, University of North Texas, 2008. https://digital.library.unt.edu/ark:/67531/metadc9748/.

Full text
Abstract:
The multimedia revolution has made a strong impact on our society. The explosive growth of the Internet and the access to this digital information generate new opportunities and challenges. The ease of editing and duplication in the digital domain created the concern of copyright protection for content providers. Various schemes to embed secondary data in digital media are investigated to preserve copyright and to discourage unauthorized duplication, where digital watermarking is a viable solution. This thesis proposes a novel invisible watermarking scheme: a discrete cosine transform (DCT) domain based watermark embedding and blind extraction algorithm for copyright protection of color images. Testing of the proposed watermarking scheme's robustness and security via different benchmarks proves its resilience to digital attacks. The detector's response, PSNR, and RMSE results show that our algorithm has a better security performance than most of the existing algorithms.
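As a hedged illustration of how blind DCT-domain watermarking of this kind typically works (a generic scheme in the same spirit, not the region-aware method proposed in the thesis), the sketch below hides one bit per 8×8 block in the relative order of two mid-frequency coefficients and recovers it without the original image; the coefficient positions and margin are arbitrary choices.

```python
import numpy as np
from scipy.fft import dctn, idctn

# Two mid-frequency positions inside each 8x8 block; a hypothetical choice.
P1, P2 = (3, 2), (2, 3)

def embed_bit(block, bit, margin=5.0):
    """Encode one bit in the ordering of two mid-frequency DCT coefficients."""
    c = dctn(block, norm="ortho")
    lo, hi = min(c[P1], c[P2]), max(c[P1], c[P2])
    if bit:   # bit 1 -> coefficient at P1 strictly larger
        c[P1], c[P2] = hi + margin, lo
    else:     # bit 0 -> coefficient at P2 strictly larger
        c[P1], c[P2] = lo, hi + margin
    # A real scheme would also clip/quantize pixels, which can perturb
    # coefficients; this float round trip keeps the sketch exact.
    return idctn(c, norm="ortho")

def extract_bit(block):
    """Blind extraction: only the watermarked block is needed."""
    c = dctn(block, norm="ortho")
    return int(c[P1] > c[P2])

rng = np.random.default_rng(3)
block = rng.integers(0, 256, size=(8, 8)).astype(float)
for b in (0, 1):
    assert extract_bit(embed_bit(block, b)) == b
print("round-trip ok")
```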
APA, Harvard, Vancouver, ISO, and other styles
8

Chakravarthy, Chinna Narayana Swamy Thrilok. "Combinational Watermarking for Medical Images." Scholar Commons, 2015. http://scholarcommons.usf.edu/etd/5833.

Full text
Abstract:
Digitization of medical data has become a very important part of the modern healthcare system. Data can be transmitted easily at any time to anywhere in the world using the Internet to get the best diagnosis possible for a patient. This digitized medical data must be protected at all times to preserve doctor-patient confidentiality. Watermarking can be used as an effective tool to achieve this. In this research project, image watermarking is performed both in the spatial domain and the frequency domain to embed a shared image with the medical image data and the patient data, which includes the patient identification number. For the proposed system, Structural Similarity (SSIM) is used as an index to measure the quality of the watermarking process instead of Peak Signal to Noise Ratio (PSNR), since SSIM takes into account the visual perception of the images, whereas PSNR uses only the intensity levels to measure the quality of the watermarking process. The system response under ideal conditions as well as under the influence of noise was measured and the results were analyzed.
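Since SSIM is the quality index used here, the sketch below shows a simplified single-window version of the standard SSIM formula in numpy; real implementations (and presumably the one used in this thesis) average the index over local sliding windows, so this is only an illustration of the formula itself.

```python
import numpy as np

def ssim_global(x, y, data_range=255.0, k1=0.01, k2=0.03):
    """Simplified global SSIM between two grayscale images of equal shape.

    Standard SSIM is computed over local Gaussian windows and averaged; this
    single-window version illustrates the formula itself.
    """
    x = x.astype(float)
    y = y.astype(float)
    c1, c2 = (k1 * data_range) ** 2, (k2 * data_range) ** 2
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / (
        (mu_x**2 + mu_y**2 + c1) * (var_x + var_y + c2)
    )

rng = np.random.default_rng(4)
img = rng.integers(0, 256, size=(64, 64))
noisy = np.clip(img + rng.normal(0, 10, img.shape), 0, 255)
print(ssim_global(img, img), ssim_global(img, noisy))  # 1.0 and a lower value
```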
APA, Harvard, Vancouver, ISO, and other styles
9

Tasar, Onur. "Des images satellites aux cartes vectorielles." Thesis, Université Côte d'Azur, 2020. http://www.theses.fr/2020COAZ4063.

Full text
Abstract:
With the help of significant technological developments over the years, it has been possible to collect massive amounts of remote sensing data. For example, the constellations of various satellites are able to capture large amounts of remote sensing images with high spatial resolution as well as rich spectral information over the globe. The availability of such a huge volume of data has opened the door to numerous applications and raised many challenges. Among these challenges, automatically generating accurate maps has become one of the most interesting and long-standing problems, since it is a crucial process for a wide range of applications in domains such as urban monitoring and management, precise agriculture, autonomous driving, and navigation. This thesis seeks to develop novel approaches to generate vector maps from remote sensing images. To this end, we split the task into two sub-stages. The former stage consists in generating raster maps from remote sensing images by performing pixel-wise classification using advanced deep learning techniques. The latter stage aims at converting raster maps to vector ones by leveraging computational geometry approaches. This thesis addresses the challenges that are commonly encountered within both stages. Although previous research has shown that convolutional neural networks (CNNs) are able to generate excellent maps when training data are representative of test data, their performance significantly drops when there exists a large distribution difference between training and test images. In the first stage of our pipeline, we mainly aim at overcoming the limited generalization abilities of CNNs to perform large-scale classification. We also explore a way of leveraging multiple data sets collected at different times with annotations for separate classes to train CNNs that can generate maps for all the classes. In the second part, we propose a method that vectorizes raster maps to integrate them into geographic information systems applications, which completes our processing pipeline. Throughout this thesis, we experiment on a large number of very high resolution satellite and aerial images. Our experiments demonstrate the robustness and scalability of the proposed methods.
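For the raster-to-vector stage described above, one simple, generic way to extract vector boundaries from a per-pixel classification map is marching-squares contour tracing; the scikit-image sketch below stands in for the thesis's computational-geometry pipeline and uses a hypothetical toy raster.

```python
import numpy as np
from skimage import measure

def raster_to_polygons(class_map, class_id, level=0.5):
    """Trace the boundary polygons of one class in a raster classification map.

    class_map : 2D integer array of per-pixel class labels.
    class_id  : the label to vectorize (e.g. the 'building' class).

    Returns a list of (N, 2) arrays of (row, col) vertices; a real pipeline
    would simplify these polygons and georeference them afterwards.
    """
    mask = (class_map == class_id).astype(float)
    return measure.find_contours(mask, level)

# Hypothetical raster map with a small rectangular "building" of class 1.
raster = np.zeros((20, 20), dtype=int)
raster[5:12, 6:15] = 1
polygons = raster_to_polygons(raster, class_id=1)
print(len(polygons), polygons[0].shape)
```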
APA, Harvard, Vancouver, ISO, and other styles
10

Mohamed, Aamer S. S. "From content-based to semantic image retrieval. Low level feature extraction, classification using image processing and neural networks, content based image retrieval, hybrid low level and high level based image retrieval in the compressed DCT domain." Thesis, University of Bradford, 2010. http://hdl.handle.net/10454/4438.

Full text
Abstract:
Digital image archiving urgently requires advanced techniques for more efficient storage and retrieval because of the increasing amount of digital images. Although JPEG supplies systems to compress image data efficiently, the problems of how to organize the image database structure for efficient indexing and retrieval, how to index and retrieve image data from the DCT compressed domain, and how to interpret image data semantically are major obstacles for further development of digital image database systems. In content-based image retrieval, image analysis is the primary step to extract useful information from image databases. The difficulty in content-based image retrieval is how to summarize the low-level features into high-level or semantic descriptors to facilitate the retrieval procedure. Such a shift toward semantic visual data learning or detection of semantic objects generates an urgent need to link the low-level features with semantic understanding of the observed visual information. To solve such a 'semantic gap' problem, an efficient way is to develop a number of classifiers to identify the presence of semantic image components that can be connected to semantic descriptors. Among various semantic objects, the human face is a very important example, which is usually also the most significant element in many images and photos. The presence of faces can usually be correlated to specific scenes with semantic inference according to a given ontology. Therefore, face detection can be an efficient tool to annotate images for semantic descriptors. In this thesis, a paradigm to process, analyze and interpret digital images is proposed. In order to speed up access to desired images, after accessing image data, image features are presented for analysis. This analysis gives not only a structure for content-based image retrieval but also the basic units for high-level semantic image interpretation. Finally, images are interpreted and classified into semantic categories by a semantic object detection and categorization algorithm.
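Indexing images directly in the DCT compressed domain, as discussed in this abstract, typically starts from features that can be read from block DCT coefficients without full decompression. The sketch below builds a DC-coefficient thumbnail and a small histogram descriptor as a hedged, generic illustration; it is not the descriptor or system developed in the thesis.

```python
import numpy as np
from scipy.fft import dctn

def dc_thumbnail_histogram(image, block=8, bins=32):
    """Compressed-domain style feature: DC coefficients of 8x8 DCT blocks.

    The DC term of each block is (up to scaling) the block mean, so the grid
    of DC terms is a thumbnail that JPEG decoders can expose cheaply. A
    histogram of this thumbnail gives a tiny retrieval descriptor.
    """
    h, w = image.shape
    h, w = h - h % block, w - w % block          # drop ragged edges
    dc = np.empty((h // block, w // block))
    for i in range(0, h, block):
        for j in range(0, w, block):
            dc[i // block, j // block] = dctn(
                image[i:i + block, j:j + block].astype(float), norm="ortho"
            )[0, 0]
    hist, _ = np.histogram(dc, bins=bins, range=(0, 8 * 255))
    return hist / max(hist.sum(), 1)

rng = np.random.default_rng(5)
img = rng.integers(0, 256, size=(64, 64))
print(dc_thumbnail_histogram(img).shape)  # (32,)
```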
APA, Harvard, Vancouver, ISO, and other styles
More sources

Books on the topic "Domain of images"

1

Elkins, James. The domain of images. Ithaca: Cornell University Press, 1999.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Jong, Steven M. de. Remote sensing image analysis: Including the spatial domain. Dordrecht: Kluwer Academic, 2004.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Merhav, Neri. Multiplication-free approximate algorithms for compressed domain linear operations on images. Palo Alto, CA: Hewlett-Packard Laboratories, Technical Publications Department, 1996.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Wdowin, Michal. Image analysis of magnetic domain structures. Manchester: University of Manchester, 1994.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Image and video processing in the compressed domain. Boca Raton: CRC Press, 2011.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Prasad, M. V., ed. Lossy image compression: Domain decomposition-based algorithms. London: Springer, 2011.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

MacOrlan, Pierre. Domaine de l'ombre: Images du fantastique social. Paris: Phébus, 2000.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
8

Zhao, Yang. Dual domain semi-fragile watermarking for image authentication. Ottawa: National Library of Canada, 2003.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
9

Jong, Steven M., and Freek D. Meer, eds. Remote Sensing Image Analysis: Including the Spatial Domain. Dordrecht: Kluwer Academic Publishers, 2005. http://dx.doi.org/10.1007/1-4020-2560-2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Jong, Steven M. De, and Freek D. Van der Meer, eds. Remote Sensing Image Analysis: Including The Spatial Domain. Dordrecht: Springer Netherlands, 2004. http://dx.doi.org/10.1007/978-1-4020-2560-0.

Full text
APA, Harvard, Vancouver, ISO, and other styles
More sources

Book chapters on the topic "Domain of images"

1

Chica-Olmo, Mario, and Francisco Abarca-Hernández. "Variogram Derived Image Texture for Classifying Remotely Sensed Images." In Remote Sensing Image Analysis: Including The Spatial Domain, 93–111. Dordrecht: Springer Netherlands, 2004. http://dx.doi.org/10.1007/978-1-4020-2560-0_6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Lombardo, Patrizia. "Edgar Allan Poe: The Domain of Artifice." In Cities, Words and Images, 1–45. London: Palgrave Macmillan UK, 2003. http://dx.doi.org/10.1057/9780230286696_1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Amayeh, Gholamreza, Soheil Amayeh, and Mohammad Taghi Manzuri. "Fingerprint Images Enhancement in Curvelet Domain." In Advances in Visual Computing, 541–50. Berlin, Heidelberg: Springer Berlin Heidelberg, 2008. http://dx.doi.org/10.1007/978-3-540-89646-3_53.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Shalev-Eyni, Sarit. "The Aural-Visual Experience in the Ashkenazi Ritual Domain of the Middle Ages." In Resounding Images, 189–204. Turnhout: Brepols Publishers, 2015. http://dx.doi.org/10.1484/m.svcma-eb.5.109333.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Bertini, M., R. Cucchiara, A. Del Bimbo, and C. Torniai. "Domain Knowledge Extension with Pictorially Enriched Ontologies." In Computer Analysis of Images and Patterns, 652–60. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/11556121_80.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Parekh, Maharshi, Shiv Bidani, and V. Santhi. "Spatial Domain Blind Watermarking for Digital Images." In Advances in Intelligent Systems and Computing, 519–27. Singapore: Springer Singapore, 2018. http://dx.doi.org/10.1007/978-981-10-7871-2_50.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Baek, Yeul-Min, Joong-Geun Kim, Dong-Chan Cho, Jin-Aeon Lee, and Whoi-Yul Kim. "Integrated Noise Modeling for Image Sensor Using Bayer Domain Images." In Computer Vision/Computer Graphics Collaboration Techniques, 413–24. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-01811-4_37.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Carvalho, Luis M. T. de, Fausto W. Acerbi, Jan G. P. W. Clevers, Leila M. G. Fonseca, and Steven M. de Jong. "Multiscale Feature Extraction from Images Using Wavelets." In Remote Sensing Image Analysis: Including The Spatial Domain, 237–70. Dordrecht: Springer Netherlands, 2004. http://dx.doi.org/10.1007/978-1-4020-2560-0_13.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Gofman, Yossi, and Nahum Kiryati. "Detecting grey level symmetry: The frequency domain approach." In Computer Analysis of Images and Patterns, 588–93. Berlin, Heidelberg: Springer Berlin Heidelberg, 1995. http://dx.doi.org/10.1007/3-540-60268-2_349.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Petersen, Henry, and Josiah Poon. "Reworking Bridging for Use within the Image Domain." In Computer Analysis of Images and Patterns, 832–39. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-03767-2_101.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Domain of images"

1

Mao, Xudong, and Qing Li. "Unpaired Multi-Domain Image Generation via Regularized Conditional GANs." In Twenty-Seventh International Joint Conference on Artificial Intelligence (IJCAI-18). California: International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/354.

Full text
Abstract:
In this paper, we study the problem of multi-domain image generation, the goal of which is to generate pairs of corresponding images from different domains. With the recent development in generative models, image generation has achieved great progress and has been applied to various computer vision tasks. However, multi-domain image generation may not achieve the desired performance due to the difficulty of learning the correspondence of different domain images, especially when the information of paired samples is not given. To tackle this problem, we propose Regularized Conditional GAN (RegCGAN), which is capable of learning to generate corresponding images in the absence of paired training data. RegCGAN is based on the conditional GAN, and we introduce two regularizers to guide the model to learn the corresponding semantics of different domains. We evaluate the proposed model on several tasks for which paired training data is not given, including the generation of edges and photos, the generation of faces with different attributes, etc. The experimental results show that our model can successfully generate corresponding images for all these tasks, while outperforming the baseline methods. We also introduce an approach of applying RegCGAN to unsupervised domain adaptation.
APA, Harvard, Vancouver, ISO, and other styles
2

He, Tao, Yuan-Fang Li, Lianli Gao, Dongxiang Zhang, and Jingkuan Song. "One Network for Multi-Domains: Domain Adaptive Hashing with Intersectant Generative Adversarial Networks." In Twenty-Eighth International Joint Conference on Artificial Intelligence (IJCAI-19). California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/344.

Full text
Abstract:
With the recent explosive increase of digital data, image recognition and retrieval have become critical practical applications. Hashing is an effective solution to this problem, due to its low storage requirement and high query speed. However, most past works focus on hashing in a single (source) domain. Thus, the learned hash function may not adapt well in a new (target) domain that has a large distributional difference with the source domain. In this paper, we explore an end-to-end domain adaptive learning framework that simultaneously and precisely generates discriminative hash codes and classifies target domain images. Our method encodes images from both domains into a semantic common space, followed by two independent generative adversarial networks aiming at crosswise reconstruction of the two domains’ images, reducing domain disparity and improving alignment in the shared space. We evaluate our framework on four public benchmark datasets, all of which show that our method is superior to the other state-of-the-art methods on the tasks of object recognition and image retrieval.
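To ground the retrieval side of this abstract, the sketch below shows the generic hashing-and-Hamming-ranking step that such systems rely on: real-valued features are binarized by the sign of random projections (a stand-in for the learned hash function, which the paper replaces with a trained network) and database items are ranked by Hamming distance.

```python
import numpy as np

def hash_codes(features, projection):
    """Binarize real-valued features into hash codes via the sign of projections."""
    return (features @ projection > 0).astype(np.uint8)

def hamming_retrieval(query_code, db_codes):
    """Rank database items by Hamming distance to the query code."""
    dists = (db_codes != query_code).sum(axis=1)
    return np.argsort(dists), np.sort(dists)

rng = np.random.default_rng(7)
proj = rng.standard_normal((128, 48))          # 48-bit codes, hypothetical size
db = hash_codes(rng.standard_normal((1000, 128)), proj)
q = hash_codes(rng.standard_normal((1, 128)), proj)[0]
order, dists = hamming_retrieval(q, db)
print(order[:5], dists[:5])
```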
APA, Harvard, Vancouver, ISO, and other styles
3

Vasconcelos, I., P. Sava, and H. Douma. "Image-domain Interferometry and Wave-equation Extended Images." In 71st EAGE Conference and Exhibition incorporating SPE EUROPEC 2009. European Association of Geoscientists & Engineers, 2009. http://dx.doi.org/10.3997/2214-4609.201400362.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Khapli, Vidya R., and Anjali S. Bhalchandra. "Compressed Domain Image Retrieval Using Thumbnails of Images." In 2009 First International Conference on Computational Intelligence, Communication Systems and Networks (CICSYN). IEEE, 2009. http://dx.doi.org/10.1109/cicsyn.2009.96.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Girard, Aaron, and Ivan Vasconcelos. "Image‐domain time‐lapse inversion with extended images." In SEG Technical Program Expanded Abstracts 2010. Society of Exploration Geophysicists, 2010. http://dx.doi.org/10.1190/1.3513744.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Vasconcelos, Ivan, Paul Sava, and Huub Douma. "Wave‐equation extended images via image‐domain interferometry." In SEG Technical Program Expanded Abstracts 2009. Society of Exploration Geophysicists, 2009. http://dx.doi.org/10.1190/1.3255439.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Liu, Weiquan, Xuelun Shen, Cheng Wang, Zhihong Zhang, Chenglu Wen, and Jonathan Li. "H-Net: Neural Network for Cross-domain Image Patch Matching." In Twenty-Seventh International Joint Conference on Artificial Intelligence (IJCAI-18). California: International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/119.

Full text
Abstract:
Describing the same scene with different imaging styles, or rendering an image from its 3D model, gives us different domain images. Different domain images tend to have a gap and different local appearances, which raise the main challenge in cross-domain image patch matching. In this paper, we propose to incorporate an AutoEncoder into the Siamese network, named H-Net, whose structural shape resembles the letter H. The H-Net achieves state-of-the-art performance on cross-domain image patch matching. Furthermore, we improved H-Net to H-Net++. The H-Net++ extracts invariant feature descriptors in cross-domain image patches and achieves state-of-the-art performance by feature retrieval in Euclidean space. As there is no benchmark dataset including cross-domain images, we made a cross-domain image dataset which consists of camera images, rendering images from a UAV 3D model, and images generated by the CycleGAN algorithm. Experiments show that the proposed H-Net and H-Net++ outperform the existing algorithms. Our code and cross-domain image dataset are available at https://github.com/Xylon-Sean/H-Net.
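The feature retrieval in Euclidean space mentioned for H-Net++ reduces, at query time, to nearest-neighbor search over descriptor vectors; the numpy sketch below shows that matching step generically, with random vectors standing in for descriptors produced by a network.

```python
import numpy as np

def match_descriptors(query_desc, db_desc):
    """Match each query descriptor to its Euclidean nearest neighbor.

    query_desc : (nq, d) array of descriptors from one domain (e.g. camera patches).
    db_desc    : (nd, d) array of descriptors from the other domain (e.g. renderings).

    Returns (indices, distances) of the closest database descriptor per query.
    """
    # Pairwise squared distances via ||q - x||^2 = ||q||^2 - 2 q.x + ||x||^2.
    d2 = (
        (query_desc**2).sum(1, keepdims=True)
        - 2.0 * query_desc @ db_desc.T
        + (db_desc**2).sum(1)
    )
    idx = d2.argmin(axis=1)
    return idx, np.sqrt(np.maximum(d2[np.arange(len(idx)), idx], 0.0))

rng = np.random.default_rng(6)
q = rng.standard_normal((5, 128))
db = rng.standard_normal((100, 128))
idx, dist = match_descriptors(q, db)
print(idx, dist.round(2))
```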
APA, Harvard, Vancouver, ISO, and other styles
8

Su, Yuting, Yuqian Li, Dan Song, Weizhi Nie, Wenhui Li, and An-An Liu. "Consistent Domain Structure Learning and Domain Alignment for 2D Image-Based 3D Objects Retrieval." In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence (IJCAI-PRICAI-20). California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/123.

Full text
Abstract:
2D image-based 3D objects retrieval is a new topic for 3D objects retrieval which can be used to manage 3D data with 2D images. The goal is to search some related 3D objects when given a 2D image. The task is challenging due to the large domain gap between 2D images and 3D objects. Therefore, it is essential to consider domain adaptation problems to reduce domain discrepancy. However, most of the existing domain adaptation methods only utilize the semantic information from the source domain to predict labels in the target domain and neglect the intrinsic structure of the target domain. In this paper, we propose a domain alignment framework with consistent domain structure learning to reduce the large gap between 2D images and 3D objects. The domain structure learning module makes use of both the semantic information from the source domain and the intrinsic structure of the target domain, which provides more reliable predicted labels to the domain alignment module to better align the conditional distribution. We conducted experiments on two public datasets, MI3DOR and MI3DOR-2, and the experimental results demonstrate the proposed method outperforms the state-of-the-art methods.
APA, Harvard, Vancouver, ISO, and other styles
9

Wenzel, L., J. McCord, and A. Hubert. "Simulation Of Magnetooptical Domain Boundary Images." In 1997 IEEE International Magnetics Conference (INTERMAG'97). IEEE, 1997. http://dx.doi.org/10.1109/intmag.1997.597864.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Zhang, Xiangfen, and Wufan Chen. "Wavelet Domain Diffusion for DWI Images." In 2008 2nd International Conference on Bioinformatics and Biomedical Engineering. IEEE, 2008. http://dx.doi.org/10.1109/icbbe.2008.867.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Domain of images"

1

Goda, Matthew E. Wavelet Domain Image Restoration and Super-Resolution. Fort Belvoir, VA: Defense Technical Information Center, August 2002. http://dx.doi.org/10.21236/ada405111.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Speed, Ann, David John Stracuzzi, Jina Lee, and Lauren Hund. Applying Image Clutter Metrics to Domain-Specific Expert Visual Search. Office of Scientific and Technical Information (OSTI), September 2017. http://dx.doi.org/10.2172/1603851.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Chen, Tie Q., Yi L. Murphey, Robert Karlsen, and Grant Gerhart. Color Image Segmentation in the Color and Spatial Domains. Fort Belvoir, VA: Defense Technical Information Center, January 2002. http://dx.doi.org/10.21236/ada458211.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Rane, Shantanu D., Jeremiah Remus, and Guillermo Sapiro. Wavelet-Domain Reconstruction of Lost Blocks in Wireless Image Transmission and Packet-Switched Networks. Fort Belvoir, VA: Defense Technical Information Center, January 2005. http://dx.doi.org/10.21236/ada437341.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Yan, Yujie, and Jerome F. Hajjar. Automated Damage Assessment and Structural Modeling of Bridges with Visual Sensing Technology. Northeastern University, May 2021. http://dx.doi.org/10.17760/d20410114.

Full text
Abstract:
Recent advances in visual sensing technology have gained much attention in the field of bridge inspection and management. Coupled with advanced robotic systems, state-of-the-art visual sensors can be used to obtain accurate documentation of bridges without the need for any special equipment or traffic closure. The captured visual sensor data can be post-processed to gather meaningful information for the bridge structures and hence to support bridge inspection and management. However, state-of-the-practice data postprocessing approaches require substantial manual operations, which can be time-consuming and expensive. The main objective of this study is to develop methods and algorithms to automate the post-processing of the visual sensor data towards the extraction of three main categories of information: 1) object information such as object identity, shapes, and spatial relationships - a novel heuristic-based method is proposed to automate the detection and recognition of main structural elements of steel girder bridges in both terrestrial and unmanned aerial vehicle (UAV)-based laser scanning data. Domain knowledge on the geometric and topological constraints of the structural elements is modeled and utilized as heuristics to guide the search as well as to reject erroneous detection results. 2) structural damage information, such as damage locations and quantities - to support the assessment of damage associated with small deformations, an advanced crack assessment method is proposed to enable automated detection and quantification of concrete cracks in critical structural elements based on UAV-based visual sensor data. In terms of damage associated with large deformations, based on the surface normal-based method proposed in Guldur et al. (2014), a new algorithm is developed to enhance the robustness of damage assessment for structural elements with curved surfaces. 3) three-dimensional volumetric models - the object information extracted from the laser scanning data is exploited to create a complete geometric representation for each structural element. In addition, mesh generation algorithms are developed to automatically convert the geometric representations into conformal all-hexahedron finite element meshes, which can be finally assembled to create a finite element model of the entire bridge. To validate the effectiveness of the developed methods and algorithms, several field data collections have been conducted to collect both the visual sensor data and the physical measurements from experimental specimens and in-service bridges. The data were collected using both terrestrial laser scanners combined with images, and laser scanners and cameras mounted to unmanned aerial vehicles.
APA, Harvard, Vancouver, ISO, and other styles