Academic literature on the topic 'Image scene'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Image scene.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Image scene"

1

Liu, Yongmei, Tanakrit Wongwitit, and Linsen Yu. "Automatic Image Annotation Based on Scene Analysis." International Journal of Image and Graphics 14, no. 03 (2014): 1450012. http://dx.doi.org/10.1142/s0219467814500120.

Full text
Abstract:
Automatic image annotation is an important and challenging task for image analysis and understanding, such as content-based image retrieval (CBIR). The relationship between keywords and visual features is complicated by the semantic gap. We present an approach to automatic image annotation based on scene analysis. Under the constraint of scene semantics, the correlation between keywords and visual features becomes simpler and clearer. Our model has two stages. The first is a training process that groups the training image set into semantic scenes, using the extracted semantic features, and visual scenes, constructed from the distances between the visual features of every pair of training images computed with the Earth mover's distance (EMD). Each pair of semantic and visual scenes is then combined, and a Gaussian mixture model (GMM) is applied to all scenes. The second stage tests and annotates keywords for the test image set. Using the visual features provided by Duygulu, experimental results show that our model outperforms the probabilistic latent semantic analysis (PLSA) and GMM (PLSA&GMM) model on the Corel5K database.
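The Earth mover's distance used here to compare visual features has a simple closed form in one dimension: for normalized histograms with unit distance between adjacent bins, it equals the L1 distance between cumulative sums. A minimal sketch, with illustrative histograms rather than the paper's features:

```python
def emd_1d(p, q):
    """Earth mover's distance between two normalized 1-D histograms.

    For 1-D histograms with unit ground distance between adjacent bins,
    EMD equals the sum of absolute differences of the cumulative sums.
    """
    assert len(p) == len(q)
    cum_p = cum_q = 0.0
    total = 0.0
    for pi, qi in zip(p, q):
        cum_p += pi
        cum_q += qi
        total += abs(cum_p - cum_q)
    return total

# Moving all mass across two bins costs 2.0
print(emd_1d([1.0, 0.0, 0.0], [0.0, 0.0, 1.0]))  # -> 2.0
```

For full multi-dimensional feature signatures, as used between image pairs in the paper, a general EMD solver (a transportation linear program) is required; the 1-D shortcut is only for intuition.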
2

Stevenson, Natasha, and Kun Guo. "Image Valence Modulates the Processing of Low-Resolution Affective Natural Scenes." Perception 49, no. 10 (2020): 1057–68. http://dx.doi.org/10.1177/0301006620957213.

Full text
Abstract:
In natural vision, noisy and distorted visual inputs often change our perceptual strategy in scene perception. However, it is unclear to what extent the affective meaning embedded in degraded natural scenes modulates our scene understanding and associated eye movements. In this eye-tracking experiment, by presenting natural scene images with different categories and levels of emotional valence (high-positive, medium-positive, neutral/low-positive, medium-negative, and high-negative), we systematically investigated human participants' perceptual sensitivity (image valence categorization and arousal rating) and image-viewing gaze behaviour in response to changes in image resolution. Our analysis revealed that reducing image resolution led to decreased valence recognition and arousal rating, a decreased number of fixations in image-viewing but increased individual fixation duration, and a stronger central fixation bias. Furthermore, these distortion effects were modulated by scene valence, with less deterioration impact on the valence categorization of negatively valenced scenes and on the gaze behaviour when viewing highly emotionally charged (high-positive and high-negative) scenes. It seems that our visual system shows a valence-modulated susceptibility to image distortions in scene perception.
3

Deng, Li Qiong, Dan Wen Chen, Zhi Min Yuan, and Ling Da Wu. "Attribute-Based Cartoon Scene Image Search System." Advanced Materials Research 268-270 (July 2011): 1030–35. http://dx.doi.org/10.4028/www.scientific.net/amr.268-270.1030.

Full text
Abstract:
In this paper, we present an interactive search system for cartoon scene images. Using a set of automatically extracted semantic attributes of cartoon scene images (such as category, time, and pureness), the user can find a desired cartoon scene image, such as "a pure sky at sunset". The system is fully automatic and scalable. It computes all cartoon scene image attributes offline and then provides an interactive online search engine. Furthermore, the system contains different retrieval interface designs aimed at different users. The results show that our system greatly improves usability and efficiency.
4

Wu, Xue Feng, and Yu Fan. "A Research for Fuzzy Image Restoration." Advanced Materials Research 955-959 (June 2014): 1085–88. http://dx.doi.org/10.4028/www.scientific.net/amr.955-959.1085.

Full text
Abstract:
Computational photography and image-processing techniques are used to automatically restore the clarity of images taken in foggy scenes. The approach combines digital image processing with the physical model of atmospheric scattering. An algorithm is designed to restore the clarity of the fog scene under an assumption on the albedo images, and the resolution of the algorithm is then analysed. The algorithm is implemented in image-processing software, which improves the efficiency of the algorithm and its interface. The foggy and defogged images are compared, and the results show that the visibility of the image is improved and the restored image is clearer.
5

Fan, Yu, and Xue Feng Wu. "A Research for Image Defogging Algorithm." Applied Mechanics and Materials 409-410 (September 2013): 1653–56. http://dx.doi.org/10.4028/www.scientific.net/amm.409-410.1653.

Full text
Abstract:
Computational photography and image-processing techniques are used to automatically restore the clarity of images taken in foggy scenes. The approach combines digital image processing with the physical model of atmospheric scattering. An algorithm is designed to restore the clarity of the fog scene under an assumption on the albedo images, and the resolution of the algorithm is then analysed. The algorithm is implemented in image-processing software, which improves the efficiency of the algorithm and its interface. The foggy and defogged images are compared, and the results show that the visibility of the image is improved and the restored image is clearer.
6

Xu, Yuanjin. "Application of Remote Sensing Image Data Scene Generation Method in Smart City." Complexity 2021 (January 28, 2021): 1–13. http://dx.doi.org/10.1155/2021/6653841.

Full text
Abstract:
Remote sensing image simulation is a very effective method to verify the feasibility of sensor devices for ground observation. The key to remote sensing image applications is that jointly interpreting remote sensing images can exploit the different characteristics of different data, eliminate the redundancy and contradictions between different sensors, and improve the timeliness and reliability of remote sensing information extraction. The hotspots and difficulties in this direction centre on remote sensing image simulation of 3D scenes on the ground; constructing the ground 3D scene model rapidly and accurately is therefore the focus of current research. Because different scenes have different radiation characteristics, when using MATLAB to write a program for 3D scene generation, the 3D scenes must be saved as different text files according to scene type, and an extension program for the scene is then written to address the poor computational efficiency caused by the huge amount of data. This paper uses POV-Ray photon reverse-tracking software to simulate the imaging process of remote sensing sensors: coordinate transformation converts a triangle text file into POV-Ray-readable information, the RGB value of the base colour is input based on colorimetric principles, and the final 3D scene is visualized. The paper analyses the thermal radiation characteristics of the scene and demonstrates the rationality of the scene simulation. The experimental results show that introducing chroma into the visualization of the scene model gives the whole scene not only fidelity but also radiation characteristics in shape and colour, which is indispensable in existing 3D modelling and visualization studies. Compared with complex radiative transfer methods, using the multi-angle two-dimensional images generated by POV-Ray to analyse the radiation characteristics of the scene yields results that are intuitive and easy to understand.
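The triangle-to-POV-Ray conversion described above can be sketched as a tiny exporter. The `triangle { ... pigment { color rgb ... } }` string follows standard POV-Ray scene-description syntax; the function name and the sample vertices are illustrative, not from the paper:

```python
def triangle_to_pov(v1, v2, v3, rgb):
    """Emit one POV-Ray triangle primitive with a solid base colour."""
    fmt = lambda v: "<%g, %g, %g>" % tuple(v)
    return ("triangle { %s, %s, %s pigment { color rgb %s } }"
            % (fmt(v1), fmt(v2), fmt(v3), fmt(rgb)))

# One facet of a ground model, coloured by its base RGB value
line = triangle_to_pov((0, 0, 0), (1, 0, 0), (0, 1, 0), (0.2, 0.6, 0.3))
print(line)
```

A real exporter would loop over the rows of the triangle text file, writing one such line per facet into a `.pov` scene file.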
7

Goel, Gaurav, and Renu Dhir. "Characters Strings are Extracted Exhibit Morphology Method of an Image." INTERNATIONAL JOURNAL OF COMPUTERS & TECHNOLOGY 6, no. 1 (2013): 272–78. http://dx.doi.org/10.24297/ijct.v6i1.4454.

Full text
Abstract:
In understanding an image, extraction of the characters present in the image is considered important. Scene images differ from document images in that they are composed of characters and a complicated background (e.g., a photo, picture, or painting) rather than a white one, which makes them difficult to process. Extraction and localization of scene text are used in many applications. In this paper, we propose a connected-component-based method to extract text from natural images. The proposed method uses colour-space processing. Character recognition is done through OCR, which accepts input in the form of text boxes generated by the text detection and localization stages. The proposed method is robust with respect to font size, colour, orientation, and style. Results of the proposed algorithm on real scenes, including indoor and outdoor images, show that it efficiently extracts and localizes scene text. In this paper, we introduce a new method to extract characters from scene images using mathematical morphology.
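The connected-component stage that such text-extraction methods rely on can be sketched as a 4-connected labelling pass over a binary mask (a toy stand-in for the paper's colour-space thresholding; the mask below is illustrative):

```python
from collections import deque

def label_components(mask):
    """Label 4-connected foreground regions in a binary grid.

    Returns a grid of labels (0 = background) and the component count;
    each component would become one candidate text region.
    """
    rows, cols = len(mask), len(mask[0])
    labels = [[0] * cols for _ in range(rows)]
    current = 0
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not labels[r][c]:
                current += 1            # start a new component, flood-fill it
                queue = deque([(r, c)])
                labels[r][c] = current
                while queue:
                    y, x = queue.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and mask[ny][nx] and not labels[ny][nx]):
                            labels[ny][nx] = current
                            queue.append((ny, nx))
    return labels, current

mask = [[1, 1, 0, 0],
        [0, 1, 0, 1],
        [0, 0, 0, 1]]
labels, n = label_components(mask)
print(n)  # -> 2
```

Bounding boxes of the resulting components are what a pipeline like the paper's would hand to the OCR stage.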
8

Zhu, Zhiqin, Yaqin Luo, Hongyan Wei, et al. "Atmospheric Light Estimation Based Remote Sensing Image Dehazing." Remote Sensing 13, no. 13 (2021): 2432. http://dx.doi.org/10.3390/rs13132432.

Full text
Abstract:
Remote sensing images are widely used in object detection and tracking, military security, and other computer vision tasks. However, remote sensing images are often degraded by suspended aerosol in the air, especially under poor weather conditions, such as fog, haze, and mist. The quality of remote sensing images directly affects the normal operations of computer vision systems. As such, haze removal is a crucial and indispensable pre-processing step in remote sensing image processing. Additionally, most of the existing image dehazing methods are not applicable to all scenes, so the corresponding dehazed images may have varying degrees of color distortion. This paper proposes a novel atmospheric light estimation based dehazing algorithm to obtain high visual-quality remote sensing images. First, a differentiable function is used to train the parameters of a linear scene depth model for the scene depth map generation of remote sensing images. Second, the atmospheric light of each hazy remote sensing image is estimated by the corresponding scene depth map. Then, the corresponding transmission map is estimated on the basis of the estimated atmospheric light by a haze-lines model. Finally, according to the estimated atmospheric light and transmission map, an atmospheric scattering model is applied to remove haze from remote sensing images. The colors of the images dehazed by the proposed method are in line with the perception of human eyes in different scenes. A dataset with 100 remote sensing images from hazy scenes was built for testing. The performance of the proposed image dehazing method is confirmed by theoretical analysis and comparative experiments.
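The final step relies on the standard atmospheric scattering model I(x) = J(x)t(x) + A(1 − t(x)): once the atmospheric light A and transmission t are estimated, the scene radiance is recovered as J = (I − A)/t + A, with t clamped away from zero. A per-pixel sketch with illustrative values (the clamp `t_min` is a common practical safeguard, not the paper's specific estimate):

```python
def dehaze_pixel(i, a, t, t_min=0.1):
    """Invert the atmospheric scattering model for one intensity value.

    i: observed hazy intensity, a: atmospheric light, t: transmission.
    t is clamped to t_min so thin-haze pixels do not amplify noise.
    """
    t = max(t, t_min)
    return (i - a) / t + a

# A pixel observed at 0.7 under atmospheric light 0.9 and transmission 0.5
print(dehaze_pixel(0.7, 0.9, 0.5))  # approximately 0.5
```

Applied per pixel and per colour channel, this is the closing step of most scattering-model dehazing pipelines, including the one outlined above.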
9

Zhou, Wen, Dongping Ming, Lu Xu, Hanqing Bao, and Min Wang. "Stratified Object-Oriented Image Classification Based on Remote Sensing Image Scene Division." Journal of Spectroscopy 2018 (June 3, 2018): 1–11. http://dx.doi.org/10.1155/2018/3918954.

Full text
Abstract:
The traditional remote sensing image segmentation method uses the same set of parameters for the entire image. However, due to the scale-dependent nature of objects, the optimal segmentation parameters for an overall image may not be suitable for all objects. According to the idea of spatial dependence, objects of the same kind, which have similar spatial scale, often gather together and form a scene. Based on this, the paper proposes a stratified object-oriented image analysis method based on remote sensing image scene division. The method first uses mid-level semantics, which reflect an image's visual complexity, to divide the remote sensing image into different scenes; within each scene, an improved grid search algorithm is then employed to optimize the segmentation result, so that the optimal scale can be adopted for each scene as far as possible. Because stratified processing effectively reduces data complexity, local scale optimization ensures the overall classification accuracy of the whole image, which is practically meaningful for remote sensing geo-applications.
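The per-scene scale optimisation can be illustrated as a plain grid search: for each scene stratum, try a range of segmentation scales and keep the one that maximises a quality score. The scene names, scales, and quality curves below are placeholders, not the paper's improved grid search:

```python
def optimise_scales(scenes, scales, quality):
    """Per-scene grid search: pick the scale maximising quality(scene, scale)."""
    return {scene: max(scales, key=lambda s: quality(scene, s))
            for scene in scenes}

# Toy quality curves: each scene type peaks at its own segmentation scale
peaks = {"urban": 20, "farmland": 50}
quality = lambda scene, s: -(s - peaks[scene]) ** 2
best = optimise_scales(["urban", "farmland"], range(10, 60, 10), quality)
print(best)  # -> {'urban': 20, 'farmland': 50}
```

The point of the stratification is visible even in this toy: a single global scale could not satisfy both quality curves at once.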
10

Qi, Guanqiu, Liang Chang, Yaqin Luo, Yinong Chen, Zhiqin Zhu, and Shujuan Wang. "A Precise Multi-Exposure Image Fusion Method Based on Low-level Features." Sensors 20, no. 6 (2020): 1597. http://dx.doi.org/10.3390/s20061597.

Full text
Abstract:
Multi-exposure image fusion (MEF) provides a concise way to generate high-dynamic-range (HDR) images. Although precise fusion can be achieved by existing MEF methods in different static scenes, their ghost-removal performance varies in different dynamic scenes. This paper proposes a precise MEF method based on feature patches (FPM) to improve the robustness of ghost removal in dynamic scenes. A reference image is first selected by a priori exposure quality and then used in a structure consistency test to address the image ghosting issues that arise in dynamic-scene MEF. Source images are decomposed into spatial-domain structures by a guided filter. Both the base and detail layers of the decomposed images are fused to achieve the MEF. The structure decomposition of image patches and an appropriate exposure evaluation are integrated into the proposed solution. Both global and local exposures are optimized to improve fusion performance. Compared with six existing MEF methods, the proposed FPM not only improves the robustness of ghost removal in dynamic scenes, but also performs well in color saturation, image sharpness, and local detail processing.
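The base/detail fusion scheme can be sketched in one dimension: each exposure is split into a smooth base layer and a residual detail layer, the base layers are averaged, and the strongest detail is kept. A box filter stands in here for the guided filter, and the signals are illustrative:

```python
def box_blur(signal, radius=1):
    """Simple moving-average 'base' layer (stand-in for a guided filter)."""
    n = len(signal)
    out = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

def fuse_exposures(exposures):
    """Fuse 1-D 'exposures': average the base layers, keep strongest detail."""
    bases = [box_blur(e) for e in exposures]
    details = [[e[i] - b[i] for i in range(len(e))]
               for e, b in zip(exposures, bases)]
    fused = []
    for i in range(len(exposures[0])):
        base = sum(b[i] for b in bases) / len(bases)
        detail = max((d[i] for d in details), key=abs)  # largest-magnitude detail wins
        fused.append(base + detail)
    return fused

under = [0.1, 0.2, 0.8, 0.2, 0.1]   # under-exposed frame with a sharp highlight
over  = [0.5, 0.5, 0.6, 0.5, 0.5]   # over-exposed frame, mostly flat
fused = fuse_exposures([under, over])
print(fused)
```

The highlight at index 2 survives the fusion because its detail coefficient dominates; the dynamic-scene machinery of the paper (reference selection, structure consistency test) sits on top of this basic layer fusion.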
More sources

Dissertations / Theses on the topic "Image scene"

1

Zhu, Shanshan, and 朱珊珊. "Using semantic sub-scenes to facilitate scene categorization and understanding." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2014. http://hdl.handle.net/10722/206459.

Full text
Abstract:
This thesis proposes to learn the cognitive element absent from conventional scene categorization methods: sub-scenes, and to use them to better categorize and understand scenes. In scene categorization, ambiguity is observed when treating the scene as a whole. Scene ambiguity arises when a similar set of sub-scenes is arranged differently to compose different scenes, or when a scene literally contains several categories. These ambiguities can, however, be discerned with knowledge of sub-scenes; it is therefore worthwhile to study sub-scenes and use them to better understand a scene. The proposed research first considers an unsupervised method to segment sub-scenes. It emphasizes generating more integral regions instead of the over-segmented regions usually produced by conventional segmentation methods. Several properties of sub-scenes are explored, such as proximity grouping, area of influence, similarity, and harmony, based on psychological principles. These properties are formulated into constraints that are used directly in the proposed framework. A self-determined approach produces a final segmentation result based on the characteristics of each image in an unsupervised manner. The proposed method performs competitively against other state-of-the-art unsupervised segmentation methods, with an F-measure of 0.55, Covering of 0.51, and VoI of 1.93 on the Berkeley segmentation dataset. On the Stanford background dataset, it achieves an overlapping score of 0.566, higher than the 0.499 of the comparison method. To segment and label sub-scenes simultaneously, a supervised semantic segmentation approach is proposed, developed on a Hierarchical Conditional Random Field classification framework. The proposed method integrates contextual information into the model to improve classification performance. Contextual information, including global consistency and spatial context, is considered: global consistency generalizes the scene by scene type, and spatial context takes spatial relationships into account. The proposed method improves semantic segmentation by boosting more logical class combinations. It achieves the best score on the MSRC-21 dataset, with global accuracy of 87% and average accuracy of 81%, outperforming all other state-of-the-art methods by 4%. On the Stanford background dataset, it achieves global accuracy of 80.5% and average accuracy of 71.8%, also outperforming other methods by 2%. Finally, the proposed research incorporates sub-scenes into the scene categorization framework to improve categorization performance, especially in ambiguous cases. The proposed method encodes sub-scenes so that their spatial information is also considered; the sub-scene descriptor complements the global descriptor of a scene by evaluating local features with specific geometric attributes. The proposed method obtains an average categorization accuracy of 92.26% on the 8 Scene Category dataset, outperforming all other published methods by over 2%. It evaluates ambiguous cases more accurately by discerning which part exemplifies a scene category and how those categories are organized.
2

Gorham, LeRoy A. "Large Scene SAR Image Formation." Wright State University / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=wright1452031174.

Full text
3

Kulikova, Maria. "Shape recognition for image scene analysis." Nice, 2009. http://www.theses.fr/2009NICE4081.

Full text
Abstract:
This thesis comprises two main parts. In the first part, we address the problem of classifying tree crowns into species using shape features, alone or in combination with radiometric and texture features, to demonstrate that shape information improves classification performance. For this purpose, we first study the shapes of tree crowns extracted from very high resolution aerial infra-red images. We choose a methodology based on the analysis of closed continuous curves in shape spaces, using geodesic paths under a non-elastic (bending) metric with the angle-function curve representation, and under an elastic metric induced by the square-root q-function representation. A necessary preliminary step for classification is extraction of the tree crowns. In the second part, we therefore address the problem of extracting multiple objects of complex, arbitrary shape from very high resolution remote sensing images. We develop a model based on marked point processes; its originality lies in its use of arbitrarily shaped objects, as opposed to objects of parametric shape, e.g. ellipses or rectangles. The candidate shapes are obtained by local minimisation of an active-contour-type energy with weak or strong shape priors included. The objects of the final (optimal) configuration are then selected from among these candidates by a multiple birth-and-death dynamics embedded in a simulated annealing scheme. The approach is validated on very high resolution images of forest areas provided by the Swedish University of Agriculture.
4

Torle, Petter. "Scene-based correction of image sensor deficiencies." Thesis, Linköping University, Department of Electrical Engineering, 2003. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-1752.

Full text
Abstract:
This thesis describes and evaluates a number of algorithms for reducing fixed pattern noise in image sequences. Fixed pattern noise is the dominant noise component for many infrared detector systems, perceived as a superimposed pattern that is approximately constant across all image frames.

Primarily, methods based on estimation of the movement between individual image frames are studied. Using scene-matching techniques, global motion between frames can be successfully registered with sub-pixel accuracy. This allows each scene pixel to be traced along a path of individual detector elements. Assuming a static scene, differences in pixel intensities are caused by fixed pattern noise that can be estimated and removed.

The algorithms have been tested on real image data from existing infrared imaging systems with good results. The tests include both a two-dimensional focal plane array detector and a linear scanning one-dimensional detector, under different scene conditions.
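The core estimation step can be sketched in one dimension: once global motion is registered, the fixed pattern is the temporal mean of the residuals between each frame and the motion-compensated scene. The sketch below assumes the scene estimate and integer shifts are already known, which the real algorithms must themselves estimate:

```python
def estimate_fpn(frames, scene, shifts):
    """Estimate fixed pattern noise as the mean residual per detector pixel.

    frames[i][x] is assumed to observe scene[x + shifts[i]] plus a
    frame-independent fixed pattern fpn[x].
    """
    width = len(frames[0])
    fpn = [0.0] * width
    for frame, s in zip(frames, shifts):
        for x in range(width):
            fpn[x] += frame[x] - scene[x + s]
    return [v / len(frames) for v in fpn]

# Synthetic data: a static 1-D scene panned across a 4-element detector
scene = [3.0, 5.0, 2.0, 7.0, 4.0, 6.0, 1.0]
true_fpn = [0.5, -0.3, 0.0, 0.2]
shifts = [0, 1, 2, 3]
frames = [[scene[x + s] + true_fpn[x] for x in range(4)] for s in shifts]
est = estimate_fpn(frames, scene, shifts)
print(est)  # close to true_fpn
```

With noiseless synthetic frames the estimate is exact; in practice the scene estimate itself comes from averaging registered frames, so the residual averaging only approximately isolates the fixed pattern.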
5

Fairweather, Alexander John Robert. "Robust scene interpretation from underwater image sequences." Thesis, University College London (University of London), 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.322084.

Full text
6

Fan, Chuanmao. "Indoor Scene 3D Modeling with Single Image." Thesis, University of Missouri - Columbia, 2019. http://pqdtopen.proquest.com/#viewpdf?dispub=13850735.

Full text
Abstract:
3D modeling is a fundamental and very important research area in computer vision and computer graphics. One specific category of this research field is indoor scene 3D modeling. Many efforts have been devoted to its development, but this particular type of modeling is far from mature. Some researchers have focused on single-view reconstruction, which reconstructs a 3D model from a single 2D indoor image. This is based on the Manhattan-world assumption, which states that structure edges are usually parallel to the X, Y, and Z axes of the Cartesian coordinate system defined in a scene. Parallel lines, when projected to a 2D image, become straight lines that converge to a vanishing point. Single-view reconstruction uses these constraints to build a 3D model from a 2D image alone. However, this is not an easy task due to the lack of depth information in the 2D image. With the development and maturity of 3D imaging methods such as stereo vision, structured-light triangulation, and laser-stripe triangulation, devices that give 2D images associated with depth information, forming so-called RGBD images, are becoming more popular. Processing of RGB color images and depth images can be combined to ease the 3D modeling of indoor scenes. Two methods combining 2D and 3D modeling are developed in this thesis for comparison: one is region-growing segmentation, and the second is RANSAC planar segmentation directly in 3D. Results are compared, and the 3D modeling is illustrated. The 3D modeling comprises plane labeling; automatic floor, wall, and boundary point detection; wall domain partitioning using the automatically detected walls and wall boundary points in the 2D image; and 3D model construction by extruding from the boundary points of the floor plane. Tests were conducted to verify the method.
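Under the Manhattan-world assumption, image projections of parallel 3D lines converge at a vanishing point, which homogeneous coordinates make easy to compute: the line through two points is their cross product, and the intersection of two lines is again a cross product. A small sketch with illustrative points:

```python
def cross(a, b):
    """Cross product of 3-vectors, used for both joins and meets."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def vanishing_point(p1, p2, p3, p4):
    """Intersect line (p1,p2) with line (p3,p4); points are (x, y) pixels."""
    h = lambda p: (p[0], p[1], 1.0)       # lift to homogeneous coordinates
    l1 = cross(h(p1), h(p2))
    l2 = cross(h(p3), h(p4))
    x, y, w = cross(l1, l2)
    return (x / w, y / w)                 # assumes the image lines are not parallel

# Two edges converging toward (4, 2)
vp = vanishing_point((0, 0), (2, 1), (0, 4), (2, 3))
print(vp)  # -> (4.0, 2.0)
```

In a full single-view pipeline, many detected edge segments vote for each of the three Manhattan directions, and the three vanishing points constrain the camera orientation.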
7

Slabaugh, Gregory G. "Novel volumetric scene reconstruction methods for new view synthesis." Diss., Georgia Institute of Technology, 2002. http://hdl.handle.net/1853/13427.

Full text
8

Hertz, Lois. "Robust image thresholding techniques for automated scene analysis." Diss., Georgia Institute of Technology, 1990. http://hdl.handle.net/1853/15050.

Full text
9

Spackman, John Neil. "Scene decompositions for accelerated ray tracing." Thesis, University of Bath, 1989. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.329580.

Full text
10

Kivinen, Jyri Juhani. "Statistical models for natural scene data." Thesis, University of Edinburgh, 2014. http://hdl.handle.net/1842/8879.

Full text
Abstract:
This thesis considers statistical modelling of natural image data. Advances in this field can have significant impact both for engineering applications and for the understanding of the human visual system. Several recent advances in natural image modelling have been obtained through unsupervised feature learning. We consider a class of such models, restricted Boltzmann machines (RBMs), used in many recent state-of-the-art image models. We develop extensions of these stochastic artificial neural networks and use them as a basis for building more effective image models and tools for computational vision. We first develop a novel framework for obtaining Boltzmann machines in which the hidden unit activations co-transform with transformed input stimuli in a stable and predictable way throughout the network. We define such models to be transformation equivariant. Such properties have been shown useful for computer vision systems, and were motivational, for example, in the development of steerable filters, a widely used classical feature extraction technique. Translation-equivariant feature sharing has been the standard method for scaling image models beyond patch-sized data to large images. In our framework we extend shallow and deep models to account for other kinds of transformations as well, focusing on in-plane rotations. Motivated by the unsatisfactory results of current generative natural image models, we take a step back and evaluate whether they are able to model a subclass of the data: natural image textures. This is a necessary subcomponent of any credible model for visual scenes. We assess the performance of a state-of-the-art model of natural images for texture generation, using a dataset and evaluation techniques from prior work. We also perform a dissection of the model architecture, uncovering the properties important for good performance. Building on this, we develop structured extensions for more complicated data comprised of textures from multiple classes, using the single-texture model architecture as a basis. These models are shown to produce state-of-the-art texture synthesis results quantitatively, and are also effective qualitatively. It is demonstrated empirically that the developed multiple-texture framework provides a means to generate images of differently textured regions and more generic globally varying textures, and can also be used for texture interpolation, where the approach is radically different from the others in the area. Finally, we consider visual boundary prediction from natural images. The work aims to improve understanding of Boltzmann machines in the generation of image segment boundaries, and to investigate deep neural network architectures for learning the boundary detection problem. The developed networks (which avoid several hand-crafted model and feature designs commonly used for the problem) produce the fastest reported inference times in the literature, combined with state-of-the-art performance.
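The hidden-layer update of the restricted Boltzmann machines discussed above is p(h_j = 1 | v) = σ(b_j + Σ_i W_ij v_i); block Gibbs sampling alternates this with the symmetric visible update. A minimal sketch of the hidden-unit probabilities, with illustrative weights and data:

```python
import math

def hidden_probs(v, W, b):
    """p(h_j = 1 | v) for a binary RBM: sigmoid of bias plus weighted input."""
    sigmoid = lambda x: 1.0 / (1.0 + math.exp(-x))
    return [sigmoid(b[j] + sum(W[i][j] * v[i] for i in range(len(v))))
            for j in range(len(b))]

W = [[2.0, -1.0],
     [2.0, -1.0]]      # 2 visible x 2 hidden weight matrix
b = [-1.0, 0.0]        # hidden biases
p = hidden_probs([1, 1], W, b)
print(p)  # first unit strongly on, second mostly off
```

A full training loop would sample h from these probabilities, reconstruct v with the transposed weights, and update W by contrastive divergence; the equivariant extensions in the thesis additionally tie weights across transformations such as rotations.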
More sources

Books on the topic "Image scene"

1

Mavandadi, Sam. Interactive three dimensional scene searching and image retrieval. National Library of Canada, 2004.

Find full text
2

Zhang, Zhengyou. 3D dynamic scene analysis: A stereo based approach. Springer-Verlag, 1992.

Find full text
3

Faugeras, Olivier, ed. 3D dynamic scene analysis: A stereo based approach. Springer-Verlag, 1992.

Find full text
4

Robles-Kelly, Antonio. Imaging Spectroscopy for Scene Analysis. Springer London, 2013.

Find full text
5

Hart, Michael. An image compression survey and algorithm switching based on scene activity. National Aeronautics and Space Administration, Scientific and Technical Information Branch, 1985.

Find full text
6

Nicosevici, Tudor. Efficient 3D Scene Modeling and Mosaicing. Springer Berlin Heidelberg, 2013.

Find full text
7

Nitschke, Christian. 3D reconstruction: Real-time volumetric scene reconstruction from multiple views. VDM, Verlag Dr. Müller, 2007.

Find full text
8

Daniel, Cremers, and SpringerLink (Online service), eds. Stereo Scene Flow for 3D Motion Analysis. Springer-Verlag London Limited, 2011.

Find full text
9

Wiskott, Laurenz. Labeled graphs and dynamic link matching for face recognition and scene analysis. Deutsch, 1995.

Find full text
10

White, Brice Landreau. Evaluation of the impact of multispectral image fusion on human performance in global scene processing. Naval Postgraduate School, 1998.

Find full text
More sources

Book chapters on the topic "Image scene"

1

Awcock, G. J., and R. Thomas. "Scene Constraints." In Applied Image Processing. Macmillan Education UK, 1995. http://dx.doi.org/10.1007/978-1-349-13049-8_2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Forsaith, Peter S. "Scene paintings." In Image, Identity and John Wesley. Routledge, 2017. http://dx.doi.org/10.4324/9781315107905-6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Mansfield, Alex, Peter Gehler, Luc Van Gool, and Carsten Rother. "Scene Carving: Scene Consistent Image Retargeting." In Computer Vision – ECCV 2010. Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-15549-9_11.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Robles-Kelly, Antonio, and Cong Phuoc Huynh. "Spectral Image Acquisition." In Imaging Spectroscopy for Scene Analysis. Springer London, 2013. http://dx.doi.org/10.1007/978-1-4471-4652-0_2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Zhang, Yu-Jin. "Scene Analysis and Interpretation." In Handbook of Image Engineering. Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-15-5873-3_42.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Batchelor, Bruce G., and Frederick Waltz. "Setting the Scene." In Interactive Image Processing for Machine Vision. Springer London, 1993. http://dx.doi.org/10.1007/978-1-4471-0393-6_1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Ogris, Georg, and Lucas Paletta. "Predicting Detection Events from Bayesian Scene Recognition." In Image Analysis. Springer Berlin Heidelberg, 2003. http://dx.doi.org/10.1007/3-540-45103-x_139.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Robles-Kelly, Antonio, and Cong Phuoc Huynh. "Spectral Image Formation Process." In Imaging Spectroscopy for Scene Analysis. Springer London, 2013. http://dx.doi.org/10.1007/978-1-4471-4652-0_3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Baraldi, Lorenzo, Costantino Grana, and Rita Cucchiara. "Measuring Scene Detection Performance." In Pattern Recognition and Image Analysis. Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-19390-8_45.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Liu, Rui, Haoran Zhang, Lv Yan, Xin Tian, and Zheng Zhou. "Scene-Oriented Aesthetic Image Assessment." In Communications in Computer and Information Science. Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-16-1194-0_31.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Image scene"

1

Yu, Chengcheng, Xiaobai Liu, and Song-Chun Zhu. "Single-Image 3D Scene Parsing Using Geometric Commonsense." In Twenty-Sixth International Joint Conference on Artificial Intelligence. International Joint Conferences on Artificial Intelligence Organization, 2017. http://dx.doi.org/10.24963/ijcai.2017/649.

Full text
Abstract:
This paper presents a unified grammatical framework capable of reconstructing a variety of scene types (e.g., urban, campus, county) from a single input image. The key idea of our approach is to study a novel commonsense reasoning framework that mainly exploits two types of prior knowledge: (i) prior distributions over a single dimension of objects, e.g., that the length of a sedan is about 4.5 meters; and (ii) pair-wise relationships between the dimensions of scene entities, e.g., that the length of a sedan is shorter than that of a bus. This unary or relative geometric knowledge, once extracted, is fairly stable across different types of natural scenes, and is informative for enhancing the understanding of various scenes in both 2D images and the 3D world. Methodologically, we propose to construct a hierarchical graph representation as a unified representation of the input image and related geometric knowledge. We formulate these objectives with a unified probabilistic formula and develop a data-driven Monte Carlo method to infer the optimal solution with both bottom-up and top-down computations. Results with comparisons on public datasets show that our method clearly outperforms the alternative methods.
APA, Harvard, Vancouver, ISO, and other styles
2

Shen, Yangping, Yoshitsugu Manabe, and Noriko Yata. "3D scene reconstruction and object recognition for indoor scene." In International Workshop on Advanced Image Technology, edited by Phooi Yee Lau, Kazuya Hayase, Qian Kemao, et al. SPIE, 2019. http://dx.doi.org/10.1117/12.2521492.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Da Silveira, Thiago L. T., and Cláudio R. Jung. "Dense 3D Indoor Scene Reconstruction from Spherical Images." In Conference on Graphics, Patterns and Images. Sociedade Brasileira de Computação, 2020. http://dx.doi.org/10.5753/sibgrapi.est.2020.12977.

Full text
Abstract:
Techniques for 3D reconstruction of scenes based on images are popular and support a number of secondary applications. Traditional approaches require several captures to cover whole environments due to the narrow field of view (FoV) of pinhole-based/perspective cameras. This paper summarizes the main contributions of the homonymous Ph.D. thesis, which addresses the 3D scene reconstruction problem by considering omnidirectional (spherical or 360°) cameras that present a 360° × 180° FoV. Although spherical imagery has the benefit of the full FoV, it is also challenging due to the inherent distortions involved in the capture and representation of such images, which might compromise the use of many well-established algorithms for image processing and computer vision. The referred Ph.D. thesis introduces novel methodologies for estimating dense depth maps from two or more uncalibrated and temporally unordered 360° images. It also presents a framework for inferring depth from a single spherical image. We validate our approaches using both synthetic data and computer-generated imagery, showing competitive results with respect to other state-of-the-art methods.
APA, Harvard, Vancouver, ISO, and other styles
4

Shi, Botian, Lei Ji, Pan Lu, Zhendong Niu, and Nan Duan. "Knowledge Aware Semantic Concept Expansion for Image-Text Matching." In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/720.

Full text
Abstract:
Image-text matching is a vital cross-modality task in artificial intelligence and has attracted increasing attention in recent years. Existing works have shown that learning semantic concepts is useful to enhance image representation and can significantly improve the performance of both image-to-text and text-to-image retrieval. However, existing models simply detect semantic concepts from a given image, and are thus less able to deal with long-tail and occluded concepts. Concepts that frequently co-occur in the same scene, e.g. bedroom and bed, can provide common-sense knowledge for discovering other semantically related concepts. In this paper, we develop a Scene Concept Graph (SCG) by aggregating image scene graphs and extracting frequently co-occurring concept pairs as scene common-sense knowledge. Moreover, we propose a novel model to incorporate this knowledge to improve image-text matching. Specifically, semantic concepts are detected from images and then expanded by the SCG. After learning to select relevant contextual concepts, we fuse their representations with the image embedding feature to feed into the matching module. Extensive experiments are conducted on the Flickr30K and MSCOCO datasets, and show that our model achieves state-of-the-art results due to the effectiveness of incorporating the external SCG.
APA, Harvard, Vancouver, ISO, and other styles
5

Semenov, Vitaliy, Vasiliy Shutkin, et al. "Extension of HLOD Technique for Dynamic Scenes with Deterministic Events." In 29th International Conference on Computer Graphics, Image Processing and Computer Vision, Visualization Systems and the Virtual Environment GraphiCon'2019. Bryansk State Technical University, 2019. http://dx.doi.org/10.30987/graphicon-2019-1-37-41.

Full text
Abstract:
Rendering large 3D scenes with a convincing level of realism is a challenging computer graphics problem. One common approach is to use different levels of detail (LOD) for scene objects, depending on their distance from the observer. Using hierarchical levels of detail (HLOD), in which levels of detail are created not for each object individually but for large groups of objects at once, is more effective for large scenes. However, this method faces great challenges when changes occur in the scene. This paper discusses a specific class of scenes with a deterministic nature of events and introduces a method for effective rendering of such scenes based on the use of so-called hierarchical dynamic levels of detail (HDLOD). Algorithms for generating HDLOD and their use for scene visualization are also described.
APA, Harvard, Vancouver, ISO, and other styles
6

Li, Zongyao, Ren Togo, Takahiro Ogawa, and Miki Haseyama. "Semantic-Aware Unpaired Image-to-Image Translation for Urban Scene Images." In ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2021. http://dx.doi.org/10.1109/icassp39728.2021.9414192.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Yang, Lei, Yu-Chiu Tse, Pedro V. Sander, et al. "Image-based bidirectional scene reprojection." In the 2011 SIGGRAPH Asia Conference. ACM Press, 2011. http://dx.doi.org/10.1145/2024156.2024184.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Patel, Diptiben, and Shanmuganathan Raman. "Scene Text Aware Image Retargeting." In 2019 IEEE Global Conference on Signal and Information Processing (GlobalSIP). IEEE, 2019. http://dx.doi.org/10.1109/globalsip45357.2019.8969407.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Johnson, Justin, Ranjay Krishna, Michael Stark, et al. "Image retrieval using scene graphs." In 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2015. http://dx.doi.org/10.1109/cvpr.2015.7298990.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Johnson, Justin, Agrim Gupta, and Li Fei-Fei. "Image Generation from Scene Graphs." In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2018. http://dx.doi.org/10.1109/cvpr.2018.00133.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Image scene"

1

Tunick, Arnold. SpaceTime Environmental Image Information for Scene Understanding. Defense Technical Information Center, 2016. http://dx.doi.org/10.21236/ad1007247.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Baker, H. H. Building and Using Scene Representation in Image Understanding. Defense Technical Information Center, 1993. http://dx.doi.org/10.21236/ada461044.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Conser, Erik. Improved Scoring Models for Semantic Image Retrieval Using Scene Graphs. Portland State University Library, 2000. http://dx.doi.org/10.15760/etd.5767.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Peterson, Erin D., Scott D. Brown, Timothy J. Hattenberger, and John R. Schott. Surface and Buried Landmine Scene Generation and Validation Using the Digital Imaging and Remote Sensing Image Generation (DIRSIG) Model. Defense Technical Information Center, 2000. http://dx.doi.org/10.21236/ada424769.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Budkewitsch, P., M. A. D'Iorio, P. W. Vachon, D. T. Andersen, and W. H. Pollard. Sources of phase decorrelation in SAR scene coherence images from Arctic environments. Natural Resources Canada/ESS/Scientific and Technical Publishing Services, 1999. http://dx.doi.org/10.4095/219537.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Budkewitsch, P., M. D'Iorio, P. W. Vachon, W. H. Pollard, and D. T. Andersen. Geomorphic, Active Layer and Environmental Change Detection Using SAR Scene Coherence Images. Natural Resources Canada/ESS/Scientific and Technical Publishing Services, 2000. http://dx.doi.org/10.4095/219671.

Full text
APA, Harvard, Vancouver, ISO, and other styles