Academic literature on the topic 'Texture-based rendering'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Texture-based rendering.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Texture-based rendering"

1

Sun, Yuhong, Jiatao Wang, and Lijuan Han. "Pencil drawing rendering based on example texture." Journal of Computational Methods in Sciences and Engineering 17, no. 4 (November 24, 2017): 635–44. http://dx.doi.org/10.3233/jcm-170747.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Celes, Waldemar, and Frederico Abraham. "Fast and versatile texture-based wireframe rendering." Visual Computer 27, no. 10 (August 27, 2011): 939–48. http://dx.doi.org/10.1007/s00371-011-0623-6.

3

Caban, J. J., and P. Rheingans. "Texture-based Transfer Functions for Direct Volume Rendering." IEEE Transactions on Visualization and Computer Graphics 14, no. 6 (November 2008): 1364–71. http://dx.doi.org/10.1109/tvcg.2008.169.

4

Qian, Wen Hua, Dan Xu, Kun Yue, Zheng Guan, and Yuan Yuan Pu. "Texture Deviation Mapping Based on Detail Enhancement." Advanced Engineering Forum 6-7 (September 2012): 32–37. http://dx.doi.org/10.4028/www.scientific.net/aef.6-7.32.

Abstract:
Our work is motivated by providing an effective non-photorealistic rendering technique that gives computer-generated images artistic appearances derived from 2D images. The methods proposed in this paper are inspired by the image deviation mapping constructed from a single background texture image, and we take this deviation mapping as the underlying basis for obtaining artistic appearances. The approach builds on a simple linear filtering convolution operation, which is well suited to progressive coarsening of images and to detail extraction, so that image details such as edges and tone are preserved in the final artistic appearance. The method has a well-defined computational complexity, is easy to implement, and renders quickly.
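The abstract's core building block, progressive coarsening by linear filtering with the detail kept as a residual, can be sketched in a few lines. This is an illustrative 1-D toy, not the paper's algorithm; the function name and the box-filter choice are mine.

```python
import numpy as np

def coarse_and_detail(signal, k=3):
    """Split a 1-D signal into a coarse layer (simple linear box filter)
    and a detail layer (the residual), so that coarse + detail == signal.
    A box kernel stands in for whatever linear filter the paper uses."""
    kernel = np.ones(k) / k
    coarse = np.convolve(signal, kernel, mode="same")
    return coarse, signal - coarse
```

Edges and tone survive in the detail layer, which is the property a deviation-mapping style of rendering relies on.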
5

Tsunematsu, Yuta, Norihiko Kawai, Tomokazu Sato, and Naokazu Yokoya. "Texture Transfer Based on Energy Minimization for Painterly Rendering." Journal of Information Processing 24, no. 6 (2016): 897–907. http://dx.doi.org/10.2197/ipsjjip.24.897.

6

Bajaj, Chandrajit, Insung Ihm, and Sanghun Park. "Compression-Based 3D Texture Mapping for Real-Time Rendering." Graphical Models 62, no. 6 (November 2000): 391–410. http://dx.doi.org/10.1006/gmod.2000.0532.

7

Kniss, J., P. McCormick, A. McPherson, J. Ahrens, J. Painter, A. Keahey, and C. Hansen. "Interactive texture-based volume rendering for large data sets." IEEE Computer Graphics and Applications 21, no. 4 (2001): 52–61. http://dx.doi.org/10.1109/38.933524.

8

Kähler, Ralf, and Hans-Christian Hege. "Texture-based volume rendering of adaptive mesh refinement data." Visual Computer 18, no. 8 (December 1, 2002): 481–92. http://dx.doi.org/10.1007/s00371-002-0174-y.

9

Yang, Chao, Shui Yan Dai, Ling Da Wu, and Rong Huan Yu. "Smoothly Rendering of Large-Scale Vector Data on Virtual Globe." Applied Mechanics and Materials 631-632 (September 2014): 516–20. http://dx.doi.org/10.4028/www.scientific.net/amm.631-632.516.

Abstract:
A method for view-dependent smooth rendering of large-scale vector data on a virtual globe, based on vector textures, is presented. The vector texture is rasterized from the vector data according to a view-dependent quadtree LOD and projected onto the terrain. Smooth transitions between multi-level textures are achieved by dynamically adjusting texture transparency according to the view range in two processes, which avoids texture "popping": in the "IN" process the texture's alpha value increases as the view range goes up, while in the "OUT" process the alpha value decreases. A vector texture buffer updating method based on the least-recently-used algorithm is used to accelerate texture fetching. Finally, real-time rendering of large-scale vector data is implemented on the virtual globe; the results show that the method can render large-scale vector data smoothly in real time.
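The two-process transparency adjustment described above reduces to a pair of complementary alpha ramps over a view-range interval. A minimal sketch; the function and parameter names are illustrative, not from the paper:

```python
def lod_blend_alphas(view_range, near, far):
    """Cross-fade two texture levels over the view-range interval [near, far]:
    as the camera closes in, the finer level's alpha rises ("IN" process)
    while the coarser level's alpha falls ("OUT" process), so the switch
    between LOD levels never produces a visible texture "pop"."""
    t = (far - view_range) / (far - near)  # 0 at far, 1 at near
    t = max(0.0, min(1.0, t))              # clamp outside the interval
    return t, 1.0 - t                      # (fine alpha, coarse alpha)
```

A renderer would draw both texture levels with these alphas during the transition and drop the fully transparent one afterwards.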
10

Zeng, Tao, Yan Liu, and Enshan Ouyang. "Combination of oriented-plane curvature reproduction and squeeze film effect-based texture reproduction to simulate curved and textured surface." Mechanics & Industry 22 (2021): 21. http://dx.doi.org/10.1051/meca/2021024.

Abstract:
The finger skin contains a variety of receptors, which provide multiple tactile sensing channels. When a finger touches the surface of an object, people can simultaneously perceive curvature, texture, softness, temperature, and so on. However, in most research, haptic feedback devices are designed to address only a single channel. In this paper, the rendering of curved and periodically textured surfaces involving two channels, i.e., curvature and texture, was studied. Two psychophysical experiments were conducted to investigate whether coupling kinesthetic feedback of curvature with tactile feedback of texture could reproduce curved and textured surfaces with high fidelity. The results showed a deviation of the point of subjective equality values in terms of curvature and roughness, indicating that curvature rendering and texture rendering affect each other. It is therefore necessary to correct this bias in virtual rendering. The influence of curvature on texture rendering is reduced by recalculating and adjusting the spatial period of the synthesized texture in real time; the influence of texture on curvature rendering is eliminated by compensating for the force difference between touching a physical strip and the artificial stimulus.
More sources

Dissertations / Theses on the topic "Texture-based rendering"

1

Kwatra, Vivek. "Example-based Rendering of Textural Phenomena." Diss., Georgia Institute of Technology, 2005. http://hdl.handle.net/1853/7214.

Abstract:
This thesis explores synthesis by example as a paradigm for rendering real-world phenomena. In particular, phenomena that can be visually described as texture are considered. We exploit, for synthesis, the self-repeating nature of the visual elements constituting these texture exemplars. Techniques for unconstrained as well as constrained/controllable synthesis of both image and video textures are presented. For unconstrained synthesis, we present two robust techniques that can perform spatio-temporal extension, editing, and merging of image as well as video textures. In one of these techniques, large patches of input texture are automatically aligned and seamlessly stitched with each other to generate realistic-looking images and videos. The second technique is based on iterative optimization of a global energy function that measures the quality of the synthesized texture with respect to the given input exemplar. We also present a technique for controllable texture synthesis. In particular, it allows for generation of motion-controlled texture animations that follow a specified flow field. Animations synthesized in this fashion maintain structural properties such as local shape, size, and orientation of the input texture even as they move according to the specified flow. We cast this problem into an optimization framework that tries to simultaneously satisfy the two (potentially competing) objectives of similarity to the input texture and consistency with the flow field. This optimization is a simple extension of the approach used for unconstrained texture synthesis. A general framework for example-based synthesis and rendering is also presented. This framework provides a design space for constructing example-based rendering algorithms. The goal of such algorithms would be to use texture exemplars to render animations for which certain behavioral characteristics need to be controlled. Our motion-controlled texture synthesis technique is an instantiation of this framework where the characteristic being controlled is motion represented as a flow field.
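The global energy the optimization-based technique iterates on measures how well each synthesized neighborhood matches some exemplar neighborhood. Below is a 1-D toy of that energy; the window size and names are mine, and the actual method operates on 2-D pixel neighborhoods:

```python
import numpy as np

def texture_energy(synth, exemplar, w=3):
    """Sum, over every length-w window of the synthesized signal, of the
    squared distance to its best-matching window in the exemplar. Zero
    energy means every local neighborhood also occurs in the exemplar."""
    ex_win = np.lib.stride_tricks.sliding_window_view(exemplar, w)
    energy = 0.0
    for i in range(len(synth) - w + 1):
        win = synth[i:i + w]
        energy += ((ex_win - win) ** 2).sum(axis=1).min()
    return energy
```

An optimization-based synthesizer would repeatedly update the synthesized signal to lower this energy, rather than evaluate it once.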
2

Muddala, Suryanarayana Murthy. "Free View Rendering for 3D Video : Edge-Aided Rendering and Depth-Based Image Inpainting." Doctoral thesis, Mittuniversitetet, Avdelningen för informations- och kommunikationssystem, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-25097.

Abstract:
Three Dimensional Video (3DV) has become increasingly popular with the success of 3D cinema. Moreover, emerging display technology offers an immersive experience to the viewer without the necessity of any visual aids such as 3D glasses. The 3DV applications Three Dimensional Television (3DTV) and Free Viewpoint Television (FTV) are promising technologies for living-room environments, providing an immersive experience and look-around capabilities. In order to provide such an experience, these technologies require a number of camera views captured from different viewpoints. However, capturing and transmitting the required number of views is not feasible, and thus view rendering is employed as an efficient solution to produce the necessary number of views. Depth-image-based rendering (DIBR) is a commonly used rendering method. Although DIBR is a simple approach that can produce the desired number of views, inherent artifacts are a major issue in view rendering. Despite much effort to tackle these rendering artifacts over the years, rendered views still contain visible artifacts. This dissertation addresses three problems in order to improve 3DV quality: 1) how to improve rendered view quality using a direct approach without treating each artifact individually; 2) how to handle disocclusions (a.k.a. holes) in the rendered views in a visually plausible manner using inpainting; and 3) how to reduce spatial inconsistencies in the rendered view. The first problem is tackled by an edge-aided rendering method that uses a direct approach with one-dimensional interpolation, applicable when the virtual camera distance is small. The second problem is addressed by a depth-based inpainting method in the virtual view, which reconstructs the missing texture at disocclusions with background data. The third problem is handled by a rendering method that first inpaints occlusions as a layered depth image (LDI) in the original view and then renders a spatially consistent virtual view. Objective assessments of the proposed methods show improvements over state-of-the-art rendering methods. Visual inspection shows slight improvements for intermediate views rendered from multiview video-plus-depth, and the proposed methods outperform other view rendering methods when rendering from single-view video-plus-depth. The results confirm that the proposed methods are capable of reducing rendering artifacts and producing spatially consistent virtual views. In conclusion, the view rendering methods proposed in this dissertation can support the production of high-quality virtual views from a limited number of input views. When used to create a multi-scopic presentation, the outcome of this dissertation can benefit 3DV technologies by improving the immersive experience.
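The disocclusions the dissertation inpaints arise directly from the forward warp at the heart of DIBR. A toy 1-D version (the names and the simple depth-to-disparity model are mine, not the dissertation's) makes the holes explicit:

```python
import numpy as np

def dibr_warp_row(colors, depths, baseline):
    """Shift each source pixel by a disparity inversely proportional to its
    depth, keeping the nearest contributor per target cell (a z-test).
    Target cells no source pixel reaches stay NaN: these are the
    disocclusions a depth-based inpainting pass must later fill."""
    n = len(colors)
    out = np.full(n, np.nan)        # NaN marks a hole
    out_depth = np.full(n, np.inf)  # depth buffer for the z-test
    for x in range(n):
        tx = x + int(round(baseline / depths[x]))
        if 0 <= tx < n and depths[x] < out_depth[tx]:
            out[tx] = colors[x]
            out_depth[tx] = depths[x]
    return out
```

Pixels with small depth (foreground) shift further than the background, uncovering regions the source view never saw.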
3

Jansson, Emil. "Matematisk generering och realtidsrendering av vegetation i Gizmo3D." Thesis, Linköping University, Department of Electrical Engineering, 2004. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-2324.

Abstract:

Rendering outdoor scenes with abundant vegetation in real time is a major challenge, with important applications in visualization and simulation. Some progress has been made in recent years, but a previously unsolved difficulty has been combining high rendering quality with abundant variation in scenes.

I present a method to mathematically generate and render vegetation in real time, with implementation in the scene graph Gizmo3D. The most important quality of the method is its ability to render scenes with many unique specimens with very low aliasing.

To obtain real-time performance, a hierarchical level-of-detail (LOD) scheme is used, which facilitates generation of vegetation at the desired level of detail on the fly. The LOD scheme is texture-based and uses textures that are common to all specimens of a species. The most important contribution is that I combine this LOD scheme with the use of semi-transparency, which makes it possible to obtain low aliasing.

Scenes with semi-transparency require correct rendering order. I solve this problem by introducing a new method for approximate depth sorting. An additional contribution is a variant of axis-aligned billboards, designated the blob, which is used in the LOD scheme. Furthermore, building blocks consisting of small branches are used to increase generation performance.
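The rendering-order requirement for semi-transparent billboards comes down to drawing farthest geometry first. A minimal sketch of the idea (the thesis's actual approximate sort is more elaborate; names here are illustrative):

```python
def back_to_front(billboards, cam):
    """Order billboard positions farthest-first relative to the camera,
    since alpha blending composites correctly only when semi-transparent
    geometry is drawn back to front."""
    def dist2(p):
        # squared distance is enough for ordering; no sqrt needed
        return sum((a - b) ** 2 for a, b in zip(p, cam))
    return sorted(billboards, key=dist2, reverse=True)
```

Approximate schemes trade exactness of this ordering for speed, accepting rare blending errors where billboards interpenetrate.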

4

Huff, Rafael. "Recorte volumétrico usando técnicas de interação 2D e 3D." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2006. http://hdl.handle.net/10183/7385.

Abstract:
Visualization of volumetric datasets is common in many fields and has been an active area of research in the past two decades. In spite of developments in volume visualization techniques, interacting with large datasets still demands research efforts due to perceptual and performance issues. The support of graphics hardware for texture-based visualization allows efficient implementation of rendering techniques that can be combined with interactive sculpting tools to enable interactive inspection of 3D datasets. Many studies regarding performance optimization of sculpting tools have been reported, but very few are concerned with the interaction techniques employed. The purpose of this work is the development of interactive, intuitive, and easy-to-use sculpting tools. Initially, a review of the main techniques for direct volume visualization and sculpting is presented. The best solution that guarantees the required interaction is highlighted. Afterwards, in order to identify the most user-friendly interaction technique for volume sculpting, several interaction techniques, metaphors and taxonomies are presented. Based on that, this work presents the development of three generic sculpting tools implemented using two different interaction metaphors, which are often used by users of 3D applications: virtual pointer and virtual hand. Interactive rates for these sculpting tools are obtained by running special fragment programs on the graphics hardware which specify regions within the volume to be discarded from rendering based on geometric predicates. After development, the performance, precision and user preference of the sculpting tools were evaluated to compare the interaction metaphors. Afterward, the tools were evaluated by comparing the use of a 3D mouse against a conventional wheel mouse for guiding volume and tools manipulation. Two-handed input was also tested with both types of mouse. The results from the evaluation experiments are presented and discussed.
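The fragment programs described above boil down to evaluating a geometric predicate per sample and discarding on success. A CPU sketch of the same idea on a voxel grid (the sphere predicate and names are illustrative; the thesis runs the test per fragment on graphics hardware):

```python
import numpy as np

def sculpt_sphere(volume, center, radius):
    """Zero the opacity of every voxel whose position satisfies the cut
    predicate (here: inside a sphere given as (x, y, z) center), mimicking
    a fragment program that discards fragments before compositing."""
    zi, yi, xi = np.indices(volume.shape)
    d2 = (xi - center[0]) ** 2 + (yi - center[1]) ** 2 + (zi - center[2]) ** 2
    carved = volume.copy()
    carved[d2 <= radius ** 2] = 0.0  # predicate true -> discard
    return carved
```

Swapping the predicate (box, half-space, user-drawn lasso volume) changes the sculpting tool without touching the rendering path.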
5

Ang, Jason. "Offset Surface Light Fields." Thesis, University of Waterloo, 2003. http://hdl.handle.net/10012/1100.

Abstract:
For producing realistic images, reflection is an important visual effect. Reflections of the environment are important not only for highly reflective objects, such as mirrors, but also for more common objects such as brushed metals and glossy plastics. Generating these reflections accurately at real-time rates for interactive applications, however, is a difficult problem. Previous works in this area have made assumptions that sacrifice accuracy in order to preserve interactivity. I will present an algorithm that tries to handle reflection accurately in the general case for real-time rendering. The algorithm uses a database of prerendered environment maps to render both the original object itself and an additional bidirectional reflection distribution function (BRDF). The algorithm performs image-based rendering in reflection space in order to achieve accurate results. It also uses graphics processing unit (GPU) features to accelerate rendering.
6

Borikar, Siddharth Rajkumar. "FAST ALGORITHMS FOR FRAGMENT BASED COMPLETION IN IMAGES OF NATURAL SCENES." Master's thesis, University of Central Florida, 2004. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/4424.

Abstract:
Textures are used widely in computer graphics to represent fine visual details and produce realistic looking images. Often it is necessary to remove some foreground object from the scene. Removal of the portion creates one or more holes in the texture image. These holes need to be filled to complete the image. Various methods like clone brush strokes and compositing processes are used to carry out this completion. User skill is required in such methods. Texture synthesis can also be used to complete regions where the texture is stationary or structured. Reconstructing methods can be used to fill in large-scale missing regions by interpolation. Inpainting is suitable for relatively small, smooth and non-textured regions. A number of other approaches focus on the edge and contour completion aspect of the problem. In this thesis we present a novel approach for addressing this image completion problem. Our approach focuses on image based completion, with no knowledge of the underlying scene. In natural images there is a strong horizontal orientation of texture/color distribution. We exploit this fact in our proposed algorithm to fill in missing regions from natural images. We follow the principle of figural familiarity and use the image as our training set to complete the image.
M.S., School of Computer Science, Engineering and Computer Science, University of Central Florida.
7

Lee, Jiunn-Shyan, and 李俊賢. "A Study of Art-Based Rendering and Example-Based Texture Synthesis." Thesis, 2004. http://ndltd.ncl.edu.tw/handle/20511688817145069158.

Abstract:
Ph.D. dissertation, National Chung Hsing University, Institute of Computer Science, ROC year 92 (2003).
This dissertation introduces new algorithms for computer-generated non-photorealistic images that look like hand-made paintings. Two art-based rendering algorithms are developed: Ink Diffusion Synthesis and Impressionist Line Integral Convolution. Then, we explore a patch-based sampling algorithm for texture synthesis and transfer denoted as Example-Based Texture Synthesis. Calligraphy has blossomed through a long history in the Orient and is appreciated by many people. In this thesis, we first present an interactive system capable of synthesizing realistic ink diffusion for calligraphic writing education. The system provides two frames for potential users. The first frame presents the outline of a Chinese character selected by the user, and a menu where the user can specify parameters such as ink density and paper style. Users imitate the action of calligraphic writing through mouse movements along the skeleton of the character. Once this has been completed, an ink diffusion effect based on the user's mouse movement is synthesized and demonstrated in the second frame. We present a physically-based model combined with fibrous paper structures to synthesize the ink diffusion effect. The experimental results show that this system adequately enlightens and entertains both skillful students and naive novices. In conclusion, by using our system, users, especially beginners, benefit from interactively practicing calligraphy as often as they want without feeling bored. Next, we discuss the line integral convolution (LIC) method, which was originally developed for imaging vector fields in scientific visualization and has the potential to produce images with directional characteristics. In this study, we present four techniques that explore LIC for generating images in the style of the Impressionists. In particular, we develop an Impressionist Line Integral Convolution (ILIC) algorithm to generate images with Impressionist styles. This algorithm takes advantage of directional information provided by a photographic image, incorporates a shading technique to blend cool and warm colors into the image, and applies the revised LIC method to imitate paintings in the Impressionist style. Furthermore, we propose a color fidelity technique, which takes advantage of the cool-to-warm scheme to imitate conventional artistic painting and enhance visual depth perception. We also present an information preservation technique, which quantifies image details to control the convolution length, thus preserving subtle information during the convolution process. Finally, we demonstrate a top-down sampling technique where a series of artistic mip-maps is generated to construct aesthetic virtual environments. These maps provide consistent strokes of directional cues, achieving frame-to-frame coherence in an interactive walkthrough system. Both silhouette drawing and the tour-into-the-picture (TIP) approach are employed to enhance the user's immersion in a virtual world. The experimental results demonstrate the merits of our techniques in generating images in the Impressionist style and constructing an interactive walkthrough system that provides an immersive experience rendered in painting styles. Last, texture synthesis has been widely studied in recent years, and patch-based sampling has proven superior in synthesis quality and computation time. However, it suffers from the problem of non-parallel textures, usually captured by tilted camera projection. Here we propose a novel texture synthesis framework to tackle the problem of displacement textures. Initially, we adopt a patch-based sampling algorithm that overlaps texture patches to synthesize textures of arbitrary size with similar appearance. Most importantly, we present a hybrid method combining dynamic programming and a feathering technique to produce a consistent transition between two stitched boundaries.
Secondly, we develop a synthesis system with no user intervention during the synthesis process. Our system is amenable to synthesizing tiling textures as well as constrained textures. Thirdly, a novel framework for displacement texture synthesis is proposed. Given a tilted source, our algorithm efficiently renders an extended texture with the same slant as the input sample. In addition, our method can easily rectify a displacement image to a vertical image, and vice versa. Experimental results show that the proposed framework succeeds in synthesizing frontal non-parallel textures. Finally, we propose a novel non-iterative transfer algorithm. Given a source and a target, our algorithm efficiently renders the target image by transferring matched source patches without incurring iteration. The method takes into account two principles of target fidelity and neighbor coherence. Experimental results demonstrate that our approach presents a more visually plausible appearance and runs faster than the iterative counterpart.
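The LIC step the dissertation builds its Impressionist filter on is easy to sketch for the degenerate case of a uniform horizontal field, where every streamline is just a row segment. A toy, not the ILIC algorithm; the names are mine:

```python
import numpy as np

def lic_horizontal(noise, L):
    """Line integral convolution for a uniform horizontal vector field:
    each output pixel averages the input noise along the streamline
    through it, i.e. a row segment of half-length L, smearing the noise
    into streaks that follow the field direction."""
    h, w = noise.shape
    out = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            lo, hi = max(0, x - L), min(w, x + L + 1)
            out[y, x] = noise[y, lo:hi].mean()
    return out
```

A full LIC traces curved streamlines through an arbitrary field; ILIC further derives that field from image orientation and modulates the convolution length by image detail.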
8

Wu, Shun-Liang, and 吳順良. "Rendering Complex Scenes Based on Spatial Subdivision and Texture-with-Depth." Thesis, 2000. http://ndltd.ncl.edu.tw/handle/25594323917654572643.

Abstract:
M.S. thesis, National Chiao Tung University, Department of Computer Science and Information Engineering, ROC year 88 (1999).
In this thesis, we combine geometry-based and image-based rendering techniques to design and implement a VR navigation system whose efficiency is relatively independent of scene complexity. The system has two phases. In the preprocessing phase, the x-y plane of a 3D scene is partitioned into equal-sized hexagonal cells, called navigation cells, each associated with a larger image cell sharing the same center. Each side face of the image cell stores a cached image with depth, obtained by rendering the scene using the cell's center as the projection center and the side face as the window. The depth mesh of the cached image is obtained by triangulating the cached image using depth. In the run-time phase, the participant navigates inside a navigation cell and views an image derived by combining geometry-based rendering of the objects inside the corresponding image cell with image-based rendering of the objects outside it. Objects outside the image cell are rendered by warping and reprojecting the depth mesh with the cached texture image, and objects inside the image cell are rendered using meshes of appropriate resolution. A visibility-culling technique is also integrated to speed up geometry-based rendering.
9

Keng-JungHsu and 許耕榮. "GPU Implementation for Centralized Texture Depth Depacking and Depth Image-based Rendering." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/hm9f3u.

10

Wang, Wei-Jhih, and 王暐智. "Compression and Rendering of Multi-Spectral Bidirectional Texture Functions Using GPU-Based Tensor Approximation." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/3527nz.

Abstract:
M.S. thesis, Yuan Ze University, Department of Computer Science and Engineering, ROC year 105 (2016).
Multi-Spectral Bidirectional Texture Functions (MSBTFs) are designed for accurate color reproduction of complex materials in virtual scenes under arbitrary illumination. However, rendering MSBTFs at interactive rates is challenging because the datasets are huge. This thesis applies a GPU-based tensor approximation framework to compress MSBTFs and discusses practical details of compressing and rendering them. We also present a heuristic method that selects suitable compression parameters for better offline performance, and a novel technique for efficient rendering at runtime. Finally, we analyze the offline performance of the GPU-based framework with thorough experiments to demonstrate its efficiency and to find appropriate configurations for compressing MSBTFs.
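The trade-off behind tensor-approximation compression can be seen in its 2-D special case: a truncated SVD of the flattened texture data. This is a sketch of the principle only, with names of my choosing; the thesis uses higher-order tensor decompositions on the GPU:

```python
import numpy as np

def low_rank_compress(data, rank):
    """Keep only the top `rank` singular components of a flattened texture
    matrix; the retained factors are far smaller than the original data."""
    u, s, vt = np.linalg.svd(data, full_matrices=False)
    return u[:, :rank], s[:rank], vt[:rank]

def reconstruct(u, s, vt):
    """Rebuild the (approximate) matrix from the retained factors."""
    return (u * s) @ vt
```

At render time only the small factors need to live in GPU memory; texels are reconstructed on the fly, which is what makes interactive-rate MSBTF rendering feasible.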

Book chapters on the topic "Texture-based rendering"

1

Debevec, Paul, Yizhou Yu, and George Borshukov. "Efficient View-Dependent Image-Based Rendering with Projective Texture-Mapping." In Rendering Techniques ’98, 105–16. Vienna: Springer Vienna, 1998. http://dx.doi.org/10.1007/978-3-7091-6453-2_10.

2

Adi, Waskito, and Suziah Sulaiman. "Haptic Texture Rendering Based on Visual Texture Information: A Study to Achieve Realistic Haptic Texture Rendering." In Lecture Notes in Computer Science, 279–87. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-05036-7_27.

3

Max, Nelson, Oliver Deussen, and Brett Keating. "Hierarchical Image-Based Rendering using Texture Mapping Hardware." In Eurographics, 57–62. Vienna: Springer Vienna, 1999. http://dx.doi.org/10.1007/978-3-7091-6809-7_6.

4

Dumont, Reynald, Fabio Pellacini, and James A. Ferwerda. "A Perceptually-Based Texture Caching Algorithm for Hardware-Based Rendering." In Eurographics, 249–56. Vienna: Springer Vienna, 2001. http://dx.doi.org/10.1007/978-3-7091-6242-2_23.

5

Wang, Yubin, Meijun Sun, Zheng Wang, and Shiyao Wang. "2D Texture Library Based Fast 3D Ink Style Rendering." In Proceedings of the 2012 International Conference on Cybernetics and Informatics, 1919–29. New York, NY: Springer New York, 2013. http://dx.doi.org/10.1007/978-1-4614-3872-4_246.

6

Zabulis, Xenophon, Manolis I. A. Lourakis, and Stefanos S. Stefanou. "3D Pose Refinement Using Rendering and Texture-Based Matching." In Computer Vision and Graphics, 672–79. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-11331-9_80.

7

Hasegawa, Kyoko, Kozaburo Hachimura, and Satoshi Tanaka. "3D Fused Visualization Based on Particles-Based Rendering with Opacity Using Volume Texture." In Communications in Computer and Information Science, 160–65. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-45037-2_15.

8

LaMar, Eric, Mark A. Duchaineau, Bernd Hamann, and Kenneth I. Joy. "Multiresolution Techniques for Interactive Texture-based Rendering of Arbitrarily Oriented Cutting Planes." In Eurographics, 105–14. Vienna: Springer Vienna, 2000. http://dx.doi.org/10.1007/978-3-7091-6783-0_11.

9

Lee, Won-Jong, Woo-Chan Park, Jung-Woo Kim, Tack-Don Han, Sung-Bong Yang, and Francis Neelamkavil. "A Bandwidth Reduction Scheme for 3D Texture-Based Volume Rendering on Commodity Graphics Hardware." In Computational Science and Its Applications – ICCSA 2004, 741–50. Berlin, Heidelberg: Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/978-3-540-24709-8_78.

10

Meyer, Joerg, Ragnar Borg, Ikuko Takanashi, Eric B. Lum, and Bernd Hamann. "Segmentation and Texture-Based Hierarchical Rendering Techniques for Large-Scale Real-Color Biomedical Image Data." In Data Visualization, 169–82. Boston, MA: Springer US, 2003. http://dx.doi.org/10.1007/978-1-4615-1177-9_12.


Conference papers on the topic "Texture-based rendering"

1

Celes, W., and F. Abraham. "Texture-Based Wireframe Rendering." In 2010 23rd SIBGRAPI Conference on Graphics, Patterns and Images (SIBGRAPI 2010). IEEE, 2010. http://dx.doi.org/10.1109/sibgrapi.2010.28.

2

Li, Jialu, Aiguo Song, and Xiaorui Zhang. "Image-based haptic texture rendering." In the 9th ACM SIGGRAPH Conference. New York, New York, USA: ACM Press, 2010. http://dx.doi.org/10.1145/1900179.1900230.

3

Chen, Shihao, Guiqing He, and Chongyang Hao. "Rapid Texture-based Volume Rendering." In 2009 International Conference on Environmental Science and Information Application Technology, ESIAT. IEEE, 2009. http://dx.doi.org/10.1109/esiat.2009.147.

4

Sun, Wenquan, Liyu Tang, Chongcheng Chen, and Gang Chen. "Terrain rendering technology based on vertex texture." In 2010 International Conference on Audio, Language and Image Processing (ICALIP). IEEE, 2010. http://dx.doi.org/10.1109/icalip.2010.5684967.

5

Woodford, O. J., and A. Fitzgibbon. "Fast Image-based Rendering using Hierarchical Texture Priors." In British Machine Vision Conference 2005. British Machine Vision Association, 2005. http://dx.doi.org/10.5244/c.19.38.

6

Ndjiki-Nya, P., M. Koppel, D. Doshkov, H. Lakshman, P. Merkle, K. Müller, and T. Wiegand. "Depth image based rendering with advanced texture synthesis." In 2010 IEEE International Conference on Multimedia and Expo (ICME). IEEE, 2010. http://dx.doi.org/10.1109/icme.2010.5583559.

7

Pai, Hong-Yi. "Texture designs and workflows for physically based rendering using procedural texture generation." In 2019 IEEE Eurasia Conference on IOT, Communication and Engineering (ECICE). IEEE, 2019. http://dx.doi.org/10.1109/ecice47484.2019.8942651.

8

Dong, Junyu, Xinghui Dong, Shanli Mou, and Bo Qin. "Flow Visualization Based on Rendering of 3D Surface Texture." In 2008 International Workshop on Geoscience and Remote Sensing (ETT and GRS). IEEE, 2008. http://dx.doi.org/10.1109/ettandgrs.2008.411.

9

Chen, Yixin, Wenying Qiu, Xiaohao Wang, and Min Zhang. "Tactile Rendering of Fabric Textures Based on Texture Recognition." In 2019 IEEE 2nd International Conference on Micro/Nano Sensors for AI, Healthcare, and Robotics (NSENS). IEEE, 2019. http://dx.doi.org/10.1109/nsens49395.2019.9293989.

10

Li, Qian, Changhui Sun, and MeiKe Wang. "Low-pass filter along ray in texture-based volume rendering." In 2012 IEEE International Conference on Computer Science and Automation Engineering (CSAE). IEEE, 2012. http://dx.doi.org/10.1109/csae.2012.6272574.
