Journal articles on the topic 'Non-photorealistic rendering'

Listed below are the top 50 journal articles on the topic 'Non-photorealistic rendering'. Abstracts are included whenever they are available in the metadata.

1. Greenberg, Steph. "Why non-photorealistic rendering?" ACM SIGGRAPH Computer Graphics 33, no. 1 (February 1999): 56–57. http://dx.doi.org/10.1145/563666.563687.
2. Goldstein, Dan. "Intentional non-photorealistic rendering." ACM SIGGRAPH Computer Graphics 33, no. 1 (February 1999): 62–63. http://dx.doi.org/10.1145/563666.563689.
3. Kennelly, Patrick J., and A. Jon Kimerling. "Non-Photorealistic Rendering and Terrain Representation." Cartographic Perspectives, no. 54 (June 1, 2006): 35–54. http://dx.doi.org/10.14714/cp54.345.
Abstract:
In recent years, a branch of computer graphics termed non-photorealistic rendering (NPR) has defined its own niche in the computer graphics community. While photorealistic rendering attempts to render virtual objects into images that cannot be distinguished from a photograph, NPR looks at techniques designed to achieve other ends. Its goals can be as diverse as imitating an artistic style, mimicking a look comparable to images created with specific reproduction techniques, or adding highlights and details to images. In doing so, NPR has overlapped the study of cartography concerned with representing terrain in two ways. First, NPR has formulated several techniques that are similar or identical to antecedent terrain rendering techniques including inclined contours and hachures. Second, NPR efforts to highlight or add information in renderings often focus on the use of innovative and meaningful combinations of visual variables such as orientation and color. Such efforts are similar to recent terrain rendering research focused on methods to symbolize disparate areas of slope and aspect on shaded terrain representations. We compare these fields of study in an effort to increase awareness and foster collaboration between researchers with similar interests.
4. Yoshinori, Sugano. "Manga and non-photorealistic rendering." ACM SIGGRAPH Computer Graphics 33, no. 1 (February 1999): 65–66. http://dx.doi.org/10.1145/563666.563691.
5. Mangiatordi, F., E. Pallotti, V. Baroncini, and L. Capodiferro. "Non photorealistic rendering in frequency domain." Electronic Imaging 2016, no. 15 (February 14, 2016): 1–7. http://dx.doi.org/10.2352/issn.2470-1173.2016.15.ipas-182.
6. Kass, Michael, and Davide Pesare. "Coherent noise for non-photorealistic rendering." ACM Transactions on Graphics 30, no. 4 (July 2011): 1–6. http://dx.doi.org/10.1145/2010324.1964925.
7. Cooper, Doug. "Personal thoughts on non-photorealistic rendering." ACM SIGGRAPH Computer Graphics 33, no. 1 (February 1999): 64. http://dx.doi.org/10.1145/563666.563690.
8. Zou, Dan, Wen Hua Qian, Jin Xu, Zheng Peng Zhao, and Zhi Ming Chen. "Non-Photorealistic Rendering Effect of Halftoning." Applied Mechanics and Materials 373-375 (August 2013): 473–77. http://dx.doi.org/10.4028/www.scientific.net/amm.373-375.473.
Abstract:
To provide with an effective method for non-photorealistic rendering for computer generated images with halftoning artistic appearances from 2D images motivates our work in this paper. The methods proposed in this paper are inspired by improved error diffusion method and image enhancement, and the whole diffusion algorithm is based on the average threshold. Firstly, source image should be transferred to the gray image. Then, error diffusion, spread component and parameter confirming can be used to obtain more details. Experimental results show that the proposed method can simulate the halftoning effect in real time, and blemishes of the results can be eliminated.
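The abstract of entry 8 describes halftoning by error diffusion against an average threshold. As a rough illustration of that general idea only, here is a minimal NumPy sketch using the classic Floyd–Steinberg weights with a global-mean threshold; it is not the authors' improved algorithm, and the threshold choice is an assumption.

```python
import numpy as np

def error_diffusion_halftone(gray: np.ndarray) -> np.ndarray:
    """Binary halftone of a grayscale image in [0, 1] via error diffusion.

    Classic Floyd-Steinberg weights with a global-mean threshold; a sketch of
    the general technique, not the paper's improved average-threshold variant.
    """
    img = gray.astype(np.float64).copy()
    h, w = img.shape
    threshold = img.mean()  # "average threshold" stand-in (assumption)
    out = np.zeros((h, w), dtype=np.uint8)
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            new = 1.0 if old >= threshold else 0.0
            out[y, x] = int(new * 255)
            err = old - new
            # Push the quantisation error onto not-yet-visited neighbours.
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    img[y + 1, x - 1] += err * 3 / 16
                img[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    img[y + 1, x + 1] += err * 1 / 16
    return out
```

As the abstract notes, the colour source image would first be converted to grayscale before this step.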
9. Wang, Haitao, Jian J. Zhang, Stan Z. Li, and Yangsheng Wang. "Shape and texture preserved non-photorealistic rendering." Computer Animation and Virtual Worlds 15, no. 3-4 (June 16, 2004): 453–61. http://dx.doi.org/10.1002/cav.49.
10. Zhao, Yang, Wen Hua Qian, Jin Xu, and Zhi Ming Chen. "Sketch Simulating Based on Non-Photorealistic Rendering." Applied Mechanics and Materials 353-356 (August 2013): 3619–22. http://dx.doi.org/10.4028/www.scientific.net/amm.353-356.3619.
Abstract:
To provide with an effective method for non-photorealistic rendering for computer generated images with sketch artistic appearances from 2D images motivates our work in this paper. The character of lines, details in the source image can be smoothed by the technique of guided filter, and then the abstract smooth appearance of the image can be simulated. In addition, the smooth effect can be enlarged and enhanced by the technique of improved line integral convolution, so the final sketch artistic image can be obtained. Experimental results show that the proposed method can simulate the real sketch effect.
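Entry 10 builds its sketch effect on guided filtering followed by line integral convolution. The snippet below is a minimal single-channel guided filter in the style of He et al., offered only as a stand-in for that smoothing stage; the radius and eps values are assumptions and the line-integral-convolution step is not reproduced.

```python
import cv2
import numpy as np

def guided_filter(guide: np.ndarray, src: np.ndarray,
                  radius: int = 8, eps: float = 0.02) -> np.ndarray:
    """Grayscale guided filter (box-filter formulation).

    `guide` and `src` are float32 images in [0, 1]; calling it self-guided,
    guided_filter(gray, gray), gives the edge-aware abstraction the cited
    abstract relies on before stroke simulation.
    """
    ksize = (2 * radius + 1, 2 * radius + 1)
    mean_I = cv2.blur(guide, ksize)
    mean_p = cv2.blur(src, ksize)
    cov_Ip = cv2.blur(guide * src, ksize) - mean_I * mean_p
    var_I = cv2.blur(guide * guide, ksize) - mean_I * mean_I
    a = cov_Ip / (var_I + eps)          # per-pixel linear coefficients
    b = mean_p - a * mean_I
    mean_a = cv2.blur(a, ksize)
    mean_b = cv2.blur(b, ksize)
    return mean_a * guide + mean_b
```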
11. Baniasadi, Maryam, and Brian J. Ross. "Exploring non-photorealistic rendering with genetic programming." Genetic Programming and Evolvable Machines 16, no. 2 (October 26, 2014): 211–39. http://dx.doi.org/10.1007/s10710-014-9234-0.
12. O'Dowd, Paul, Carinna Parraman, and Mikaela Harding. "Vector Driven 2.5D Printing with Non-Photorealistic Rendering." Electronic Imaging 2016, no. 20 (February 14, 2016): 1–6. http://dx.doi.org/10.2352/issn.2470-1173.2016.20.color-343.
13. Yoon, Hyun-Cheol, and Jong-Seung Park. "Non-Photorealistic Rendering Using CUDA-Based Image Segmentation." KIPS Transactions on Software and Data Engineering 4, no. 11 (November 30, 2015): 529–36. http://dx.doi.org/10.3745/ktsde.2015.4.11.529.
14. Qian, Wen Hua, Dan Xu, Zheng Guan, and Kun Yue. "Fluid Artistic Research Based on Non-Photorealistic Rendering." Advanced Science Letters 6, no. 1 (March 15, 2012): 494–97. http://dx.doi.org/10.1166/asl.2012.2291.
15. Scalera, Lorenzo, Stefano Seriani, Alessandro Gasparetto, and Paolo Gallina. "Non-Photorealistic Rendering Techniques for Artistic Robotic Painting." Robotics 8, no. 1 (February 11, 2019): 10. http://dx.doi.org/10.3390/robotics8010010.
Abstract:
In this paper, we present non-photorealistic rendering techniques that are applied together with a painting robot to realize artworks with original styles. Our robotic painting system is called Busker Robot and it has been considered of interest in recent art fairs and international exhibitions. It consists of a six degree-of-freedom collaborative robot and a series of image processing and path planning algorithms. In particular, here, two different rendering techniques are presented and a description of the experimental set-up is carried out. Finally, the experimental results are discussed by analyzing the elements that can account for the aesthetic appreciation of the artworks.
16. Yang, Chuan-Kai, and Hui-Lin Yang. "Realization of Seurat’s pointillism via non-photorealistic rendering." Visual Computer 24, no. 5 (November 15, 2007): 303–22. http://dx.doi.org/10.1007/s00371-007-0183-y.
17. Agrawal, Amit. "Non-photorealistic Rendering: Unleashing the Artist's Imagination [Graphically Speaking]." IEEE Computer Graphics and Applications 29, no. 4 (July 2009): 81–85. http://dx.doi.org/10.1109/mcg.2009.61.
18. Lindemeier, Thomas, Jens Metzner, Lena Pollak, and Oliver Deussen. "Hardware-Based Non-Photorealistic Rendering Using a Painting Robot." Computer Graphics Forum 34, no. 2 (May 2015): 311–23. http://dx.doi.org/10.1111/cgf.12562.
19. Haller, Michael, Christian Hanl, and Jeremiah Diephuis. "Non-photorealistic rendering techniques for motion in computer games." Computers in Entertainment 2, no. 4 (October 2004): 11. http://dx.doi.org/10.1145/1037851.1037869.
20. Lu, Ping, Bin Sheng, Shengmei Luo, Xia Jia, and Wen Wu. "Image-based non-photorealistic rendering for realtime virtual sculpting." Multimedia Tools and Applications 74, no. 21 (August 22, 2014): 9697–714. http://dx.doi.org/10.1007/s11042-014-2146-4.
21. Qian, Wenhua, Dan Xu, Kun Yue, Zheng Guan, Yuanyuan Pu, and Yongjie Shi. "Gourd pyrography art simulating based on non-photorealistic rendering." Multimedia Tools and Applications 76, no. 13 (August 15, 2016): 14559–79. http://dx.doi.org/10.1007/s11042-016-3801-8.
22. Lu, Jiajun, Fangyan Dong, and Kaoru Hirota. "Gradient-Related Non-Photorealistic Rendering for High Dynamic Range Images." Journal of Advanced Computational Intelligence and Intelligent Informatics 17, no. 4 (July 20, 2013): 628–36. http://dx.doi.org/10.20965/jaciii.2013.p0628.
Abstract:
A non-photorealistic rendering (NPR) method based on elements, usually strokes, is proposed for rendering high dynamic range (HDR) images to mimic the visual perception of human artists and designers. It enables strokes generated in the rendering process to be placed accurately on account of improvements in computing gradient values especially in regions having particularly high or low luminance. Experimental results using a designed pattern show that angles of gradient values obtained from HDR images have a reduction in averaged error of up to 57.5% in comparison to that of conventional digital images. A partial experiment on incorporating HDR images into other NPR styles, such as dithering, shows the wide compatibility of HDR images in providing source information for NPR processes.
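The point of entry 22 is that gradient directions computed from unclipped HDR luminance stay meaningful in very bright and very dark regions, where 8-bit images lose them. A minimal sketch of that computation follows; the Rec. 709 weights and the log compression are assumptions for illustration, not details taken from the paper.

```python
import numpy as np

def hdr_gradient_angles(hdr_rgb: np.ndarray) -> np.ndarray:
    """Per-pixel gradient angles (radians) from a linear-radiance HDR image.

    `hdr_rgb` is a float array of shape (H, W, 3), e.g. decoded from a
    Radiance .hdr or OpenEXR file. Working on unclipped luminance is what
    preserves gradients in extreme highlights and shadows.
    """
    lum = (0.2126 * hdr_rgb[..., 0]
           + 0.7152 * hdr_rgb[..., 1]
           + 0.0722 * hdr_rgb[..., 2])
    log_lum = np.log1p(lum)            # tame the dynamic range without clipping
    gy, gx = np.gradient(log_lum)      # finite-difference gradients
    return np.arctan2(gy, gx)          # candidate stroke orientations
```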
23. Chang, Youngha, Suguru Saito, and Masayuki Nakajima. "Non-Photorealistic Line Rendering Based on User's Style of Drawing." Journal of the Institute of Image Information and Television Engineers 56, no. 7 (2002): 1127–33. http://dx.doi.org/10.3169/itej.56.1127.
24. Jeon, Jae-Woong, Hyun-Ho Jang, and Yoon-Chul Choy. "Processing Techniques for Non-photorealistic Contents Rendering in Mobile Devices." Journal of the Korea Contents Association 10, no. 8 (August 28, 2010): 119–29. http://dx.doi.org/10.5392/jkca.2010.10.8.119.
25. Hamel, J., and T. Strothotte. "Capturing and Re-Using Rendition Styles for Non-Photorealistic Rendering." Computer Graphics Forum 18, no. 3 (September 1999): 173–82. http://dx.doi.org/10.1111/1467-8659.00338.
26. Csebfalvi, Balazs, Lukas Mroz, Helwig Hauser, Andreas Konig, and Eduard Groller. "Fast Visualization of Object Contours by Non-Photorealistic Volume Rendering." Computer Graphics Forum 20, no. 3 (September 2001): 452–60. http://dx.doi.org/10.1111/1467-8659.00538.
27. Son, Tae-Il, and Kyoung-Ju Park. "Efficient Non-photorealistic Rendering Technique in Single Images and Video." Journal of Korea Multimedia Society 15, no. 8 (August 31, 2012): 977–85. http://dx.doi.org/10.9717/kmms.2012.15.8.977.
28. Mower, James E. "Fast Image-Space Silhouette Extraction for Non-Photorealistic Landscape Rendering." Transactions in GIS 19, no. 5 (November 6, 2014): 678–93. http://dx.doi.org/10.1111/tgis.12118.
29. Collomosse, John, and Tobias Isenberg. "Special section on Non-Photorealistic Animation and Rendering (NPAR) 2010." Computers & Graphics 35, no. 1 (February 2011): iv–v. http://dx.doi.org/10.1016/j.cag.2010.11.016.
30. Kim, Jong-Hyun, and Jung Lee. "Layered non-photorealistic rendering with anisotropic depth-of-field filtering." Multimedia Tools and Applications 79, no. 1-2 (October 28, 2019): 1291–309. http://dx.doi.org/10.1007/s11042-019-08387-2.
31. Elber, Gershon, and Elaine Cohen. "Probabilistic silhouette based importance toward line-art non-photorealistic rendering." Visual Computer 22, no. 9-11 (August 30, 2006): 793–804. http://dx.doi.org/10.1007/s00371-006-0065-8.
32. Dong, Feng, and Gordon J. Clapworthy. "Volumetric texture synthesis for non-photorealistic volume rendering of medical data." Visual Computer 21, no. 7 (July 22, 2005): 463–73. http://dx.doi.org/10.1007/s00371-005-0294-2.
33. Baso, Budiman, Irit Maulana Sapta, and Saniyatul Mawaddah. "Algoritma Non-Photorealistic Rendering untuk Kartun Menggunakan K-Means dan Canny [Non-Photorealistic Rendering Algorithm for Cartoons Using K-Means and Canny]." Journal of Information and Technology 1, no. 1 (February 13, 2021): 1–5. http://dx.doi.org/10.32938/jitu.v1i1.888.
Abstract:
Cartoons are one type of illustration usually in a non-realistic or semi-realistic style. To make a cartoon drawing manually requires good drawing ability. So, not everyone can make cartoons. This research proposes a non-photorealistic rendering algorithm to create cartoon drawings automatically. The algorithm consists of four phases. First, create an image abstraction using bilateral filtering. Second, using kmeans clustering for abstract image quantization. Third, get the contour lines of the drawing using the canny algorithm. Fourth, contour lines and quantized images are combined. The results show that this algorithm can produce good visualization of cartoon images.
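The four phases listed in the abstract of entry 33 (bilateral abstraction, k-means quantisation, Canny contours, composition) map onto standard OpenCV calls. A rough sketch is shown below; the parameter values are illustrative guesses, not the settings used in the paper.

```python
import cv2
import numpy as np

def cartoonize(bgr: np.ndarray, k: int = 8) -> np.ndarray:
    """Cartoon-style NPR: bilateral smoothing, k-means colour quantisation,
    Canny contour extraction, then overlaying the contours."""
    # 1. Edge-preserving abstraction.
    smooth = cv2.bilateralFilter(bgr, d=9, sigmaColor=75, sigmaSpace=75)

    # 2. Colour quantisation with k-means over all pixels.
    data = smooth.reshape(-1, 3).astype(np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
    _, labels, centers = cv2.kmeans(data, k, None, criteria, 5,
                                    cv2.KMEANS_RANDOM_CENTERS)
    quantized = centers[labels.flatten()].astype(np.uint8).reshape(smooth.shape)

    # 3. Contour lines from the Canny detector.
    edges = cv2.Canny(cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY), 100, 200)

    # 4. Combine: draw dark contours over the quantised image.
    quantized[edges > 0] = 0
    return quantized
```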
34. 방윤경. "The speculation of PR(Photorealistic Rendering) and NPR(Non Photo-realistic Rendering) technique on commercial films." Journal of Digital Design 9, no. 1 (January 2009): 423–30. http://dx.doi.org/10.17280/jdd.2009.9.1.042.
35. Park, Sung-Won. "A Study on the Visual Effects of Non-Photorealistic Rendering Animation focusing on 'Paperman,' a Short Animation." Cartoon and Animation Studies 40 (September 30, 2015): 139–55. http://dx.doi.org/10.7230/koscas.2015.40.139.
36. Aditya, Christian. "Designing The World Of 3D CG Animation “BALLOON” Through Low Poly Visual Style." ULTIMART Jurnal Komunikasi Visual 9, no. 1 (March 21, 2018): 41–49. http://dx.doi.org/10.31937/ultimart.v9i1.739.
Abstract:
This research focus on the process of creating the world of “Balloon” through low poly visual style in 3D CG animation using an alternative non-photorealistic rendering method that is both efficient and visually astonishing. Low poly are chosen as a visual style because it represent the spirit of courage to explore the world outside the film’s story. By using low poly visual style in most of the environment, it creates a unique contrast between the environment and the character. The character is made using a traditional high poly modeling for 3D character. Through variety of experiments with various pipeline, the chosen method is considered the best to accommodate the filmmaker needs in both production time and the visual result. In order to determine the best technique, the research is conducted by testing the render results of some environment scenes within the film. The writer also experimented using procedural materials in the design of the world to help speed up rendering of the compositing process, in order to achieve good visual quality in a shorter production time. Keywords : low poly, non-photorealistic rendering, 3D CG animation
37. Lee, Young-Hun. "A Study on the NPR(Non-Photorealistic Rendering) Used in 3D Animation." Journal of the Korea Contents Association 7, no. 5 (May 28, 2007): 94–101. http://dx.doi.org/10.5392/jkca.2007.7.5.094.
38. Ku, Daychyi, Shengfeng Qin, David K. Wright, and Cuixia Ma. "Online Personalised Non-photorealistic Rendering Technique for 3D Geometry from Incremental Sketching." Computer Graphics Forum 27, no. 7 (October 2008): 1861–68. http://dx.doi.org/10.1111/j.1467-8659.2008.01333.x.
39. Cheok, Adrian David, Zheng Shawn Lim, and Roger Thomas KC Tan. "Humanistic Oriental art created using automated computer processing and non-photorealistic rendering." Computers & Graphics 31, no. 2 (April 2007): 280–91. http://dx.doi.org/10.1016/j.cag.2007.01.003.
40. West, Rex. "Physically-based feature line rendering." ACM Transactions on Graphics 40, no. 6 (December 2021): 1–11. http://dx.doi.org/10.1145/3478513.3480550.
Abstract:
Feature lines visualize the shape and structure of 3D objects, and are an essential component of many non-photorealistic rendering styles. Existing feature line rendering methods, however, are only able to render feature lines in limited contexts, such as on immediately visible surfaces or in specular reflections. We present a novel, path-based method for feature line rendering that allows for the accurate rendering of feature lines in the presence of complex physical phenomena such as glossy reflection, depth-of-field, and dispersion. Our key insight is that feature lines can be modeled as view-dependent light sources. These light sources can be sampled as a part of ordinary paths , and seamlessly integrate into existing physically-based rendering methods. We illustrate the effectiveness of our method in several real-world rendering scenarios with a variety of different physical phenomena.
41. Collomosse, J. P., and P. M. Hall. "Salience-Adaptive Painterly Rendering Using Genetic Search." International Journal on Artificial Intelligence Tools 15, no. 4 (August 2006): 551–75. http://dx.doi.org/10.1142/s0218213006002813.
Abstract:
We present a new non-photorealistic rendering (NPR) algorithm for rendering photographs in an impasto painterly style. We observe that most existing image-based NPR algorithms operate in a spatially local manner, typically as non-linear image filters seeking to preserve edges and other high-frequency content. By contrast, we argue that figurative artworks are salience maps, and develop a novel painting algorithm that uses a genetic algorithm (GA) to search the space of possible paintings for a given image, so approaching an "optimal" artwork in which salient detail is conserved and non-salient detail is attenuated. Differential rendering styles are also possible by varying stroke style according to the classification of salient artifacts encountered, for example edges or ridges. We demonstrate the results of our technique on a wide range of images, illustrating both the improved control over level of detail due to our salience adaptive painting approach, and the benefits gained by subsequent relaxation of the painting using the GA.
42. Qi, Yue. "Rendering of 3D Meshes by Feature-Guided Convolution." International Journal of Advanced Pervasive and Ubiquitous Computing 4, no. 3 (July 2012): 81–90. http://dx.doi.org/10.4018/japuc.2012070105.
Abstract:
The author presents a feature-guided convolution method for rendering a 3D triangular mesh. In their work, they compute feature directions on the vertices of a mesh and generate noise on the faces of a mesh. After projecting the directions and noise into 2D image space, the author executes convolution to render the mesh. They used three feature directions: a principal direction, the tangent of an isocurve of view-dependent features, and the tangent of an isophote curve. By controlling the value of noise, the author can produce several non-photorealistic rendering effects such as pencil drawing and hatching. This rendering process is temporally coherent and can therefore be used to create artistic styled animations.
43. Du, Chang Qing, Wen Hua Qian, Zheng Guan, Zheng Peng Zhao, and Yu Hua. "A NPR Technique of Many Pieces Rendering." Applied Mechanics and Materials 543-547 (March 2014): 3075–78. http://dx.doi.org/10.4028/www.scientific.net/amm.543-547.3075.
Abstract:
To provide with an effective method for non-photorealistic rendering for computer generated images with artistic appearances from 2D images motivates our work in this paper. The methods proposed in this paper are inspired by the many pieces synthesis mosaic effects. In this paper, we mainly propose the method for generating the artistic appearances based on the pieces synthesis and color transfer technique. The block pieces can be chosen from arbitrary quantity images in the database. First, to accelerate synthesis speed, the mosaic effect can be generated based on the luminance channel. Further, we establish our method for obtaining RGB appearances taking the color transfer as the underlying basis. Therefore, the color of input image can be rendered to the final mosaic artistic effects. Experimental results show that our methods are effective and efficient.
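Entry 43 colours its luminance-based mosaic by transferring the colour of the input image onto it. As a loose illustration of that last step, here is a simplified statistics-matching colour transfer in RGB space (a cut-down Reinhard-style transfer); the paper does not specify this exact formulation, so treat it purely as a sketch.

```python
import numpy as np

def color_transfer(content: np.ndarray, color_source: np.ndarray) -> np.ndarray:
    """Match the per-channel mean/std of `content` to those of `color_source`.

    Both images are float arrays in [0, 1] with shape (H, W, 3); `content`
    would be the luminance-built mosaic and `color_source` the input photo.
    """
    out = np.empty_like(content, dtype=np.float64)
    for c in range(3):
        s_mean, s_std = content[..., c].mean(), content[..., c].std() + 1e-8
        t_mean, t_std = color_source[..., c].mean(), color_source[..., c].std() + 1e-8
        # Shift and scale each channel's statistics toward the colour source.
        out[..., c] = (content[..., c] - s_mean) * (t_std / s_std) + t_mean
    return np.clip(out, 0.0, 1.0)
```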
44. Wu, J., R. R. Martin, P. L. Rosin, X. F. Sun, Y. K. Lai, Y. H. Liu, and C. Wallraven. "Use of non-photorealistic rendering and photometric stereo in making bas-reliefs from photographs." Graphical Models 76, no. 4 (July 2014): 202–13. http://dx.doi.org/10.1016/j.gmod.2014.02.002.
45. Kumar, M. P. Pavan, B. Poornima, H. S. Nagendraswamy, and C. Manjunath. "A comprehensive survey on non-photorealistic rendering and benchmark developments for image abstraction and stylization." Iran Journal of Computer Science 2, no. 3 (May 3, 2019): 131–65. http://dx.doi.org/10.1007/s42044-019-00034-1.
46. Qian, Wen Hua, Dan Xu, Kun Yue, Zheng Guan, and Yuan Yuan Pu. "Texture Deviation Mapping Based on Detail Enhancement." Advanced Engineering Forum 6-7 (September 2012): 32–37. http://dx.doi.org/10.4028/www.scientific.net/aef.6-7.32.
Abstract:
To provide with an effective technique for non-photorealistic rendering for computer generated images with artistic appearances from 2D images motivates our work in this paper. The methods proposed in this paper are inspired by the image deviation mapping constructed from a single texture background image. We establish our method for obtaining artistic appearances taking the deviation mapping as the underlying basis. Based on the simple linear filtering convolution operation, which is well suited of progressive coarsening of images and for detail extraction, and the image’s detail, such as edge and tone can be preserved in the final artistic appearance. This method has the exact computational complexity, and this technique is easily to implement and the rendering speed is fast.
47. Kumar, Pavan, Poornima B., Nagendraswamy H. S., and Manjunath C. "Structure Preserving Non-Photorealistic Rendering Framework for Image Abstraction and Stylization of Low-Illuminated and Underexposed Images." International Journal of Computer Vision and Image Processing 11, no. 2 (April 2021): 22–45. http://dx.doi.org/10.4018/ijcvip.2021040102.
Abstract:
The proposed abstraction framework manipulates the visual-features from low-illuminated and underexposed images while retaining the prominent structural, medium scale details, tonal information, and suppresses the superfluous details like noise, complexity, and irregular gradient. The significant image features are refined at every stage of the work by comprehensively integrating a series of AnshuTMO and NPR filters through rigorous experiments. The work effectively preserves the structural features in the foreground of an image and diminishes the background content of an image. Effectiveness of the work has been validated by conducting experiments on the standard datasets such as Mould, Wang, and many other interesting datasets and the obtained results are compared with similar contemporary work cited in the literature. In addition, user visual feedback and the quality assessment techniques were used to evaluate the work. Image abstraction and stylization applications, constraints, challenges, and future work in the fields of NPR domain are also envisaged in this paper.
48. Neto, Liordino dos S. Rocha, and Antonio L. Apolinário Jr. "Real-Time Screen Space Cartoon Water Rendering with the Iterative Separated Bilateral Filter." Journal on Interactive Systems 8, no. 1 (September 14, 2017): 1. http://dx.doi.org/10.5753/jis.2017.672.
Abstract:
We present the improvements and new results of our method for rendering particle-based liquid simulations that runs in real-time and includes an adjustable performance/quality trade-off. Our approach smooths the fluid surface with an iterative version of the separated bilateral filter, and introduces a new method for generating foam and droplets that is appropriate for non-photorealistic styles. The entire method occurs in screen space, which avoids the usual artifacts of polygonization techniques. All the steps are implemented directly in the graphics hardware. Improvements include a new method to generate foam and droplets, visual enhancements in the optical effects generated by the interaction between water and light, and tests in a environment with support to collision of water particles with rigid bodies. Performance and visual analysis that includes comparisons with previous methods shows the applicability of our approach.
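The central smoothing step in entry 48 is an iterative separated bilateral filter: a 1-D bilateral pass run horizontally, then vertically, repeated a few times over the screen-space depth buffer. The paper implements this as GPU shader passes; the NumPy sketch below only illustrates the arithmetic, and every parameter value is a placeholder.

```python
import numpy as np

def _bilateral_1d(img: np.ndarray, radius: int,
                  sigma_s: float, sigma_r: float, axis: int) -> np.ndarray:
    """One separated bilateral pass along a single axis (naive CPU version)."""
    acc = np.zeros_like(img)
    weight = np.zeros_like(img)
    for offset in range(-radius, radius + 1):
        shifted = np.roll(img, offset, axis=axis)   # note: wraps at the borders
        w = (np.exp(-(offset ** 2) / (2 * sigma_s ** 2))
             * np.exp(-((shifted - img) ** 2) / (2 * sigma_r ** 2)))
        acc += w * shifted
        weight += w
    return acc / weight

def separated_bilateral(depth: np.ndarray, iterations: int = 3,
                        radius: int = 5, sigma_s: float = 3.0,
                        sigma_r: float = 0.1) -> np.ndarray:
    """Iterative separated bilateral smoothing of a screen-space depth buffer."""
    smoothed = depth.astype(np.float64)
    for _ in range(iterations):
        smoothed = _bilateral_1d(smoothed, radius, sigma_s, sigma_r, axis=1)  # horizontal
        smoothed = _bilateral_1d(smoothed, radius, sigma_s, sigma_r, axis=0)  # vertical
    return smoothed
```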
49. Xie, Dang En, Hai Na Hu, and Zhi Li Zhang. "An Improved Method for Generating Colored Pencil Drawing." Advanced Materials Research 433-440 (January 2012): 1555–60. http://dx.doi.org/10.4028/www.scientific.net/amr.433-440.1555.
Abstract:
In this paper, we put forward an improved non-photorealistic rendering method for generating a colored pencil drawing from digital image. First, to make sure the result can retain the original color information, we use the original pixel value instead of the black dot which generated by the traditional white-noise generating method. Second, we added a ratio for the Kirsch operator to be suitable for different images with different details. Third, we present a new approach which extruded form the luminance of the original image to determine the stroke orientation. Based on our methods, the quality of traditional pencil illustration can be guaranteed to a certain extent, and an effective and convenient tool is provided to generate the same drawings in style with artists and illustrations even for the users that have not been trained professionally.
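Entry 49 adjusts the Kirsch compass operator with a per-image ratio so the stroke layer suits images with different levels of detail. The sketch below shows a plain Kirsch response with such a scaling factor; the white-noise substitution and luminance-driven stroke orientation from the paper are not reproduced, and the 8-bit clipping is an assumption.

```python
import cv2
import numpy as np

def kirsch_response(gray: np.ndarray, ratio: float = 1.0) -> np.ndarray:
    """Maximum response over the eight Kirsch compass kernels, scaled by `ratio`."""
    # Border positions of a 3x3 kernel in clockwise order; rolling the value
    # pattern around this ring generates the eight compass directions.
    ring = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    base = np.array([5, 5, 5, -3, -3, -3, -3, -3], dtype=np.float32)
    src = gray.astype(np.float32)
    response = np.zeros_like(src)
    for shift in range(8):
        kernel = np.zeros((3, 3), dtype=np.float32)   # centre stays 0
        for (r, c), v in zip(ring, np.roll(base, shift)):
            kernel[r, c] = v
        response = np.maximum(response, cv2.filter2D(src, cv2.CV_32F, kernel))
    # `ratio` plays the role of the tunable edge-strength factor in entry 49.
    return np.clip(response * ratio, 0, 255).astype(np.uint8)
```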
50. Liang, Dongxue, Kyoungju Park, and Przemyslaw Krompiec. "Facial Feature Model for a Portrait Video Stylization." Symmetry 10, no. 10 (September 28, 2018): 442. http://dx.doi.org/10.3390/sym10100442.
Abstract:
With the advent of the deep learning method, portrait video stylization has become more popular. In this paper, we present a robust method for automatically stylizing portrait videos that contain small human faces. By extending the Mask Regions with Convolutional Neural Network features (R-CNN) with a CNN branch which detects the contour landmarks of the face, we divided the input frame into three regions: the region of facial features, the region of the inner face surrounded by 36 face contour landmarks, and the region of the outer face. Besides keeping the facial features region as it is, we used two different stroke models to render the other two regions. During the non-photorealistic rendering (NPR) of the animation video, we combined the deformable strokes and optical flow estimation between adjacent frames to follow the underlying motion coherently. The experimental results demonstrated that our method could not only effectively reserve the small and distinct facial features, but also follow the underlying motion coherently.