Dissertations / Theses on the topic 'Computer-generated graphic'

Consult the top 29 dissertations / theses for your research on the topic 'Computer-generated graphic.'

1

Gaylin, Kenneth B. "An investigation of information display variables utilizing computer-generated graphics for decision support systems." Thesis, Virginia Polytechnic Institute and State University, 1985. http://hdl.handle.net/10919/53070.

Full text
Abstract:
The effectiveness of selected computer-generated graphics display variables was examined in a mixed-factors factorial experiment using thirty-two subjects. All subjects performed four different graph-reading tasks: point-reading, point-comparison, trend-reading, and trend-comparison. In each task, line, point, bar, and three-dimensional bar graphs were investigated under two levels of task complexity and two levels of coding (color and black-and-white). The effects of these independent variables on measures of task performance errors, time to complete the task, subjective mental workload, and preference ratings were obtained in real time by a microcomputer control program. Separate MANOVA analyses of these measures for each task indicated significant effects of graph type for the point-reading task, main effects of complexity and coding for all tasks, and a graph-by-coding interaction for the point-reading, point-comparison, and trend-reading tasks. Subsequent ANOVA analyses showed significance for these effects across several of the dependent measures, which are specified in the thesis. Recommendations are made for selecting the most effective graph and coding combinations for the particular types of graph-interpretation tasks and complexity levels encountered.
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
2

Jablonski, Sidney E. "Visual metaphors in computer-generated information graphics /." Online version of thesis, 1989. http://hdl.handle.net/1850/11545.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Costa Sousa, Mario. "Computer-generated graphite pencil materials and rendering." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape7/PQDD_0035/NQ46821.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Jain, Vinit. "Deep Learning based Video Super-Resolution in Computer Generated Graphics." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-292687.

Full text
Abstract:
Super-Resolution is a widely studied problem in the field of computer vision, where the purpose is to increase the resolution of, or super-resolve, image data. In Video Super-Resolution, maintaining temporal coherence for consecutive video frames requires fusing information from multiple frames to super-resolve one frame. Current deep learning methods perform video super-resolution, yet most of them focus on working with natural datasets. In this thesis, we use a recurrent back-projection network for working with a dataset of computer-generated graphics, with example applications including upsampling low-resolution cinematics for the gaming industry. The dataset comes from a variety of gaming content, rendered in (3840 x 2160) resolution. The objective of the network is to produce the upscaled version of the low-resolution frame by learning an input combination of a low-resolution frame, a sequence of neighboring frames, and the optical flow between each neighboring frame and the reference frame. Under the baseline setup, we train the model to perform 2x upsampling from (1920 x 1080) to (3840 x 2160) resolution. In comparison against the bicubic interpolation method, our model achieved better results by a margin of 2 dB for Peak Signal-to-Noise Ratio (PSNR), 0.015 for Structural Similarity Index Measure (SSIM), and 9.3 for the Video Multi-method Assessment Fusion (VMAF) metric. In addition, we further demonstrate the susceptibility of neural network performance to changes in image compression quality, and the inability of distortion metrics to capture perceptual details accurately.
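The PSNR margin reported above can be made concrete. As an illustration (not code from the thesis), the following sketch computes PSNR between a reference frame and a reconstruction, with images simplified to flat lists of 8-bit intensity values:

```python
import math

def psnr(ref, test, max_val=255.0):
    """Peak Signal-to-Noise Ratio in dB between two equal-sized images,
    given here as flat lists of pixel intensities."""
    mse = sum((r - t) ** 2 for r, t in zip(ref, test)) / len(ref)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)

# A super-resolved frame that tracks the reference closely scores high:
reference = [100, 120, 130, 140]
upscaled  = [ 98, 121, 129, 143]
print(round(psnr(reference, upscaled), 2))
```

Since PSNR is logarithmic, a 2 dB gain over bicubic interpolation, as the abstract reports, corresponds to roughly a 37% reduction in mean squared error.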
APA, Harvard, Vancouver, ISO, and other styles
5

Patterson, John Andre. "Implementing autonomous crowds in a computer generated feature film." Texas A&M University, 2005. http://hdl.handle.net/1969.1/3107.

Full text
Abstract:
The implementation of autonomous, flocking crowds of background characters in the feature film “Robots” is discussed. The techniques for obstacle avoidance and goal seeking are described. An overview of the implementation of the system as part of the production pipeline for the film is also provided.
APA, Harvard, Vancouver, ISO, and other styles
6

Brunner, Seth A. "Improved Computer-Generated Simulation Using Motion Capture Data." BYU ScholarsArchive, 2014. https://scholarsarchive.byu.edu/etd/4182.

Full text
Abstract:
Ever since the first use of crowds in films and videogames there has been an interest in larger, more efficient and more realistic simulations of crowds. Most crowd simulation algorithms are able to satisfy the viewer from a distance but when inspected from close up the flaws in the individual agent's movements become noticeable. One of the bigger challenges faced in crowd simulation is finding a solution that models the actual movement of an individual in a crowd. This paper simulates a more realistic crowd by using individual motion capture data as well as traditional crowd control techniques to reach an agent's desired goal. By augmenting traditional crowd control algorithms with the use of motion capture data for individual agents, we can simulate crowds that mimic more realistic crowd motion, while maintaining real-time simulation speed.
APA, Harvard, Vancouver, ISO, and other styles
7

Lucas, Richard Edward. "Evolving aesthetic criteria for computer generated art : a Delphi study." Connect to resource, 1986. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1157038631.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Pursel, Eugene Ray. "Synthetic vision : visual perception for computer generated forces using the programmable graphics pipeline /." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2004. http://library.nps.navy.mil/uhtbin/hyperion/04Sep%5FPursel.pdf.

Full text
Abstract:
Thesis (M.S. in Modeling, Virtual Environments and Simulation (MOVES))--Naval Postgraduate School, Sept. 2004.
Thesis Advisor(s): Christian J. Darken. Includes bibliographical references (p. 93-95). Also available online.
APA, Harvard, Vancouver, ISO, and other styles
9

Matthews, Timothy. "Sketch-based digital storyboards and floor plans for authoring computer-generated film pre-visuals." Thesis, Nelson Mandela Metropolitan University, 2012. http://hdl.handle.net/10948/d1008430.

Full text
Abstract:
Pre-visualisation is an important tool for planning films during the pre-production phase of filmmaking. Existing pre-visualisation authoring tools do not effectively support the user in authoring pre-visualisations without impairing software usability. These tools require the user to either have programming skills, be experienced in modelling and animation, or use drag-and-drop style interfaces. These interaction methods do not intuitively fit with pre-production activities such as floor planning and storyboarding, and existing tools that apply a storyboarding metaphor do not automatically interpret user sketches. The goal of this research was to investigate how sketch-based user interfaces and methods from computer vision could be used for supporting pre-visualisation authoring using a storyboarding approach. The requirements for such a sketch-based storyboarding tool were determined from literature and an interview with Triggerfish Animation Studios. A framework was developed to support sketch-based pre-visualisation authoring using a storyboarding approach. Algorithms for describing user sketches, recognising objects and performing pose estimation were designed to automatically interpret user sketches. A proof of concept prototype implementation of this framework was evaluated in order to assess its usability benefit. It was found that the participants could author pre-visualisations effectively, efficiently and easily. The results of the usability evaluation also showed that the participants were satisfied with the overall design and usability of the prototype tool. The positive and negative findings of the evaluation were interpreted and combined with existing heuristics in order to create a set of guidelines for designing similar sketch-based pre-visualisation authoring tools that apply the storyboarding approach. 
The successful implementation of the proof of concept prototype tool provides practical evidence of the feasibility of sketch-based pre-visualisation authoring. The positive results from the usability evaluation established that sketch-based interfacing techniques can be used effectively with a storyboarding approach for authoring pre-visualisations without impairing software usability.
APA, Harvard, Vancouver, ISO, and other styles
10

Stejmar, Carl. "Temporal Anti-Aliasing and Temporal Supersampling in Three-Dimensional Computer Generated Dynamic Worlds." Thesis, Linköpings universitet, Institutionen för systemteknik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-129519.

Full text
Abstract:
This master thesis investigates and evaluates how a temporal component can help anti-aliasing reduce general spatial aliasing, preserve thin geometry, and achieve temporal stability in dynamic computer-generated worlds. Of spatial aliasing, geometric aliasing is in focus, but shading aliasing will also be discussed. Two temporal approaches are proposed: one method utilizes the previous frame, while the other uses the four previous frames. This requires an efficient way of re-projecting pixels, so this thesis deals with that problem and its consequences as well. Further, the results show that the way of taking and accumulating samples in the proposed methods yields improvements that would not have been affordable without the temporal component for real-time applications. Thin geometry is preserved up to a degree, but the proposed methods do not solve this problem for the general case. The temporal methods' image quality is evaluated against conventional anti-aliasing methods both subjectively, by a survey, and objectively, by a numerical method not found elsewhere in anti-aliasing reports. Performance and memory consumption are also evaluated. The evaluation suggests that a temporal component for anti-aliasing can play an important role in increasing image quality and temporal stability without a substantial negative impact on performance, while consuming less memory.
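The thesis's two methods are not reproduced here, but the core operation of temporal anti-aliasing, blending a reprojected history color with the current frame's sample, can be sketched as follows (the blend factor alpha=0.1 is an illustrative assumption, not a value from the thesis):

```python
def taa_accumulate(history, current, alpha=0.1):
    """Exponentially accumulate samples over time: keep most of the
    reprojected history color and blend in a little of the new sample."""
    return tuple(h * (1.0 - alpha) + c * alpha for h, c in zip(history, current))

# A flickering edge pixel (alternating black/white samples) is steadied,
# converging toward a stable grey instead of strobing frame to frame:
color = (0.0, 0.0, 0.0)
for frame in range(20):
    sample = (1.0, 1.0, 1.0) if frame % 2 == 0 else (0.0, 0.0, 0.0)
    color = taa_accumulate(color, sample)
```

The same accumulation is what makes re-projection necessary: in a dynamic scene the history color must be fetched from where the surface was in the previous frame, not from the same pixel.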
APA, Harvard, Vancouver, ISO, and other styles
11

Anderson, Dustin Robert. "Two-Dimensional Computer-Generated Ornamentation Using a User-Driven Global Planning Strategy." DigitalCommons@CalPoly, 2008. https://digitalcommons.calpoly.edu/theses/15.

Full text
Abstract:
Hand drawn ornamentation, such as floral or geometric patterns, is a tedious and time consuming task that requires much skill and training in ornamental design principles and aesthetics. Ornamental drawings both historically and presently play critical roles in all things from art to architecture; however, little work has been done in exploring their algorithmic and interactive generation. The field of computer graphics offers many algorithmic possibilities for assisting an artist in creating two-dimensional ornamental art. When computers handle the repetition and overall structure of ornament, considerable savings in time and money can result. Today, the few existing computer algorithms used to generate 2D ornament have over-generalized and over-simplified the process of ornamentation, resulting in the substitution of limited amounts of generic and static "clip art" for once personalized artistic innovations. Two possible approaches to computational ornamentation exist: interactive tools give artists instant feedback on their work while non-interactive programs can carry out complex and sometimes lengthy computations to produce mathematically precise ornamental compositions. Due to the importance of keeping an artist in the loop for the production of ornamentation, we present an application designed and implemented utilizing a user-driven global planning strategy, to help guide the generation of two-dimensional ornament. The system allows for the creation of beautiful organic ornamental 2D art which follows a user-defined curve. We present the application, the algorithmic approaches used, and the potential uses of this application.
APA, Harvard, Vancouver, ISO, and other styles
12

Obert, Juraj. "REAL-TIME CINEMATIC DESIGN OF VISUAL ASPECTS IN COMPUTER-GENERATED IMAGES." Doctoral diss., University of Central Florida, 2010. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/4282.

Full text
Abstract:
Creation of visually-pleasing images has always been one of the main goals of computer graphics. Two important components are necessary to achieve this goal: artists who design visual aspects of an image (such as materials or lighting) and sophisticated algorithms that render the image. Traditionally, rendering has been of greater interest to researchers, while the design part has always been deemed secondary. This has led to many inefficiencies, as artists, in order to create a stunning image, are often forced to resort to the traditional, creativity-barring pipelines consisting of repeated rendering and parameter tweaking. Our work shifts the attention away from the rendering problem and focuses on the design. We propose to combine non-physical editing with real-time feedback and provide artists with efficient ways of designing complex visual aspects such as global illumination or all-frequency shadows. We conform to existing pipelines by inserting our editing components into existing stages, thereby making editing of visual aspects an inherent part of the design process. Many of the examples shown in this work have been, until now, extremely hard to achieve. The non-physical aspect of our work enables artists to express themselves in more creative ways, not limited by the physical parameters of current renderers. Real-time feedback allows artists to immediately see the effects of applied modifications, and compatibility with existing workflows enables easy integration of our algorithms into production pipelines.
Ph.D.
School of Electrical Engineering and Computer Science
Engineering and Computer Science
Computer Science PhD
APA, Harvard, Vancouver, ISO, and other styles
13

Baker, Patti R. "Computer generated animation in the classroom : teachers' perceptions of instructional uses and curricular impact /." The Ohio State University, 1986. http://rave.ohiolink.edu/etdc/view?acc_num=osu1487265555439694.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Anderson, Dustin Robert. "Two-dimensional computer-generated ornamentation using a user-driven global planning strategy : a thesis /." [San Luis Obispo, Calif. : California Polytechnic State University], 2008. http://digitalcommons.calpoly.edu/theses/15/.

Full text
Abstract:
Thesis (M.S.)--California Polytechnic State University, 2008.
Major professor: Zoë Wood, Ph.D. "Presented to the faculty of California Polytechnic State University, San Luis Obispo." "In partial fulfillment of the requirements for the degree [of] Master of Science in Computer Science." Submitted June 11, 2008. Includes bibliographical references (leaves 77-79). Also available online. Also available on microfiche (1 sheet).
APA, Harvard, Vancouver, ISO, and other styles
15

Tan, Adrian Hadipriono. "A Computer-Generated Model of the Construction of the Roman Colosseum." The Ohio State University, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=osu1354683991.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Cooper, C. "Space subdivision and distributed databases in a multiprocessor raytracer." University of Canberra. Information Sciences & Engineering, 1991. http://erl.canberra.edu.au./public/adt-AUC20060629.145540.

Full text
Abstract:
This thesis deals with computer generated images. The thesis begins with an overview of a generalised computer graphics system, including a brief survey of typical methods for generating photorealistic images. One such technique, ray tracing, is used as the basis for the work which follows. The overview section concludes with a statement of the aim which is to: Investigate the effective use of available processing power and effective utilisation of available memory by implementing a ray tracing programme which uses space subdivision, multiple processors and a distributed world model database. The problem formulation section describes the ray tracing principle and then introduces the main areas of study. The INMOS Transputer (a building block for concurrent systems) is used to implement the multiple process ray tracer. Space subdivision is achieved by repeated and regular subdivision of a world cube (which contains the scene to be ray traced) into named cubes, called octrees. The subdivision algorithm continues to subdivide space until no octree contains more than a specified number of objects, or until the practical limit of space subdivision is reached. The objects in the world model database are distributed in a round robin manner to the ray trace processes. During execution of the ray trace programme, information about each object is passed between processes by a message mechanism. The concurrent code for the transputer processes, written in OCCAM 2, was developed using timing diagrams and signal flow diagrams derived by analogy from digital electronics. Structure diagrams, modified to be consistent with OCCAM 2 processes, were derived from the timing diagrams and signal flow diagrams. These were used as a basis for the coding. The results show that space subdivision is an effective use of processor power because the number of trial intersections of rays with objects is dramatically reduced. 
In addition, distribution of the world model database avoids duplication of the database in the memory of each process and hence better utilisation of available memory is achieved. The programmes are supported by a menu driven interface (running on a PC AT) which enables the user to control the ray trace processes running on the transputer board housed in the PC.
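The subdivision rule described above, splitting a cube whenever it holds more than a specified number of objects until a practical depth limit is reached, can be sketched as follows. This is an illustrative reconstruction, not the thesis's OCCAM 2 code; objects are reduced to points and both thresholds are arbitrary:

```python
from dataclasses import dataclass, field

MAX_OBJECTS = 2  # split threshold (assumed; the thesis leaves this configurable)
MAX_DEPTH = 4    # practical limit of space subdivision

@dataclass
class Octree:
    min_corner: tuple   # (x, y, z) of the cube's minimum corner
    size: float         # edge length of the cube
    objects: list = field(default_factory=list)
    children: list = field(default_factory=list)  # empty, or exactly 8 sub-cubes

def point_octant(node, p):
    """Index (0-7) of the child octant containing point p."""
    centre = [node.min_corner[i] + node.size / 2 for i in range(3)]
    return sum(1 << i for i in range(3) if p[i] >= centre[i])

def insert(node, p, depth=0):
    if node.children:
        insert(node.children[point_octant(node, p)], p, depth + 1)
        return
    node.objects.append(p)
    if len(node.objects) > MAX_OBJECTS and depth < MAX_DEPTH:
        half = node.size / 2
        node.children = [
            Octree(tuple(node.min_corner[i] + half * ((o >> i) & 1) for i in range(3)), half)
            for o in range(8)
        ]
        for q in node.objects:  # redistribute the objects into the new octants
            insert(node.children[point_octant(node, q)], q, depth + 1)
        node.objects = []

root = Octree((0.0, 0.0, 0.0), 8.0)  # the world cube
for p in [(1, 1, 1), (2, 2, 2), (6, 6, 6)]:
    insert(root, p)
```

A ray then only has to test objects in the octants it actually traverses, which is why space subdivision so dramatically reduces the number of trial ray-object intersections.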
APA, Harvard, Vancouver, ISO, and other styles
17

Zama, Ramirez Pierluigi. "Estimation of depth and semantics by a CNN trained on computer-generated and real data." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2017. http://amslaurea.unibo.it/12921/.

Full text
Abstract:
Depth estimation (or extraction) refers to a family of techniques that aim to recover a representation of the three-dimensional structure of a scene from two-dimensional images; in other words, the distance from the camera is sought for every point in the viewed scene. Semantic segmentation denotes the set of techniques whose goal is to partition an image into groups, where each group consists of elements of the same class. This thesis tackles the two problems jointly, in an urban setting, by means of a convolutional neural network, specifically a Fully Convolutional Neural Network. Since datasets of images with corresponding ground truth for both semantics and depth are scarce, a synthetic dataset was created from a three-dimensional model of a city, developed with the Blender software. The network is first trained on the artificial data and then fine-tuned on a dataset of real images, CityScapes. The network trained in this way achieves good results on both objectives, reaching good accuracy and low error in both depth prediction and semantic segmentation. Moreover, predicting the two outputs simultaneously yields shorter computation times than running the semantic and depth predictions separately.
APA, Harvard, Vancouver, ISO, and other styles
18

Jackson, Linda A. "A training module for the integration of text, scanned graphics, and computer-generated artwork into a page layout program on a Macintosh design system /." Online version of thesis, 1990. http://hdl.handle.net/1850/11158.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Tracy, Judd. "AN APPROACH FOR COMPUTING INTERVISIBILITY USING GRAPHICAL PROCESSING U." Master's thesis, University of Central Florida, 2004. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/2505.

Full text
Abstract:
In large scale entity-level military force-on-force simulations it is essential to know when one entity can visibly see another entity. This visibility determination plays an important role in the simulation and can affect the outcome of the simulation. When virtual Computer Generated Forces (CGF) are introduced into the simulation these intervisibilities must now be calculated by the virtual entities on the battlefield. But as the simulation size increases so does the complexity of calculating visibility between entities. This thesis presents an algorithm for performing these visibility calculations using Graphical Processing Units (GPU) instead of the Central Processing Units (CPU) that have been traditionally used in CGF simulations. This algorithm can be distributed across multiple GPUs in a cluster and its scalability exceeds that of CGF-based algorithms. The poor correlations of the two visibility algorithms are demonstrated showing that the GPU algorithm provides a necessary condition for a "Fair Fight" when paired with visual simulations.
M.S.Cp.E.
Department of Electrical and Computer Engineering
Engineering and Computer Science
Computer Engineering
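The thesis's GPU implementation is not reproduced here, but the per-pair computation it parallelizes, a terrain line-of-sight test, can be sketched on the CPU as follows (the nearest-cell terrain sampling, the eye height, and the sample count are illustrative assumptions):

```python
def line_of_sight(heightmap, a, b, eye_height=2.0, samples=64):
    """True if an entity at grid cell a can see an entity at cell b,
    i.e. no terrain sample rises above the sight line between them."""
    (ax, ay), (bx, by) = a, b
    za = heightmap[ay][ax] + eye_height
    zb = heightmap[by][bx] + eye_height
    for i in range(1, samples):
        t = i / samples
        x, y = ax + (bx - ax) * t, ay + (by - ay) * t
        terrain = heightmap[round(y)][round(x)]  # nearest-cell sample
        if terrain > za + (zb - za) * t:        # terrain pokes above the ray
            return False
    return True

flat = [[0.0] * 5 for _ in range(5)]   # open ground: entities see each other
ridge = [row[:] for row in flat]
ridge[2][2] = 10.0                     # a hill between the two entities
```

In an N-entity simulation this test runs O(N²) times per update, which is what makes offloading it to GPUs, and distributing it across a cluster, attractive.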
APA, Harvard, Vancouver, ISO, and other styles
20

Lorins, Peterson Marthen. "A Comparative Analysis Between Context-Based Reasoning (CxBR) and Contextual Graphs (CxGs)." Master's thesis, University of Central Florida, 2005. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/2302.

Full text
Abstract:
Context-based Reasoning (CxBR) and Contextual Graphs (CxGs) involve the modeling of human behavior in autonomous and decision-support situations in which optimal human decision-making is of utmost importance. Both formalisms use the notion of contexts to allow the implementation of intelligent agents equipped with a context-sensitive knowledge base. However, CxBR uses a set of discrete contexts, implying that models created using CxBR operate within one context at a given time interval. CxGs use a continuous context-based representation for a given problem-solving scenario for decision-support processes. Both formalisms use contexts dynamically, continuously changing between contexts as needed. This thesis identifies a synergy between the two formalisms by looking into their similarities and differences. It became clear during the research that each paradigm was designed with a very specific family of problems in mind. Thus, CxBR best implements models of autonomous agents in an environment, while CxGs are best implemented in a decision-support setting that requires the development of decision-making procedures. Cross-applications were implemented on each, and the results are discussed.
M.S.Cp.E.
Department of Electrical and Computer Engineering
Engineering and Computer Science
Computer Engineering
APA, Harvard, Vancouver, ISO, and other styles
21

Larsson, Markus, and David Ångström. "A Performance Comparison of Auto-Generated GraphQL Server Implementations." Thesis, Linköpings universitet, Tekniska fakulteten, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-170825.

Full text
Abstract:
As databases and traffic over the internet grow larger by the day, the performance of sending information has become a target of great importance. In past years, other software architectural styles such as REST have been used, as REST is a reliable framework and works well when one has a dependable internet connection. In 2015, the query language GraphQL was released to the public by Facebook as an alternative to REST. GraphQL improved data fetching by, for example, removing the possibility of under- and over-fetching: a client gets exactly the data it has requested, nothing more, nothing less. Creating a GraphQL schema and server implementation requires time, effort and knowledge, yet it is a prerequisite for running GraphQL over a legacy database. For this reason, multiple server implementation tools have been created by vendors that reduce development time by auto-generating a GraphQL schema and server implementation from an already existing database. This bachelor thesis picks, runs and compares benchmarks of two such server implementation tools, Hasura and PostGraphile, using a benchmark methodology based on technical difficulties (choke points). The results of our benchmark suggest that throughput is larger for Hasura than for PostGraphile, whilst query execution time and query response time are similar. PostGraphile is better at paging without offset as well as ordering, but in all other cases Hasura outperforms PostGraphile or shows similar results.
Linköping GraphQL Benchmark (LinGBM)
APA, Harvard, Vancouver, ISO, and other styles
22

Welker, Cécile. "La fabrique des "nouvelles images" : l’émergence des images de synthèse en France dans la création audiovisuelle (1968-1989)." Thesis, Sorbonne Paris Cité, 2015. http://www.theses.fr/2015USPCA116/document.

Full text
Abstract:
Between the first PhD in computer graphics (defended in 1968) and the first entirely synthetic advertisement broadcast on television (1983), computer-generated images became "new images". The aim is not to assess these so-called new images according to the distinctive qualities of rupture or continuity that the expression tends to suggest, but rather to study their mode of production and representation in order to determine what they show of the techniques employed, and the imaginative worlds those techniques conveyed at the moment digital images emerged, as so many creative propositions but also ideological stakes. Studied first from a technical and then from an aesthetic point of view, through the cross-study of testimonies, grey literature and a formal analysis of the films, these productions reveal a process of reappropriation of the medium, before and after the image. This thesis not only defines an "official" history of computer-generated images in France, situating the productions as precisely as possible in their technical, political and cultural environment, but also traces computer-generated images as an innovative product, from their places of fabrication to their places of legitimation. Its conclusions bring into play the various local circulations of people, tools and images, at a time when the cultural policy of 1981 promoted the junction of art and computing, and the indiscipline of creations.
APA, Harvard, Vancouver, ISO, and other styles
23

Russ, Ricardo. "Service Level Achievments - Test Data for Optimal Service Selection." Thesis, Linnéuniversitetet, Institutionen för datavetenskap (DV), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-50538.

Full text
Abstract:
This bachelor's thesis was written in the context of a joint research group which developed a framework for finding and providing the best-fit web service for a user. The research group's problem lies in testing the framework sufficiently: it can be tested either with test data produced by real web services, which costs money, or with generated test data based on a simulation of web service behavior. The second approach was developed within this thesis in the form of a test data generator. The generator simulates a web service request by defining internal services, where each service has its own internal graph reflecting the service's structure. A service can be atomic or composed of other services that are called in a specific manner (sequential, loop, conditional). Test data is generated by randomly traversing the services, which results in variable response times, since the graph structure changes every time the system is initialized. The implementation process revealed problems that were not solved within the time frame. These problems present interesting challenges for the dynamic generation of random graphs and should be targeted in further research.
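The composition rules named above (sequential, loop, conditional) can be sketched as small latency simulators. This is a hypothetical reconstruction of the idea, not the thesis's generator; the service names, latencies and probabilities are invented:

```python
import random

def atomic(latency):
    """A leaf service with a fixed response time."""
    return lambda: latency

def sequence(*services):
    """Call sub-services one after another; their times add up."""
    return lambda: sum(s() for s in services)

def loop(service, max_iterations):
    """Call a sub-service a random number of times."""
    return lambda: sum(service() for _ in range(random.randint(1, max_iterations)))

def conditional(a, b, p=0.5):
    """Call service a with probability p, otherwise service b."""
    return lambda: a() if random.random() < p else b()

# A request that usually hits a fast cache but sometimes a slow database:
db, cache = atomic(5.0), atomic(0.5)
request = sequence(conditional(cache, db, p=0.8), atomic(1.0))
```

Each call to request() then yields 1.5 or 6.0 time units depending on the path taken; re-randomizing the service graphs between initializations gives the variable response times the generator needs.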
APA, Harvard, Vancouver, ISO, and other styles
24

Tan, Enhua. "Spam Analysis and Detection for User Generated Content in Online Social Networks." The Ohio State University, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=osu1365520334.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Ysasi, Alonso Alejandro. "La obra gráfica de Pedro Quetglas “Xam” (1915-2001): la riqueza de un patrimonio." Doctoral thesis, Universitat de les Illes Balears, 2014. http://hdl.handle.net/10803/284394.

Full text
Abstract:
This thesis is a study, analysis of, and approach to the graphic work of the 20th-century Majorcan artist Pedro Quetglas, known by his pseudonym "Xam". His activity has been systematized on the basis of his biography, the techniques he worked in, and his milieu. Xam worked in several art fields, such as caricature, drawing, poster design, woodcut, painting, monotype, serigraphy and chalcographic engraving. From the whole of his production, the thesis focuses on the graphic work produced between 1944, when his first xylography can be dated, and his death in 2001, the year of his last lithography. The work is set in a field without an immediate tradition of study of graphic art in Mallorca, which had practically disappeared after the important Guasp printing house closed down. It has been possible to document more than 400 blocks and, at the same time, their impressions, which add up to 600 prints in chalcography, xylography, serigraphy and lithography.
APA, Harvard, Vancouver, ISO, and other styles
26

Kuo, Li-Chieh, and 郭立傑. "A Computer-generated Hologram Using 3D Graphic Model." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/60777404946283776883.

Full text
Abstract:
Master's thesis<br>National Taipei University<br>Graduate Institute of Electrical Engineering<br>97<br>For generating holograms from 3D computer graphic (CG) models composed of non-parallel 2D polygons, a computational method that propagates from tilted object planes to the hologram, based on a modified Fresnel-Kirchhoff diffraction formula, is proposed and demonstrated. The modified Fresnel-Kirchhoff diffraction formula for a tilted plane can be implemented numerically using the Fresnel transform approach. The experimental results show that a hologram generated by the proposed approach reconstructs a 3D image with correct depth cues and suffers less sampling restriction than one generated by the angular spectrum approach.
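For context, the standard paraxial Fresnel diffraction integral that such hologram computations build on can be written as below; this is the untilted form, which the thesis modifies to propagate from tilted object planes:

```latex
U(x, y; z) = \frac{e^{ikz}}{i\lambda z}
  \iint U_0(x', y')
  \exp\!\left[ \frac{ik}{2z}\left( (x - x')^2 + (y - y')^2 \right) \right]
  \, dx' \, dy' ,
\qquad k = \frac{2\pi}{\lambda}
```

Here \(U_0\) is the field on the object plane, \(U\) the field on the hologram plane at distance \(z\), and \(\lambda\) the wavelength.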
APA, Harvard, Vancouver, ISO, and other styles
27

Marshall, Bronwyn Gillian. "Computer generated lighting techniques: the study of mood in an interior visualisation." Thesis, 2009. http://hdl.handle.net/10539/7300.

Full text
Abstract:
The report investigates computer generated (CG) lighting techniques with a focus on the rendering of interior architectural visualisations. With rapid advancements in CG technology, the demand and expectation for greater photorealism in visualisations are increasing. The tools to achieve this are widely available and fairly easy to apply; however, renderings produced on a local scale still display mere functionality and lack visual appeal. The research discusses how design principles and aesthetics can be used effectively to create visual interest and convey mood in a visualisation, with strong attention to the elements defined as fundamental to achieving photorealism. The focus is on a solid understanding of CG lighting techniques and principles in order to achieve high quality, dynamic visualisations. Case studies examine the work of lighting artist James Turrell and 3D artist Jose Pedro Costa and apply the findings to a creative project, encompassing the discussions in the report. The result is the completion of three photorealistic renderings of an interior visualisation, using different CG lighting techniques to convey mood. The research provides a platform for specialisation in the 3D environment and encourages a multidisciplinary approach to learning.
APA, Harvard, Vancouver, ISO, and other styles
28

Griffin, Christopher Corey. "Automated Vehicle Articulation and Animation: A Maxscript Approach." 2010. http://hdl.handle.net/1969.1/ETD-TAMU-2010-12-8857.

Full text
Abstract:
This thesis presents an efficient, animation-production-centric solution to the articulation and animation of computer generated automobiles for creating animations with a high degree of believability. The thesis has two main foci: an automated and customizable articulation system for automobile models, and a vehicle animation system that uses minimal simulation techniques. The primary contribution of this thesis is the definition of a computer graphics animation software program that combines simulation and key-frame methods for defining vehicle motion. Efficiency is emphasized to prevent long wait times during the animation process and to allow for immediate interactivity. The program, when implemented, allows a vehicle to be animated with minimal input and setup. These automated tools could make animating an automobile, or multiple automobiles of varying form and dimensions, much more efficient and believable in a film, animation, or game production environment.
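One staple of the minimal-simulation approach to vehicle articulation is deriving wheel spin directly from the distance travelled along the animation path, rather than simulating full dynamics. A small sketch of that idea follows; the function name is hypothetical, and the thesis's actual tool is written in MAXScript rather than Python:

```python
import math

def wheel_rotation_deg(path_length, wheel_radius):
    """Degrees of wheel spin for a wheel rolling without slipping.

    One full revolution (360 degrees) per circumference travelled:
    rotation in radians = path_length / wheel_radius.
    """
    return math.degrees(path_length / wheel_radius)

# A wheel of radius 0.5 rolling one circumference (2 * pi * 0.5) turns 360 degrees.
spin = wheel_rotation_deg(2 * math.pi * 0.5, 0.5)
```

Driving the wheel's rotation channel from arc length this way keeps the rig interactive, since no physics solve is needed per frame.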
APA, Harvard, Vancouver, ISO, and other styles
29

Losure, Michael Robert. "A Non-photorealistic Model for Procedural Painterly Rendered Trees in the Style of Corot." 2008. http://hdl.handle.net/1969.1/ETD-TAMU-3080.

Full text
Abstract:
This thesis describes the development of a system for the procedural generation and painterly rendering of trees. Specifically, the rendered trees are modeled after those found in the oil landscape paintings of 19th century French painter Camille Corot. The rendering system, which is a combination of MEL-scripted Maya tools and Renderman shaders, facilitates the creation of still images that look convincingly painterly, as well as 3D animations with temporal coherence. Brush stroke properties are animated based on distance from the camera, so that traditional painting techniques for representing depth are incorporated into the computer-generated animations. During the development process, the system was generalized to apply to other structures, such as grass and rocks, and allows for the creation and rendering of entire landscapes. Several example animations were created with the system to demonstrate the ideas developed during the process and the quality of the results.
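The distance-based animation of brush stroke properties described above is, in essence, an interpolation of stroke attributes between near and far values, clamped to a depth range. A minimal sketch of that mapping, with names and default values that are illustrative assumptions rather than the thesis's MEL/RenderMan code:

```python
def stroke_scale(distance, near=5.0, far=50.0, near_scale=1.0, far_scale=0.25):
    """Interpolate a brush stroke attribute (e.g. stroke size) by camera distance.

    Strokes shrink as objects recede, mimicking the way painters
    suggest depth with smaller, less detailed marks.
    """
    t = min(max((distance - near) / (far - near), 0.0), 1.0)  # clamp to [0, 1]
    return near_scale + t * (far_scale - near_scale)
```

Evaluating this per stroke each frame gives temporally coherent results, since the attribute varies smoothly with camera distance instead of being re-randomized.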
APA, Harvard, Vancouver, ISO, and other styles
