Dissertations / Theses on the topic '3D computer models'

Consult the top 50 dissertations / theses for your research on the topic '3D computer models.'


1

Zhu, Junyi. "A software pipeline for converting 3D models into 3D breadboards." Thesis, Massachusetts Institute of Technology, 2019. https://hdl.handle.net/1721.1/122732.

Full text
Abstract:
Thesis: S.M. in Computer Science and Engineering, Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 44-46).
3D breadboards are a new form of physical prototype with breadboard functionality integrated directly into their surfaces. They offer the flexibility and re-configurability of breadboards while conforming to the shape of the prototype; as a result, they can be used to test functionality directly in the context of the actual physical form. Our custom 3D editor plugin supports designers in converting 3D models into 3D breadboards. The plugin first generates a pinhole pattern on the surface of the 3D model; designers can then connect the holes into power lines and terminal strips according to the desired layout. To fabricate a 3D breadboard, designers only have to 3D print the housing and fill the wire channels with conductive silicone. We explore a number of computational design and computer graphics approaches to convert arbitrary 3D models into 3D breadboards, demonstrate a range of interactive prototypes designed with our software system, and report on a user study with six participants that validates the concept of integrating breadboards into physical prototypes.
by Junyi Zhu.
S.M. in Computer Science and Engineering
APA, Harvard, Vancouver, ISO, and other styles
2

Siljedahl, Rasmus. "3D Conversion from CAD models to polygon models." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-129881.

Full text
Abstract:
This thesis describes the design and implementation of an application that converts CAD models into polygon models. When going from CAD models to 3D polygon models, a conversion of the file type has to be performed. XperDI uses these polygon models in their sales configurator tool to create a photo-realistic environment in which the end product can be examined before it is manufactured. Existing tools are difficult to use and lack features that are important for the sales configurator. The purpose of this thesis is to create a proof-of-concept application that converts CAD models into 3D polygon models. This new lightweight application is a simpler alternative for converting CAD models into polygon models and offers features needed for the intended use of these models that the alternative products do not.
APA, Harvard, Vancouver, ISO, and other styles
3

Bici, Mehmet Oguz. "Robust Transmission of 3D Models." PhD thesis, METU, 2010. http://etd.lib.metu.edu.tr/upload/12612690/index.pdf.

Full text
Abstract:
In this thesis, robust transmission of 3D models represented by static or time-consistent animated meshes is studied from the aspects of scalable coding, multiple description coding (MDC) and error-resilient coding. First, three methods for MDC of static meshes are proposed, based on multiple description scalar quantization, partitioning of wavelet trees, and optimal protection of the scalable bitstream by forward error correction (FEC), respectively. For each method, optimizations and tools to decrease complexity are presented. The FEC-based MDC method is also extended to a method for packet-loss-resilient transmission, followed by an in-depth performance comparison with state-of-the-art techniques, which showed significant improvement. Next, three methods for MDC of animated meshes are proposed, based on layer duplication and partitioning of the set of vertices of a scalable coded animated mesh by spatial or temporal subsampling, where each set is encoded separately to generate independently decodable bitstreams. The proposed MDC methods can achieve varying redundancy allocations by including a number of encoded spatial or temporal layers from the other description. The algorithms are evaluated with redundancy-rate-distortion curves and per-frame reconstruction analysis. Then, for layered predictive compression of animated meshes, three novel prediction structures are proposed and integrated into a state-of-the-art layered predictive coder. The proposed structures are based on weighted spatial/temporal prediction and on angular relations of triangles between current and previous frames. The experimental results show that, compared to the state-of-the-art scalable predictive coder, up to 30% bitrate reductions can be achieved with the combination of the proposed prediction schemes, depending on the content and quantization level.
Finally, optimal quality scalability support is proposed for the state-of-the-art scalable predictive animated mesh coding structure, which only supports resolution scalability. Two methods based on arranging the bitplane order with respect to encoding or decoding order are proposed, together with a novel trellis-based optimization framework. Possible simplifications are provided to achieve a tradeoff between compression performance and complexity. Experimental results show that the optimization framework achieves quality scalability with significantly better compression performance than the state of the art without optimization.
APA, Harvard, Vancouver, ISO, and other styles
4

Magnusson, Henrik. "Integrated generic 3D visualization of Modelica models." Thesis, Linköping University, Department of Computer and Information Science, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-15453.

Full text
Abstract:

OpenModelica is a complete environment for developing and simulating Modelica models based on free software. It is promoted and developed by the OpenModelica Consortium. This thesis details a method for describing and consequently displaying visualizations of Modelica models in OMNotebook, an application in the OpenModelica suite where models can be written and simulated in a document mixed with text, images and plots. Two different approaches are discussed; one based on Modelica annotations and one based on creating a simple object hierarchy which can be connected to existing models. Trial implementations are done which make it possible to discard the annotation approach, and show that an object based solution is the one best suited for a complete implementation. It is expanded into a working 3D visualization solution, embedded in OMNotebook.

APA, Harvard, Vancouver, ISO, and other styles
5

Aubry, Mathieu. "Representing 3D models for alignment and recognition." Thesis, Paris, Ecole normale supérieure, 2015. http://www.theses.fr/2015ENSU0006/document.

Full text
Abstract:
Thanks to the success of 3D reconstruction algorithms and the development of online tools for computer-aided design (CAD), the number of publicly available 3D models has grown significantly in recent years, and will continue to do so. This thesis investigates representations of 3D models for 3D shape matching, instance-level 2D-3D alignment, and category-level 2D-3D recognition. The geometry of a 3D shape can be represented almost completely by the eigenfunctions and eigenvalues of the Laplace-Beltrami operator on the shape. We use this mathematically elegant representation to characterize points on the shape, with a new notion of scale. This 3D point signature can be interpreted in the framework of quantum mechanics and we call it the Wave Kernel Signature (WKS). We show that it has advantages with respect to the previous state-of-the-art shape descriptors, and can be used for 3D shape matching, segmentation and recognition. A key element for understanding images is the ability to align an object depicted in an image to its given 3D model. We tackle this instance-level 2D-3D alignment problem for arbitrary 2D depictions including drawings, paintings, and historical photographs. This is a tremendously difficult task as the appearance and scene structure in the 2D depictions can be very different from the appearance and geometry of the 3D model, e.g., due to the specific rendering style, drawing error, age, lighting or change of seasons. We represent the 3D model of an entire architectural site by a set of visual parts learned from rendered views of the site. We then develop a procedure to match those scene parts, which we call 3D discriminative visual elements, to the 2D depiction of the architectural site. We validate our method on a newly collected dataset of non-photographic and historical depictions of three architectural sites.
We extend this approach to describe not only a single architectural site but an entire object category, represented by a large collection of 3D CAD models. We develop a category-level 2D-3D alignment method that not only detects objects in cluttered images but also identifies their approximate style and viewpoint. We evaluate our approach both qualitatively and quantitatively on a subset of the challenging Pascal VOC 2012 images of the "chair" category using a reference library of 1394 CAD models downloaded from the Internet.
APA, Harvard, Vancouver, ISO, and other styles
6

Gil Camacho, Carlos. "Part Detection in Online-Reconstructed 3D Models." Thesis, Örebro universitet, Institutionen för naturvetenskap och teknik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:oru:diva-51600.

Full text
Abstract:
This thesis introduces a system to identify objects in a 3D reconstructed model. In particular, it is applied to automating the inspection of a truck engine by detecting certain parts in an online-reconstructed 3D model. The work thus shows how augmented reality and computer vision can be applied in a real application to automate an inspection task. The system employs the Signed Distance Function for the 3D representation, which other research has shown to be an efficient method for 3D reconstruction of environments. Then, some common shape-recognition processes are applied to identify the pose of a specific part of the 3D model. This thesis explains the steps to achieve this task. The model is built using an industrial robot arm with a depth camera attached to the end effector, which allows taking snapshots from different viewpoints that are fused in a common frame to reconstruct the 3D model. The path for the robot is generated by applying translations to the initial pose of the end effector. Once the model is generated, the identification of the part is carried out. The reconstructed model and the model to be detected are analysed by detecting keypoints and feature descriptors. These features can be matched to obtain several candidate instances over the target model, in this case the engine. Finally, these instances are filtered by applying constraints to obtain the true pose of the object in the scene. Some results are presented: models were generated from a real truck engine and analysed to detect the oil filters using different keypoint detectors. The results show that the recognition quality is good in almost all cases but still fails for some of the detectors. Keypoints that are too distinctive are more prone to produce wrong registrations due to the differences between the target and the scene. At the same time, more constraints make the detection more robust but also make the system less flexible.
APA, Harvard, Vancouver, ISO, and other styles
7

Buchholz, Henrik. "Real-time visualization of 3D city models." PhD thesis, Universität Potsdam, 2006. http://opus.kobv.de/ubp/volltexte/2007/1333/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Shlyakhter, Ilya, and Max Rozenoer. "Reconstruction of 3D tree models from instrumented photographs." Thesis, Massachusetts Institute of Technology, 1999. http://hdl.handle.net/1721.1/80136.

Full text
Abstract:
Thesis (M.Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1999.
Also issued with order of names reversed on t.p.
Includes bibliographical references (leaves 33-36).
by Ilya Shlyakhter and Max Rozenoer.
M.Eng.
APA, Harvard, Vancouver, ISO, and other styles
9

Huang, Jennifer. "Component-based face recognition with 3D morphable models." Thesis, Massachusetts Institute of Technology, 2002. http://hdl.handle.net/1721.1/87403.

Full text
Abstract:
Thesis (M.Eng. and S.B.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, February 2003.
Includes bibliographical references (p. 36-38).
by Jennifer Huang.
M.Eng. and S.B.
APA, Harvard, Vancouver, ISO, and other styles
10

Marks, Tim K. "Facing uncertainty: 3D face tracking and learning with generative models." PhD thesis, University of California, San Diego, 2006. http://wwwlib.umi.com/cr/ucsd/fullcit?p3196545.

Full text
Abstract:
Thesis (Ph. D.)--University of California, San Diego, 2006.
Title from first page of PDF file (viewed February 27, 2006). Available via ProQuest Digital Dissertations. Vita. Includes bibliographical references (p. 143-148).
APA, Harvard, Vancouver, ISO, and other styles
11

Lindberg, Mimmi. "Forensic Validation of 3D models." Thesis, Linköpings universitet, Datorseende, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-171159.

Full text
Abstract:
3D reconstruction can be used in forensic science to reconstruct crime scenes and objects so that measurements and further information can be acquired off-site. It is desirable to use image-based reconstruction methods, but there is currently no procedure available for determining the uncertainty of such reconstructions. In this thesis the uncertainty of Structure from Motion is investigated. This is done by exploring the literature available on the subject and compiling the relevant information in a literature summary. Also, Monte Carlo simulations are conducted to study how the feature position uncertainty affects the uncertainty of the parameters estimated by bundle adjustment. The experimental results show that poses of cameras that contain few image correspondences are estimated with higher uncertainty. The poses of such cameras are estimated with lower uncertainty if they have feature correspondences in cameras that contain a higher number of projections.
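The Monte Carlo idea described in this abstract, perturbing 2D measurements with Gaussian noise and observing the spread of the estimated geometry, can be illustrated with a toy example. This is a minimal sketch under assumed numbers, not the thesis's bundle-adjustment study: two cameras on a baseline observe bearing angles to a single point, and repeated noisy triangulations yield an empirical uncertainty of the reconstructed depth.

```python
import math
import random

C1, C2 = (0.0, 0.0), (1.0, 0.0)   # assumed camera centres on the x-axis
P_TRUE = (0.5, 2.0)               # assumed true point position

def bearing(cam, p):
    # angle of the ray from camera centre to the point
    return math.atan2(p[1] - cam[1], p[0] - cam[0])

def triangulate(a1, a2):
    # intersect ray y = tan(a1) * x from C1 with ray y = tan(a2) * (x - 1) from C2
    t1, t2 = math.tan(a1), math.tan(a2)
    x = t2 / (t2 - t1)
    return x, t1 * x

random.seed(1)
SIGMA = 0.002                     # assumed angular measurement noise (radians)
true_a1, true_a2 = bearing(C1, P_TRUE), bearing(C2, P_TRUE)

# Monte Carlo: perturb the "feature" measurements and re-triangulate
estimates = [triangulate(true_a1 + random.gauss(0, SIGMA),
                         true_a2 + random.gauss(0, SIGMA))
             for _ in range(2000)]
mean_y = sum(e[1] for e in estimates) / len(estimates)
std_y = (sum((e[1] - mean_y) ** 2 for e in estimates) / len(estimates)) ** 0.5
print(mean_y, std_y)              # mean depth near 2.0, with a small spread
```

The same recipe scales up to full bundle adjustment: sample noise on every feature position, re-run the estimator, and report the spread of the recovered camera and point parameters.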
APA, Harvard, Vancouver, ISO, and other styles
12

Simek, Kyle. "Branching Gaussian Process Models for Computer Vision." Diss., The University of Arizona, 2016. http://hdl.handle.net/10150/612094.

Full text
Abstract:
Bayesian methods provide a principled approach to some of the hardest problems in computer vision—low signal-to-noise ratios, ill-posed problems, and problems with missing data. This dissertation applies Bayesian modeling to infer multidimensional continuous manifolds (e.g., curves, surfaces) from image data using Gaussian process priors. Gaussian processes are ideal priors in this setting, providing a stochastic model over continuous functions while permitting efficient inference. We begin by introducing a formal mathematical representation of branching curvilinear structures called a curve tree, and we define a novel family of Gaussian processes over curve trees called branching Gaussian processes. We define two types of branching Gaussian processes and show how to extend them to branching surfaces and hypersurfaces. We then apply Gaussian processes in three computer vision applications. First, we perform 3D reconstruction of moving plants from 2D images. Using a branching Gaussian process prior, we recover high quality 3D trees while being robust to plant motion and camera calibration error. Second, we perform multi-part segmentation of plant leaves from highly occluded silhouettes using a novel Gaussian process model for stochastic shape. Our method obtains good segmentations despite highly ambiguous shape evidence and minimal training data. Finally, we estimate 2D trees from microscope images of neurons with highly ambiguous branching structure. We first fit a tree to a blurred version of the image where structure is less ambiguous. Then we iteratively deform and expand the tree to fit finer images, using a branching Gaussian process regularizing prior for deformation. Our method infers natural tree topologies despite ambiguous branching and image data containing loops. Our work shows that Gaussian processes can be a powerful building block for modeling complex structure, and they perform well in computer vision problems having significant noise and ambiguity.
APA, Harvard, Vancouver, ISO, and other styles
13

Zhao, Shu Jie. "Line drawings abstraction from 3D models." Thesis, University of Macau, 2009. http://umaclib3.umac.mo/record=b2130124.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Bhat, Ravindra K. "Visualization Using Computer Generated 3D Models and Their Applications in Architectural Practice." The University of Arizona, 1994. http://hdl.handle.net/10150/555416.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Bayar, Hakan. "Texture Mapping by Multi-image Blending for 3D Face Models." Master's thesis, METU, 2007. http://etd.lib.metu.edu.tr/upload/2/12609063/index.pdf.

Full text
Abstract:
Computer interfaces have shifted to 3D graphics environments owing to their large number of applications, ranging from scientific visualization to entertainment. To enhance the realism of 3D models, an established rendering technique, texture mapping, is used. In computer vision, one way to generate this texture is to combine extracted parts of multiple images of real objects, and this is the topic studied in this thesis. While the 3D face model is obtained using a 3D scanner, the texture to cover the model is constructed from multiple images. After marking control points on the images and on the 3D face model, a texture image to cover the 3D face model is generated. Moreover, the effects of some features of OpenGL, a graphics library, on the texture-covered 3D face model are studied.
APA, Harvard, Vancouver, ISO, and other styles
16

Duval, Thierry. "Models for design, implementation and deployment of 3D Collaborative Virtual Environments." Habilitation à diriger des recherches, Université Rennes 1, 2012. http://tel.archives-ouvertes.fr/tel-00764830.

Full text
Abstract:
This work aims at providing some cues to address the essential requirements in the design of 3D Collaborative Virtual Environments (CVE). We have identified six essential topics that must be addressed when designing a CVE. For each of them, we present a state of the art of the solutions that can address the topic, then show our own contributions: how we improve existing solutions and what our new propositions are. 1 - Choosing a model for the distribution of a CVE. We need a distribution model to distribute the content of a CVE as efficiently as possible among all the nodes involved in its execution, including the machines of the distant users. Our proposition is to allow CVE designers to mix in the same CVE the three main distribution models usually encountered: centralized on a server, totally replicated on each site, or distributed according to a hybrid distribution model. 2 - Choosing a model for the synchronization of these nodes. To maintain consistency between all the nodes involved in the execution of a CVE, we must choose between strong synchronization, relaxed synchronization, or an in-between solution. Our proposition is to manage temporary relaxation of the synchronization due to network breakdowns, with several synchronization groups of users, making them aware of these network breakdowns, and to allow some shared objects to migrate from one site to another. 3 - Adapting the Virtual Environment to various hardware systems. VR applications must be adapted to the software and to the hardware input and output devices that are available at run-time, in order to be able to deploy a CVE onto different kinds of hardware and software. Our solution is the PAC-C3D software architectural model, which is able to deal with the three main distribution modes encountered in CVE.
4 - Designing interaction and collaboration in the VE. Expressing the interactive and collaborative capabilities of the content of a CVE goes one step beyond geometric modeling, by adding interactive and collaborative features to virtual objects. We propose a unified model of dialog between interactive objects and interaction tools, with an extension to Collada to describe the interactive and collaborative properties of these interactive objects and interaction tools. 5 - Choosing the best metaphors for collaborative interactions. Most of the time, single-user interaction tools and metaphors are not adapted to offer efficient collaboration between users of a CVE. We adapt some of these tools and metaphors to collaborative interactions, and we propose new, truly collaborative metaphors to enhance multi-user collaborative interactions, with dedicated collaborative feedback. 6 - Embedding the users' physical workspaces within the CVE. Taking into account users' physical workspaces makes it possible to adapt a CVE to the hardware input and output devices of the users, and to make them aware of their physical limitations and of those of the other users, for better interaction and collaboration. We propose the Immersive Interactive Virtual Cabin (IIVC) concept to embed such 3D representations in CVE.
APA, Harvard, Vancouver, ISO, and other styles
17

Weaver, Alexandra Alden. "Gaming Reality: Real-World 3D Models in Interactive Media." Scholarship @ Claremont, 2015. http://scholarship.claremont.edu/scripps_theses/619.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Sundberg, Simon. "Evolving digital 3D models using interactive genetic algorithm." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-177580.

Full text
Abstract:
The search space of digital 3D model designs is vast and can be hard to navigate. In this study, a system that evolves digital 3D models using an interactive genetic algorithm (IGA) was constructed to aid this process. The goal of the study was to investigate how such a system can be constructed to aid design-space exploration of digital 3D models. The system is integrated with the 3D creation suite Blender and uses its Python API to programmatically edit models and generate images of the results, which are displayed on a web page where users can rate them to evolve the model. The proposed system exposes all settings of the genetic algorithm, including population size, mutation rate, crossover algorithm, selection algorithm and more. Furthermore, the settings can be modified throughout the evolutionary process, and the algorithm can be rewound to previous generations to give more control over its progression. The script-based nature of the proposed system is powerful but not practical for people without programming experience. For widespread adoption of IGAs as an exploratory design aid, it would help if the IGA were directly integrated into the design software being used, to make it easier to use and reduce user fatigue.
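The evolutionary loop such a system runs (rate, select, recombine, mutate) can be sketched in a few lines. This is a minimal illustration under assumed parameters, not the thesis's Blender-integrated system: the genome here is an arbitrary vector of shape parameters, truncation selection and single-point crossover are illustrative choices, and the random ratings stand in for the votes users would submit on the web page.

```python
import random

GENOME_LEN = 4      # assumed number of shape parameters per model
POP_SIZE = 8
MUTATION_RATE = 0.2

def random_genome():
    return [random.uniform(0.0, 1.0) for _ in range(GENOME_LEN)]

def crossover(a, b):
    # single-point crossover between two parent genomes
    point = random.randrange(1, GENOME_LEN)
    return a[:point] + b[point:]

def mutate(genome):
    # perturb each gene with small Gaussian noise at MUTATION_RATE
    return [g + random.gauss(0, 0.1) if random.random() < MUTATION_RATE else g
            for g in genome]

def evolve(population, ratings):
    # ratings: one user-assigned score per individual (higher is better)
    ranked = [g for _, g in sorted(zip(ratings, population),
                                   key=lambda p: p[0], reverse=True)]
    parents = ranked[:POP_SIZE // 2]          # truncation selection
    return [mutate(crossover(random.choice(parents), random.choice(parents)))
            for _ in range(POP_SIZE)]

population = [random_genome() for _ in range(POP_SIZE)]
ratings = [random.random() for _ in population]   # stand-in for user votes
next_gen = evolve(population, ratings)
print(len(next_gen), len(next_gen[0]))
```

In the actual system, each genome would drive model edits through Blender's Python API, rendered images would be shown to users, and their ratings would replace the random stand-ins before each call to `evolve`.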
APA, Harvard, Vancouver, ISO, and other styles
19

Yao, Miaojun. "3D Printable Designs of Rigid and Deformable Models." The Ohio State University, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=osu1502906675481174.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Villanueva, Aylagas Mónica. "Reconstruction and recommendation of realistic 3D models using cGANs." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-231492.

Full text
Abstract:
Three-dimensional modeling is the process of creating a representation of a surface or object in three dimensions via specialized software, where the modeler scans a real-world object into a point cloud, creates a completely new surface or edits the selected representation. This process can be challenging due to factors like the complexity of the 3D creation software or the number of dimensions in play. This work proposes a framework that recommends three types of reconstructions of an incomplete or rough 3D model using Generative Adversarial Networks (GANs). These reconstructions follow the distribution of real data, resemble the user model, and stay close to the dataset while keeping features of the input, respectively. The main advantage of this approach is the acceptance of 3D models as input for the GAN instead of latent vectors, which avoids the need to train an extra network to project the model into the latent space. The systems are evaluated both quantitatively and qualitatively. The quantitative measure relies on the Intersection over Union (IoU) metric, while the qualitative evaluation is measured by a user study. Experiments show that it is hard to create a system that generates realistic models following the distribution of the dataset, since users have different opinions on what is realistic. However, similarity between the user input and the reconstruction is well accomplished and is, in fact, the most valued feature for modelers.
APA, Harvard, Vancouver, ISO, and other styles
21

Mazzolini, Ryan. "Procedurally generating surface detail for 3D models using voxel-based cellular automata." Master's thesis, University of Cape Town, 2016. http://hdl.handle.net/11427/20502.

Full text
Abstract:
Procedural generation is used extensively in the field of computer graphics to automate content generation and speed up development. One particular area often automated is the generation of additional colour and structural detail for existing 3D models. This empowers artists by providing a tool-set that enhances their existing workflow and saves time. 3D surface structures are traditionally represented by polygon mesh-based models augmented by 2D mapping techniques. These methods can approximate features such as caves and overhangs, but they are complex and difficult to modify. As an alternative, a grid of voxels can model 3D shapes and surfaces, similar to how 2D pixels form an image. The regular form of voxel-based models is easier to alter, at the cost of additional computational overhead. One technique for generating and altering voxel content is Cellular Automata (CA). CAs are able to produce complex structures from simple rules and also map easily to higher dimensions, such as voxel datasets. However, creating CA rule-sets can be difficult and tedious, especially for multidimensional CA. In our work we use a grammar system to create surface-detail CA. The grammar we develop is similar to formal grammars used in procedural generation, such as L-systems and shape grammars. Our system is composed of three main sections: a model converter, a grammar and a CA executor. The model converter changes polygon-mesh models to and from a voxel-based model. The grammar provides a simple language to create CA that can consider 3D neighbourhoods and query parameters such as colour or structure. Finally, the CA executor interprets the produced grammars into surface-oriented CAs. The final output of this system is a polygon-mesh model, altered by the CA, which is usable for graphics applications. We test the system by replicating a number of CA use-cases with our grammar system. From the results, we conclude that our grammar system is capable of creating a wide range of 3D detail CA. However, the high resolution of the resulting meshes and slow processing times make the process better suited to off-line processing and pre-production.
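The voxel-based CA idea this abstract builds on can be illustrated with a minimal example. This is a sketch, not the thesis's grammar system: the grid size, the 6-connected neighbourhood, and the binary majority rule are assumed stand-ins for the richer surface-oriented rules a grammar would generate over colour and structure.

```python
import random

N = 8  # assumed grid side length

def neighbours(x, y, z):
    # 6-connected neighbourhood, clipped at the grid boundary
    for dx, dy, dz in ((1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)):
        nx, ny, nz = x + dx, y + dy, z + dz
        if 0 <= nx < N and 0 <= ny < N and 0 <= nz < N:
            yield nx, ny, nz

def step(grid):
    # one synchronous CA update over the whole voxel grid
    new = [[[0] * N for _ in range(N)] for _ in range(N)]
    for x in range(N):
        for y in range(N):
            for z in range(N):
                ns = [grid[nx][ny][nz] for nx, ny, nz in neighbours(x, y, z)]
                # majority rule: become 1 if more than half the neighbours are 1
                new[x][y][z] = 1 if sum(ns) * 2 > len(ns) else 0
    return new

random.seed(0)
grid = [[[random.randint(0, 1) for _ in range(N)] for _ in range(N)]
        for _ in range(N)]
for _ in range(3):          # run a few CA steps
    grid = step(grid)
print(sum(v for plane in grid for row in plane for v in row))
```

In the full pipeline described above, the grid would come from voxelizing a polygon mesh, the rule would be produced by the grammar, and the altered voxels would be converted back to a mesh for rendering.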
APA, Harvard, Vancouver, ISO, and other styles
22

Trapp, Matthias. "Analysis and exploration of virtual 3D city models using 3D information lenses." Master's thesis, Universität Potsdam, 2007. http://opus.kobv.de/ubp/volltexte/2008/1393/.

Full text
Abstract:
This thesis addresses real-time rendering techniques for 3D information lenses based on the focus & context metaphor. It analyzes, conceives, implements, and reviews their applicability to objects and structures of virtual 3D city models. In contrast to digital terrain models, the application of focus & context visualization to virtual 3D city models is barely researched. However, the purposeful visualization of contextual data is of extreme importance for interactive exploration and analysis in this field. Programmable hardware enables the implementation of new lens techniques that allow the augmentation of the perceptual and cognitive quality of the visualization compared to classical perspective projections. A set of 3D information lenses is integrated into a 3D scene-graph system: • Occlusion lenses modify the appearance of virtual 3D city model objects to resolve their occlusion and consequently facilitate navigation. • Best-view lenses display city model objects in a priority-based manner and mediate their meta information. Thus, they support exploration and navigation of virtual 3D city models. • Color and deformation lenses modify the appearance and geometry of 3D city models to facilitate their perception. The presented techniques for 3D information lenses and their application to virtual 3D city models clarify their potential for interactive visualization and form a base for further development.
This Diplom thesis addresses real-time rendering techniques for 3D information lenses based on the focus & context metaphor. Their applicability to objects and structures of virtual 3D city models is analysed, conceived, implemented, and evaluated. In contrast to the application area of digital terrain models, focus & context visualization for virtual 3D city models is barely researched. Here, however, the purposeful visualization of context-related data about objects is of great importance for interactive exploration and analysis. Programmable computer hardware allows the implementation of new lens techniques that aim to increase the perceptual and cognitive quality of the visualization compared to classical perspective projections. For a selection of 3D information lenses, integration into a 3D scene-graph system is carried out: • Occlusion lenses modify the appearance of virtual 3D city model objects in order to resolve their occlusions and thus facilitate navigation. • Best-view lenses display city model objects in a priority-defined manner and mediate meta information of virtual 3D city models, thereby supporting their exploration and navigation. • Color and deformation lenses modify the appearance and geometry of regions of 3D city models in order to enhance their perception. The techniques for 3D information lenses presented in this work and their application to virtual 3D city models demonstrate their potential for interactive visualization and form a basis for further developments.
APA, Harvard, Vancouver, ISO, and other styles
23

Chen, Xiaochen. "Tracking vertex flow on 3D dynamic facial models." Diss., Online access via UMI:, 2008.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
24

Eriksson, Oliver, and William Lindblom. "Comparing Perception of Animated Imposters and 3D Models." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-280333.

Full text
Abstract:
In modern 3D games and movies, large character crowds are commonly rendered, which can be expensive with regard to rendering times. As character complexity increases, so does the need for optimizations. Level of Detail (LOD) techniques are used to optimize rendering by reducing geometric complexity in a scene. One such technique is reducing a complex character to a textured flat plane, a so-called imposter. Previous research has shown that imposters are a good way of optimizing 3D rendering, and can be used without decreasing visual fidelity compared to 3D models if rendered statically up to a one-to-one pixel-to-texel ratio. In this report we look further into using imposters as an LOD technique by investigating how animation, in particular rotation, of imposters at different distances affects human perception when observing character crowds. The results for static, non-rotating characters are in line with previous research, showing that imposters are indistinguishable from 3D models when standing still. When rotation is introduced, slow rotation speed is shown to be a dominant factor, compared to distance, in revealing crowds of imposters. On the other hand, the results suggest that fast movements could be used as a means of hiding flaws in pre-rendered imposters, even at near distances, where non-moving imposters could otherwise be distinguishable.
In modern 3D games and films, rendering large numbers of characters is common, which can be costly in terms of rendering times. As character complexity increases, so does the need for optimisations. Level of Detail (LOD) techniques are used to optimise rendering by reducing geometric complexity in a scene. One such technique reduces a complex character to a textured plane, a so-called imposter. Previous research has shown that imposters are a good way to optimise 3D rendering and can be used without reducing visual fidelity compared to 3D models if they are rendered statically up to a one-to-one pixel-to-texel ratio. In this report we look further at imposters as an LOD technique by investigating how animation, in particular rotation, of imposters at different distances affects human perception when crowds of characters are observed. The results for static, non-rotating characters are in line with previous research and show that imposters are indistinguishable from 3D models when standing still. When rotation is introduced, slow rotation proves to be a dominant factor compared to distance in revealing crowds of imposters. On the other hand, the results suggest that fast movements could be used to hide flaws in pre-rendered imposters, even at close distances, where stationary imposters might otherwise be distinguishable.
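The one-to-one pixel-to-texel threshold this study builds on can be sketched as a simple LOD switch. This is only an illustration of the criterion, not the authors' implementation; the camera parameters, function names, and texture resolution are assumptions:

```python
import math

def projected_pixels(object_height_m, distance_m, fov_deg=60.0, screen_h_px=1080):
    """Approximate on-screen height in pixels of an object at a given
    distance, assuming a pinhole camera with the given vertical FOV."""
    angular = 2.0 * math.atan(object_height_m / (2.0 * distance_m))
    return screen_h_px * angular / math.radians(fov_deg)

def choose_lod(object_height_m, distance_m, imposter_texels=256):
    """Switch to the imposter once its texture resolution matches or
    exceeds the object's on-screen size, i.e. the pixel-to-texel ratio
    has dropped to 1:1 or below."""
    px = projected_pixels(object_height_m, distance_m)
    return "imposter" if px <= imposter_texels else "mesh"
```

With these assumed numbers, a 2 m character is drawn as a full mesh up close and replaced by a 256-texel imposter once it is far enough that each texel covers at least one screen pixel.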
APA, Harvard, Vancouver, ISO, and other styles
25

Orriols, Majoral Xavier. "Generative Models for Video Analysis and 3D Range Data Applications." Doctoral thesis, Universitat Autònoma de Barcelona, 2004. http://hdl.handle.net/10803/3037.

Full text
Abstract:
The majority of problems in Computer Vision do not involve a direct relation between the stimulus coming from general-purpose sensors and its corresponding perceptual category. This type of connection requires a complex learning task. In fact, the basic forms of energy, and their possible combinations, are few in number compared to the infinite perceptual categories corresponding to objects, actions, relations between objects, etc. Two main factors determine the level of difficulty of each specific problem: i) the different levels of information that are used, and ii) the complexity of the model employed to explain the observations.
The choice of an appropriate representation for the data becomes significantly relevant when dealing with invariances, since these always imply a reduction of the degrees of freedom of the system, i.e., the number of coordinates needed for the representation is smaller than that used in capturing the data. In this way, the decomposition into basic units and the change of representation allow a complex problem to be transformed into a manageable one. This simplification of the estimation problem must rely on the mechanism by which these primitives are combined in order to obtain an optimal description of the global complex model. This thesis shows how Latent Variable Models reduce dimensionality, take into account the internal symmetries of the problem, offer a way of dealing with partial data, and make it possible to predict new observations.
The lines of research of this thesis are directed at handling data from multiple sources. Specifically, this thesis presents a set of new algorithms applied to two different areas within Computer Vision: i) video analysis and summarization, and ii) 3D range data. Both areas have been approached through the framework of Generative Models, where similar protocols have been employed to represent the data.
The majority of problems in Computer Vision do not contain a direct relation between the stimuli provided by a general purpose sensor and its corresponding perceptual category. A complex learning task must be involved in order to provide such a connection. In fact, the basic forms of energy, and their possible combinations are a reduced number compared to the infinite possible perceptual categories corresponding to objects, actions, relations among objects... Two main factors determine the level of difficulty of a specific problem: i) The different levels of information that are employed and ii) The complexity of the model that is intended to explain the observations.
The choice of an appropriate representation for the data takes on significant relevance when it comes to dealing with invariances, since these usually imply that the number of intrinsic degrees of freedom in the data distribution is lower than the number of coordinates used to represent it. Therefore, the decomposition into basic units (model parameters) and the change of representation make it possible to transform a complex problem into a manageable one. This simplification of the estimation problem has to rely on a proper mechanism of combination of those primitives in order to give an optimal description of the global complex model. This thesis shows how Latent Variable Models reduce dimensionality, taking into account the internal symmetries of a problem, provide a manner of dealing with missing data, and make it possible to predict new observations.
The lines of research of this thesis are directed to the management of multiple data sources. More specifically, this thesis presents a set of new algorithms applied to two different areas in Computer Vision: i) video analysis and summarization, and ii) 3D range data. Both areas have been approached through the Generative Models framework, where similar protocols for representing data have been employed.
APA, Harvard, Vancouver, ISO, and other styles
26

Waldow, Walter E. "An Adversarial Framework for Deep 3D Target Template Generation." Wright State University / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=wright1597334881614898.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Glander, Tassilo. "Multi-scale representations of virtual 3D city models." Phd thesis, Universität Potsdam, 2012. http://opus.kobv.de/ubp/volltexte/2013/6411/.

Full text
Abstract:
Virtual 3D city and landscape models are the main subject investigated in this thesis. They digitally represent urban space and have many applications in different domains, e.g., simulation, cadastral management, and city planning. Visualization is an elementary component of these applications. Photo-realistic visualization with an increasingly high degree of detail leads to fundamental problems for comprehensible visualization. A large number of highly detailed and textured objects within a virtual 3D city model may create visual noise and overload the users with information. Objects are subject to perspective foreshortening and may be occluded or not displayed in a meaningful way, as they are too small. In this thesis we present abstraction techniques that automatically process virtual 3D city and landscape models to derive abstracted representations. These have a reduced degree of detail, while essential characteristics are preserved. After introducing definitions for model, scale, and multi-scale representations, we discuss the fundamentals of map generalization as well as techniques for 3D generalization. The first presented technique is a cell-based generalization of virtual 3D city models. It creates abstract representations that have a highly reduced level of detail while maintaining essential structures, e.g., the infrastructure network, landmark buildings, and free spaces. The technique automatically partitions the input virtual 3D city model into cells based on the infrastructure network. The single building models contained in each cell are aggregated to abstracted cell blocks. Using weighted infrastructure elements, cell blocks can be computed on different hierarchical levels, storing the hierarchy relation between the cell blocks. Furthermore, we identify initial landmark buildings within a cell by comparing the properties of individual buildings with the aggregated properties of the cell. 
For each block, the identified landmark building models are subtracted using Boolean operations and integrated in a photo-realistic way. Finally, for the interactive 3D visualization we discuss the creation of the virtual 3D geometry and its appearance styling through colors, labeling, and transparency. We demonstrate the technique with example data sets. Additionally, we discuss applications of generalization lenses and transitions between abstract representations. The second technique is a real-time rendering technique for geometric enhancement of landmark objects within a virtual 3D city model. Depending on the virtual camera distance, landmark objects are scaled to ensure their visibility within a specific distance interval while deforming their environment. First, in a preprocessing step a landmark hierarchy is computed; this is then used to derive distance intervals for the interactive rendering. At runtime, using the virtual camera distance, a scaling factor is computed and applied to each landmark. The scaling factor is interpolated smoothly at the interval boundaries using cubic Bézier splines. Non-landmark geometry that is near landmark objects is deformed with respect to a limited number of landmarks. We demonstrate the technique by applying it to a highly detailed virtual 3D city model and a generalized 3D city model. In addition we discuss an adaptation of the technique for non-linear projections and mobile devices. The third technique is a real-time rendering technique to create abstract 3D isocontour visualizations of virtual 3D terrain models. The virtual 3D terrain model is visualized as a layered or stepped relief. The technique works without preprocessing and, as it is implemented using programmable graphics hardware, can be integrated with minimal changes into common terrain rendering techniques. Consequently, the computation is done in the rendering pipeline for each vertex, each primitive, i.e., triangle, and each fragment.
For each vertex, the height is quantized to the nearest isovalue. For each triangle, the vertex configuration with respect to their isovalues is determined first. Using the configuration, the triangle is then subdivided. The subdivision forms a partial step geometry aligned with the triangle. For each fragment, the surface appearance is determined, e.g., depending on the surface texture, shading, and height-color-mapping. Flexible usage of the technique is demonstrated with applications from focus+context visualization, out-of-core terrain rendering, and information visualization. This thesis presents components for the creation of abstract representations of virtual 3D city and landscape models. Re-using visual language from cartography, the techniques enable users to build on their experience with maps when interpreting these representations. Simultaneously, characteristics of 3D geovirtual environments are taken into account by addressing and discussing, e.g., continuous scale, interaction, and perspective.
The subject of this work is virtual 3D city and landscape models, which represent urban space in digital form. They are used in a variety of applications and for different purposes, with visualization being an elementary component of these applications. Through realistic depiction and a high degree of detail, however, fundamental problems increasingly arise for comprehensible visualization. For example, the large number of highly detailed and textured objects of a virtual 3D city model leads to information overload for the viewer. This work presents abstraction techniques that address these problems. The aim of the techniques is the automatic transformation of virtual 3D city and landscape models into abstract representations that preserve important characteristics at a reduced degree of detail. After introducing basic concepts of model, scale, and multiple representations, theoretical foundations of map generalization as well as techniques for 3D generalization are considered. The first presented technique describes the cell-based generalization of virtual 3D city models. It creates abstract representations that are drastically reduced in their degree of detail while preserving the most important structures, e.g., the infrastructure network, landmark buildings, and open spaces. To this end, a fully automatic procedure decomposes the input city model into cells using the infrastructure network. For each cell, abstract building geometry is created by aggregating the contained individual buildings with their properties. By taking weighted elements of the infrastructure network into account, cell blocks can be computed at different hierarchical levels.
Furthermore, landmarks are given special consideration: based on statistical deviations of the properties of individual buildings from the aggregated properties of the cell, buildings may be identified as initial landmarks. Finally, the landmark buildings are cut out of the generalized blocks using Boolean operations and displayed realistically. The results of the technique can be used in interactive 3D visualization. The technique is demonstrated on various data sets and discussed with regard to its extensibility. The second presented technique is a real-time rendering technique for the geometric highlighting of landmarks within a virtual 3D city model: landmark models are enlarged depending on the virtual camera distance so that they remain visible within a specific distance interval, while their surroundings are deformed. In a preprocessing step, a landmark hierarchy is determined, from which the distance intervals for the interactive display are derived. At runtime, a dynamic scaling factor is determined for each landmark based on the virtual camera distance, scaling the landmark model to a visible size. The scaling factor is interpolated at the interval boundaries using cubic Bézier splines. For non-landmark geometry in the surroundings, the deformation is computed with respect to a limited set of landmarks. The suitability of the technique is demonstrated on various data sets and discussed with regard to its extensibility. The third presented technique is a real-time rendering technique that creates an abstract 3D isocontour representation of virtual 3D terrain models. For the terrain model, a stepped-relief representation is created for a set of user-selected height values. The technique works without preprocessing on the basis of programmable graphics hardware.
Accordingly, processing takes place in the pipeline per geometry vertex, per triangle, and per image fragment. Per vertex, the height is first quantized to the nearest isovalue. Per triangle, the configuration with respect to the isovalues of the three vertices is then determined. Based on this configuration, a geometric subdivision is performed so that a step section corresponding to the current triangle is created. Per fragment, the final appearance is defined, e.g., based on surface texture, shading, and height colouring. The diverse application possibilities are demonstrated with various applications. This work presents building blocks for the creation of abstract representations of virtual 3D city and landscape models. By drawing on cartographic visual language, users can build on existing experience when interpreting these representations. The characteristic properties of 3D geovirtual environments are taken into account by addressing and discussing, e.g., continuous scale, interaction, and perspective.
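The per-vertex and per-triangle stages of the isocontour technique described in this abstract can be sketched on the CPU. The thesis implements these stages on programmable graphics hardware; the following is only an illustrative approximation with invented names and isovalue spacing:

```python
def quantize_height(h, iso_step=100.0):
    """Per-vertex stage: snap a terrain height to the nearest isovalue."""
    return round(h / iso_step) * iso_step

def triangle_config(heights, iso_step=100.0):
    """Per-triangle stage: classify the three vertices by isovalue band.
    A triangle whose vertices all fall in one band stays flat; a triangle
    spanning several bands must be subdivided into a partial step geometry."""
    bands = {int(round(h / iso_step)) for h in heights}
    return "flat" if len(bands) == 1 else "subdivide"
```

The per-fragment stage (surface texture, shading, height-to-color mapping) is omitted here, since it is purely an appearance computation.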
APA, Harvard, Vancouver, ISO, and other styles
28

Rosato, Matthew J. "Applying conformal mapping to the vertex correspondence problem for 3D face models." Diss., Online access via UMI:, 2007.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
29

Mao, Bo. "Visualisation and Generalisation of 3D City Models." Licentiate thesis, KTH, Geoinformatics, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-24345.

Full text
Abstract:

3D city models have been widely used in different applications such as urban planning, traffic control, disaster management, etc. Effective visualisation of 3D city models at various scales is one of the pivotal techniques needed to implement these applications. In this thesis, a framework is proposed to visualise 3D city models both online and offline, using City Geography Markup Language (CityGML) and Extensible 3D (X3D) to represent and present the models. Then, generalisation methods are studied and tailored to create multi-scale 3D city scenes dynamically. Finally, the quality of the generalised 3D city models is evaluated by measuring their visual similarity to the original models.

 

In the proposed visualisation framework, 3D city models are stored in the CityGML format, which supports both geometric and semantic information. These CityGML files are parsed to create 3D scenes that can be visualised with existing 3D standards. Because the input and output of the framework are both standardised, it is possible to integrate city models from different sources and visualise them through different viewers.

 

Considering the complexity of the city objects, generalisation methods are studied to simplify the city models and increase the visualisation efficiency. In this thesis, the aggregation and typification methods are improved to simplify the 3D city models.

 

Multiple-representation data structures are required to store the generalisation information for dynamic visualisation. One of these is the CityTree, a novel structure to represent building groups, which is tested for building aggregation. Meanwhile, a Minimum Spanning Tree (MST) is employed to detect linear building group structures in the city models, which are then typified with different strategies. According to the experimental results, by using the CityTree, the generalised 3D city model creation time is reduced by more than 50%.

 

Different generalisation strategies lead to different outcomes. It is important to evaluate the quality of the generalised models. In this thesis a new evaluation method is proposed: visual features of the 3D city models are represented by Attributed Relation Graph (ARG) and their similarity distances are calculated with Nested Earth Mover’s Distance (NEMD) algorithm. The calculation results and user survey show that the ARG and NEMD methods can reflect the visual similarity between generalised city models and the original ones.
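The MST-based detection of building groups mentioned above can be sketched as follows. This is an illustrative reimplementation of the general idea (Prim's algorithm over building centroids, then cutting long edges), not the thesis's code; the distance threshold is invented:

```python
import math

def mst_edges(points):
    """Prim's algorithm over building centroids; returns MST edges
    as (i, j) index pairs."""
    n = len(points)
    in_tree = {0}
    edges = []
    while len(in_tree) < n:
        best = min(((i, j) for i in in_tree for j in range(n) if j not in in_tree),
                   key=lambda e: math.dist(points[e[0]], points[e[1]]))
        edges.append(best)
        in_tree.add(best[1])
    return edges

def building_groups(points, max_len=15.0):
    """Cut MST edges longer than a threshold; the remaining connected
    components are candidate building groups for typification."""
    parent = list(range(len(points)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i
    for i, j in mst_edges(points):
        if math.dist(points[i], points[j]) <= max_len:
            parent[find(i)] = find(j)
    groups = {}
    for i in range(len(points)):
        groups.setdefault(find(i), []).append(i)
    return sorted(groups.values())
```

For a row of three buildings followed by a distant pair, the long connecting edge is cut and two groups remain, each of which could then be typified (e.g. replaced by fewer representative buildings).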


ViSuCity Project
APA, Harvard, Vancouver, ISO, and other styles
30

Berg, Martin. "Pose Recognition for Tracker Initialization Using 3D Models." Thesis, Linköping University, Department of Electrical Engineering, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-11080.

Full text
Abstract:

In this thesis it is examined whether the pose of an object can be determined by a system trained with a synthetic 3D model of said object. A number of variations of methods using the P-channel representation are examined. Reference images are rendered from the 3D model, and features such as gradient orientation and color information are extracted and encoded into P-channels. The P-channel representation is then used to compute an overlapping channel representation, using B1-spline functions, which estimates a density function for the feature set. Experiments were conducted with this representation, as well as the raw P-channel representation, in conjunction with a number of distance measures and estimation methods.

It is shown that, with correct preprocessing and choice of parameters, the pose can be detected with some accuracy and, if not in real-time, fast enough to be useful in a tracker initialization scenario. It is also concluded that the success rate of the estimation depends heavily on the nature of the object.
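The overlapping channel representation underlying this approach can be illustrated with a generic soft-binning sketch. This is not the thesis's exact P-channel encoding; it only shows the core idea of first-order (triangular) B-spline channels, with invented names and defaults:

```python
def encode_channels(value, n_channels=8, lo=0.0, hi=1.0):
    """Soft-assign a scalar feature (e.g. a gradient orientation mapped
    to [lo, hi]) to overlapping triangular channels. Each value activates
    at most two neighbouring channels, and the coefficients sum to one."""
    t = (value - lo) / (hi - lo) * (n_channels - 1)
    return [max(0.0, 1.0 - abs(t - c)) for c in range(n_channels)]
```

Accumulating such channel vectors over many pixels yields a smooth density estimate of the feature distribution, which is what makes channel representations useful for comparing a rendered reference view against an observed image.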

APA, Harvard, Vancouver, ISO, and other styles
31

Vandeventer, Jason. "4D (3D Dynamic) statistical models of conversational expressions and the synthesis of highly-realistic 4D facial expression sequences." Thesis, Cardiff University, 2015. http://orca.cf.ac.uk/82405/.

Full text
Abstract:
In this thesis, a novel approach for modelling 4D (3D Dynamic) conversational interactions and synthesising highly-realistic expression sequences is described. To achieve these goals, a fully-automatic, fast, and robust pre-processing pipeline was developed, along with an approach for tracking and inter-subject registering 3D sequences (4D data). A method for modelling and representing sequences as single entities is also introduced. These sequences can be manipulated and used for synthesising new expression sequences. Classification experiments and perceptual studies were performed to validate the methods and models developed in this work. To achieve the goals described above, a 4D database of natural, synced, dyadic conversations was captured. This database is the first of its kind in the world. Another contribution of this thesis is the development of a novel method for modelling conversational interactions. Our approach takes into account the time-sequential nature of the interactions, and encompasses the characteristics of each expression in an interaction, as well as information about the interaction itself. Classification experiments were performed to evaluate the quality of our tracking, inter-subject registration, and modelling methods. To evaluate our ability to model, manipulate, and synthesise new expression sequences, we conducted perceptual experiments. For these perceptual studies, we manipulated modelled sequences by modifying their amplitudes, and had human observers evaluate the level of expression realism and image quality. To evaluate our coupled modelling approach for conversational facial expression interactions, we performed a classification experiment that differentiated predicted frontchannel and backchannel sequences, using the original sequences in the training set. We also used the predicted backchannel sequences in a perceptual study in which human observers rated the level of similarity of the predicted and original sequences. 
The results of these experiments help support our methods and our claim of our ability to produce 4D, highly-realistic expression sequences that compete with state-of-the-art methods.
APA, Harvard, Vancouver, ISO, and other styles
32

Kim, Taewoo. "A 3D XML-based modeling and simulation framework for dynamic models." [Gainesville, Fla.] : University of Florida, 2002. http://purl.fcla.edu/fcla/etd/UFE1000134.

Full text
Abstract:
Thesis (Ph. D.)--University of Florida, 2002.
Title from title page of source document. Document formatted into pages; contains xii, 132 p.; also contains graphics. Includes vita. Includes bibliographical references.
APA, Harvard, Vancouver, ISO, and other styles
33

Reichert, Thomas. "Development of 3D lattice models for predicting nonlinear timber joint behaviour." Thesis, Edinburgh Napier University, 2009. http://researchrepository.napier.ac.uk/Output/2827.

Full text
Abstract:
This work presents the development of a three-dimensional lattice material model for wood and its application to timber joints, including the potential strengthening benefit of second-order effects. A lattice of discrete elements was used to capture the heterogeneity and fracture behaviour, and the model results were compared to tested Sitka spruce (Picea sitchensis) specimens. Despite the general applicability of lattice models to timber, they are computationally demanding, due to the nonlinear solution and large number of degrees of freedom required. Ways to reduce the computational costs are investigated. Timber joints fail due to plastic deformation of the steel fastener(s), embedment, or brittle fracture of the timber. Lattice models, contrary to other modelling approaches such as continuum finite elements, have the advantage of taking into account brittle fracture, crack development and material heterogeneity by assigning certain strength and stiffness properties to individual elements. Furthermore, plastic hardening is considered to simulate timber embedment. The lattice is an arrangement of longitudinal, lateral and diagonal link elements with a tri-linear load-displacement relation. The lattice is used in areas with high stress gradients, and normal continuum elements are used elsewhere. Heterogeneity was accounted for by creating an artificial growth ring structure and density profile upon which the mean strength and stiffness properties were adjusted. Solution algorithms, such as Newton-Raphson, encounter problems with discrete elements for which 'snap-back' in the global load-displacement curves would occur. Thus, a specialised solution algorithm, developed by Jirasek and Bazant, was adopted to create a bespoke FE code in MATLAB that can handle the jagged behaviour of the load-displacement response, and extended to account for plastic deformation.
The model's input parameters were calibrated by determining the elastic stiffness from literature values and adjusting the strength, post-yield and heterogeneity parameters of lattice elements to match the load-displacement from laboratory tests under various loading conditions. Although problems with the modified solution algorithm were encountered, results of the model show the potential of lattice models to be used as a tool to predict load-displacement curves and fracture patterns of timber specimens.
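The tri-linear load-displacement relation of a lattice link element mentioned in the abstract can be sketched as a piecewise function. The parameter values here are purely illustrative, not calibrated to the thesis's Sitka spruce data:

```python
def trilinear_force(u, k_el=10.0, u_yield=1.0, k_hard=2.0, u_fail=3.0):
    """Tri-linear load-displacement law of a single lattice link element:
    linear-elastic up to u_yield, plastic hardening with a reduced
    stiffness up to u_fail, then brittle failure (load drops to zero).
    All stiffnesses and limits are hypothetical example values."""
    if u <= u_yield:
        return k_el * u                              # elastic branch
    if u <= u_fail:
        return k_el * u_yield + k_hard * (u - u_yield)  # hardening branch
    return 0.0                                       # element has failed
```

The sudden drop to zero at u_fail is exactly the kind of discontinuity that produces jagged, snap-back-prone global load-displacement curves and motivates the specialised solution algorithm.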
APA, Harvard, Vancouver, ISO, and other styles
34

Mutapcic, Emir. "Optimised part programs for excimer laser-ablation micromachining directly from 3D CAD models." Australian Digital Thesis Program, 2006. http://adt.lib.swin.edu.au/public/adt-VSWT20061117.154651.

Full text
Abstract:
Thesis (PhD) - Swinburne University of Technology, Industrial Research Institute Swinburne - 2006.
A thesis submitted to the Industrial Research Institute, Faculty of Engineering and Industrial Sciences, in fulfillment of the requirements for the degree of Doctor of Philosophy, Swinburne University of Technology - 2006. Typescript. Includes bibliographical references (p. 218-229).
APA, Harvard, Vancouver, ISO, and other styles
35

Svensson, Stina. "Representing and analyzing 3D digital shape using distance information /." Uppsala : Swedish Univ. of Agricultural Sciences (Sveriges lantbruksuniv.), 2001. http://epsilon.slu.se/avh/2001/91-576-6095-6.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Baecher, Moritz Niklaus. "From Digital to Physical: Computational Aspects of 3D Manufacturing." Thesis, Harvard University, 2013. http://dissertations.umi.com/gsas.harvard:11149.

Full text
Abstract:
The desktop publishing revolution of the 1980s is currently repeating itself in 3D in what is referred to as desktop manufacturing. Online services such as Shapeways have become available, making personalized manufacturing on cutting-edge additive manufacturing (AM) technologies accessible to a broad audience. Affordable desktop printers will soon take over, enabling people to fabricate
Engineering and Applied Sciences
APA, Harvard, Vancouver, ISO, and other styles
37

Chiu, Wai-kei, and 趙偉奇. "Hollowing and reinforcing 3D CAD models and representing multiple material objects for rapid prototyping." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2000. http://hub.hku.hk/bib/B29768470.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Chiu, Wai-kei. "Hollowing and reinforcing 3D CAD models and representing multiple material objects for rapid prototyping /." Hong Kong : University of Hong Kong, 2000. http://sunzi.lib.hku.hk/hkuto/record.jsp?B22050498.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Smith, Robert L. M. Eng Massachusetts Institute of Technology. "Afterimage Toon Blur : procedural generation of cartoon blur for 3D models in real time." Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/106376.

Full text
Abstract:
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2016.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (page 48).
One of the notable distinctions of traditional animation is the emphasis placed on motion. Objects in motion are often drawn with stylistic effects, such as speed lines or afterimages, that heighten the sense of movement. Unfortunately, at present, 2D animation makes much more use of these techniques than 3D animation does, which is especially clear in the stylistic differences between 2D and 3D videogames. For 3D videogame designers fond of the look and feel of traditional animation, it would be beneficial if 3D models could emulate that 2D style. To that end, I propose two techniques that use the location history of 3D models to construct, in real time, non-photorealistic motion blur effects in the vein of traditional 2D animation. With these procedural techniques, designers can retain the convenience of 3D models while still achieving an aesthetic normally constrained to 2D animation.
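The location-history idea can be sketched with a toy structure (hypothetical, not the thesis's implementation): recent positions are kept in a fixed-length buffer, and each frame the renderer draws translucent "ghost" copies of the model at past positions with fading opacity:

```python
from collections import deque

class AfterimageTrail:
    """Toy sketch of an afterimage effect driven by a model's location history."""

    def __init__(self, max_ghosts=5):
        # deque(maxlen=...) silently discards the oldest position when full
        self.history = deque(maxlen=max_ghosts)

    def record(self, position):
        # Called once per frame with the model's current world position.
        self.history.append(position)

    def ghosts(self):
        # Oldest positions get the lowest opacity; a renderer would draw a
        # translucent copy of the model at each (position, alpha) pair.
        n = len(self.history)
        return [(pos, (i + 1) / (n + 1)) for i, pos in enumerate(self.history)]
```

Because the buffer is updated per frame, the trail responds to motion in real time with constant memory, matching the procedural spirit described in the abstract.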
by Robert L. Smith.
M. Eng.
APA, Harvard, Vancouver, ISO, and other styles
40

Wang, Changling. "Sketch based 3D freeform object modeling with non-manifold data structure /." View Abstract or Full-Text, 2002. http://library.ust.hk/cgi/db/thesis.pl?MECH%202002%20WANGC.

Full text
Abstract:
Thesis (Ph. D.)--Hong Kong University of Science and Technology, 2002.
Includes bibliographical references (leaves 143-152). Also available in electronic version. Access restricted to campus users.
APA, Harvard, Vancouver, ISO, and other styles
41

Zouhar, Marek. "Tvorba a demonstrace 3D modelů pro VR." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2017. http://www.nusl.cz/ntk/nusl-363852.

Full text
Abstract:
This thesis deals with the concept of virtual reality: its history, present-day possibilities, available devices and technologies, and ways of creating assets such as models, textures and animations for virtual reality applications. The practical part of this work covers the design and creation of three-dimensional models, textures, animations and an environment for use in an interactive virtual reality application, as well as the design and creation of such an application to demonstrate their use.
APA, Harvard, Vancouver, ISO, and other styles
42

Hittner, Brian Edward. "Rendering large-scale terrain models and positioning objects in relation to 3D terrain." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2003. http://library.nps.navy.mil/uhtbin/hyperion-image/03Dec%5FHittner.pdf.

Full text
Abstract:
Thesis (M.S. in Modeling, Virtual Environments and Simulation)--Naval Postgraduate School, December 2003.
Thesis advisor(s): Don Brutzman, Curt Blais. Includes bibliographical references (p. 117-118). Also available online.
APA, Harvard, Vancouver, ISO, and other styles
43

ARMENISE, VALENTINA. "Estimation of the 3D Pose of objects in a scenecaptured with Kinect camera using CAD models." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-142471.

Full text
Abstract:
This report presents a method for estimating the 3D pose of an object in an image captured by recent structured-light sensors, such as the PrimeSense sensor commercialized under the name Kinect. In particular, the work focuses on estimating the pose of a target represented by a CAD model without first converting the model into a point cloud dataset, as the traditional approach requires. We study an iterative method that uses the geometric entities of which a CAD model is composed and that operates on point-to-face distances rather than the point-to-point distances used by traditional ICP; for this reason we refer to the method as "Iterative Closest Face" (ICF). The work demonstrates that the algorithm, in contrast with the traditional approach, is able to converge to the solution without any initial knowledge of the target's pose, although its efficiency could be improved by rough-alignment algorithms. To find the pose of an object in a scene composed of several objects, we adopt a clustering algorithm provided by the Point Cloud Library that extracts the points belonging to different objects and assigns them to separate clusters. For each cluster, ICF maximizes an objective function whose value is proportional to the quality of the match. The presented pose-estimation method is intended as an alternative to the ICP algorithm when the geometry of the CAD model representing the target is simple enough to render ICF more efficient than the traditional approach.
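The point-to-face distance at the core of ICF can be illustrated with a minimal, self-contained sketch (a hypothetical helper, not the thesis's code; for simplicity it measures distance to the triangle's supporting plane rather than the fully clamped point-to-triangle distance):

```python
def _sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def _dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def _cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def point_to_face_distance(p, tri):
    """Unsigned distance from point p to the supporting plane of triangle tri."""
    v0, v1, v2 = tri
    n = _cross(_sub(v1, v0), _sub(v2, v0))  # face normal from two edge vectors
    norm = _dot(n, n) ** 0.5
    return abs(_dot(_sub(p, v0), n)) / norm
```

Minimizing the sum of such distances over all sensed points and candidate CAD faces is the kind of objective an ICF-style iteration would optimize, in place of ICP's point-to-point residuals.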
APA, Harvard, Vancouver, ISO, and other styles
44

Lorenz, Haik. "Texturierung und Visualisierung virtueller 3D-Stadtmodelle." Phd thesis, Universität Potsdam, 2011. http://opus.kobv.de/ubp/volltexte/2011/5387/.

Full text
Abstract:
Im Mittelpunkt dieser Arbeit stehen virtuelle 3D-Stadtmodelle, die Objekte, Phänomene und Prozesse in urbanen Räumen in digitaler Form repräsentieren. Sie haben sich zu einem Kernthema von Geoinformationssystemen entwickelt und bilden einen zentralen Bestandteil geovirtueller 3D-Welten. Virtuelle 3D-Stadtmodelle finden nicht nur Verwendung als Mittel für Experten in Bereichen wie Stadtplanung, Funknetzplanung, oder Lärmanalyse, sondern auch für allgemeine Nutzer, die realitätsnah dargestellte virtuelle Städte in Bereichen wie Bürgerbeteiligung, Tourismus oder Unterhaltung nutzen und z. B. in Anwendungen wie GoogleEarth eine räumliche Umgebung intuitiv erkunden und durch eigene 3D-Modelle oder zusätzliche Informationen erweitern. Die Erzeugung und Darstellung virtueller 3D-Stadtmodelle besteht aus einer Vielzahl von Prozessschritten, von denen in der vorliegenden Arbeit zwei näher betrachtet werden: Texturierung und Visualisierung. Im Bereich der Texturierung werden Konzepte und Verfahren zur automatischen Ableitung von Fototexturen aus georeferenzierten Schrägluftbildern sowie zur Speicherung oberflächengebundener Daten in virtuellen 3D-Stadtmodellen entwickelt. Im Bereich der Visualisierung werden Konzepte und Verfahren für die multiperspektivische Darstellung sowie für die hochqualitative Darstellung nichtlinearer Projektionen virtueller 3D-Stadtmodelle in interaktiven Systemen vorgestellt. Die automatische Ableitung von Fototexturen aus georeferenzierten Schrägluftbildern ermöglicht die Veredelung vorliegender virtueller 3D-Stadtmodelle. Schrägluftbilder bieten sich zur Texturierung an, da sie einen Großteil der Oberflächen einer Stadt, insbesondere Gebäudefassaden, mit hoher Redundanz erfassen. Das Verfahren extrahiert aus dem verfügbaren Bildmaterial alle Ansichten einer Oberfläche und fügt diese pixelpräzise zu einer Textur zusammen. Durch Anwendung auf alle Oberflächen wird das virtuelle 3D-Stadtmodell flächendeckend texturiert. 
Der beschriebene Ansatz wurde am Beispiel des offiziellen Berliner 3D-Stadtmodells sowie der in GoogleEarth integrierten Innenstadt von München erprobt. Die Speicherung oberflächengebundener Daten, zu denen auch Texturen zählen, wurde im Kontext von CityGML, einem international standardisierten Datenmodell und Austauschformat für virtuelle 3D-Stadtmodelle, untersucht. Es wird ein Datenmodell auf Basis computergrafischer Konzepte entworfen und in den CityGML-Standard integriert. Dieses Datenmodell richtet sich dabei an praktischen Anwendungsfällen aus und lässt sich domänenübergreifend verwenden. Die interaktive multiperspektivische Darstellung virtueller 3D-Stadtmodelle ergänzt die gewohnte perspektivische Darstellung nahtlos um eine zweite Perspektive mit dem Ziel, den Informationsgehalt der Darstellung zu erhöhen. Diese Art der Darstellung ist durch die Panoramakarten von H. C. Berann inspiriert; Hauptproblem ist die Übertragung des multiperspektivischen Prinzips auf ein interaktives System. Die Arbeit stellt eine technische Umsetzung dieser Darstellung für 3D-Grafikhardware vor und demonstriert die Erweiterung von Vogel- und Fußgängerperspektive. Die hochqualitative Darstellung nichtlinearer Projektionen beschreibt deren Umsetzung auf 3D-Grafikhardware, wobei neben der Bildwiederholrate die Bildqualität das wesentliche Entwicklungskriterium ist. Insbesondere erlauben die beiden vorgestellten Verfahren, dynamische Geometrieverfeinerung und stückweise perspektivische Projektionen, die uneingeschränkte Nutzung aller hardwareseitig verfügbaren, qualitätssteigernden Funktionen wie z. B. Bildraumgradienten oder anisotroper Texturfilterung. Beide Verfahren sind generisch und unterstützen verschiedene Projektionstypen. Sie ermöglichen die anpassungsfreie Verwendung gängiger computergrafischer Effekte wie Stilisierungsverfahren oder prozeduraler Texturen für nichtlineare Projektionen bei optimaler Bildqualität.
Die vorliegende Arbeit beschreibt wesentliche Technologien für die Verarbeitung virtueller 3D-Stadtmodelle: Zum einen lassen sich mit den Ergebnissen der Arbeit Texturen für virtuelle 3D-Stadtmodelle automatisiert herstellen und als eigenständige Attribute in das virtuelle 3D-Stadtmodell einfügen. Somit trägt diese Arbeit dazu bei, die Herstellung und Fortführung texturierter virtueller 3D-Stadtmodelle zu verbessern. Zum anderen zeigt die Arbeit Varianten und technische Lösungen für neuartige Projektionstypen für virtueller 3D-Stadtmodelle in interaktiven Visualisierungen. Solche nichtlinearen Projektionen stellen Schlüsselbausteine dar, um neuartige Benutzungsschnittstellen für und Interaktionsformen mit virtuellen 3D-Stadtmodellen zu ermöglichen, insbesondere für mobile Geräte und immersive Umgebungen.
This thesis concentrates on virtual 3D city models that digitally encode objects, phenomena, and processes in urban environments. Such models have become core elements of geographic information systems and constitute a major component of geovirtual 3D worlds. Expert users make use of virtual 3D city models in various application domains, such as urban planning, radio-network planning, and noise immission simulation. Regular users utilize virtual 3D city models in domains such as tourism and entertainment. They intuitively explore photorealistic virtual 3D city models through mainstream applications such as GoogleEarth, which additionally enable users to extend virtual 3D city models with custom 3D models and supplemental information. Creation and rendering of virtual 3D city models comprise a large number of processes, of which texturing and visualization are the focus of this thesis. In the area of texturing, this thesis presents concepts and techniques for the automatic derivation of photo textures from georeferenced oblique aerial imagery and a concept for the integration of surface-bound data into virtual 3D city model datasets. In the area of visualization, this thesis presents concepts and techniques for multiperspective views and for high-quality rendering of nonlinearly projected virtual 3D city models in interactive systems. The automatic derivation of photo textures from georeferenced oblique aerial imagery is a refinement process for a given virtual 3D city model. Our approach uses oblique aerial imagery, since it provides citywide, highly redundant coverage of surfaces, particularly building facades. From this imagery, our approach extracts all views of a given surface and creates a photo texture by selecting the best view on a pixel level. By processing all surfaces, the virtual 3D city model becomes completely textured. 
This approach has been tested for the official 3D city model of Berlin and the model of the inner city of Munich accessible in GoogleEarth. The integration of surface-bound data, which include textures, into virtual 3D city model datasets has been performed in the context of CityGML, an international standard for the exchange and storage of virtual 3D city models. We derive a data model from a set of use cases and integrate it into the CityGML standard. The data model uses well-known concepts from computer graphics for data representation. Interactive multiperspective views of virtual 3D city models seamlessly supplement a regular perspective view with a second perspective. Such a construction is inspired by panorama maps by H. C. Berann and aims at increasing the amount of information in the image. Key aspect is the construction's use in an interactive system. This thesis presents an approach to create multiperspective views on 3D graphics hardware and exemplifies the extension of bird's eye and pedestrian views. High-quality rendering of nonlinearly projected virtual 3D city models focuses on the implementation of nonlinear projections on 3D graphics hardware. The developed concepts and techniques focus on high image quality. This thesis presents two such concepts, namely dynamic mesh refinement and piecewise perspective projections, which both enable the use of all graphics hardware features, such as screen space gradients and anisotropic texture filtering under nonlinear projections. Both concepts are generic and customizable towards specific projections. They enable the use of common computer graphics effects, such as stylization effects or procedural textures, for nonlinear projections at optimal image quality and interactive frame rates. This thesis comprises essential techniques for virtual 3D city model processing. 
First, the results of this thesis enable automated creation of textures for and their integration as individual attributes into virtual 3D city models. Hence, this thesis contributes to an improved creation and continuation of textured virtual 3D city models. Furthermore, the results provide novel approaches to and technical solutions for projecting virtual 3D city models in interactive visualizations. Such nonlinear projections are key components of novel user interfaces and interaction techniques for virtual 3D city models, particularly on mobile devices and in immersive environments.
APA, Harvard, Vancouver, ISO, and other styles
45

Guo, Jinjiang. "Contributions to objective and subjective visual quality assessment of 3d models." Thesis, Lyon, 2016. http://www.theses.fr/2016LYSEI099.

Full text
Abstract:
Dans le domaine de l’informatique graphique, les données tridimensionnelles, généralement représentées par des maillages triangulaires, sont employées dans une grande variété d’applications (par exemple, le lissage, la compression, le remaillage, la simplification, le rendu, etc.). Cependant, ces procédés introduisent inévitablement des artefacts qui altèrent la qualité visuelle des données 3D rendues. Ainsi, afin de guider perceptuellement les algorithmes de traitement, il y a un besoin croissant d'évaluations subjectives et objectives de la qualité visuelle à la fois performantes et adaptées, pour évaluer et prédire les artefacts visuels. Dans cette thèse, nous présentons d'abord une étude exhaustive sur les différentes sources d'artefacts associés aux données numériques graphiques, ainsi que l’évaluation objective et subjective de la qualité visuelle des artefacts. Ensuite, nous introduisons une nouvelle étude sur la qualité subjective conçue sur la base de l’évaluations de la visibilité locale des artefacts géométriques, dans laquelle il a été demandé à des observateurs de marquer les zones de maillages 3D qui contiennent des distorsions visibles. Les cartes de distorsion visuelle collectées sont utilisées pour illustrer plusieurs fonctionnalités perceptuelles du système visuel humain (HVS), et servent de vérité-terrain pour évaluer les performances des attributs et des mesures géométriques bien connus pour prédire la visibilité locale des distorsions. Notre deuxième étude vise à évaluer la qualité visuelle de modèles 3D texturés, subjectivement et objectivement. Pour atteindre ces objectifs, nous avons introduit 136 modèles traités avec à la fois des distorsions géométriques et de texture, mené une expérience subjective de comparaison par paires, et invité 101 sujets pour évaluer les qualités visuelles des modèles à travers deux protocoles de rendu. 
Motivés par les opinions subjectives collectées, nous proposons deux mesures de qualité visuelle objective pour les maillages texturés, en se fondant sur les combinaisons optimales des mesures de qualité issues de la géométrie et de la texture. Ces mesures de perception proposées surpassent leurs homologues en termes de corrélation avec le jugement humain
In the computer graphics realm, three-dimensional graphical data, generally represented by triangular meshes, have become commonplace and are deployed in a variety of application processes (e.g., smoothing, compression, remeshing, simplification, rendering, etc.). However, these processes inevitably introduce artifacts, altering the visual quality of the rendered 3D data. Thus, in order to perceptually drive the processing algorithms, there is an increasing need for efficient and effective subjective and objective visual quality assessments to evaluate and predict these visual artifacts. In this thesis, we first present a comprehensive survey of the different sources of artifacts in digital graphics and of current objective and subjective visual quality assessments of those artifacts. We then introduce a newly designed subjective quality study based on evaluations of the local visibility of geometric artifacts, in which observers were asked to mark areas of 3D meshes that contain noticeable distortions. The collected perceived distortion maps are used to illustrate several perceptual functionalities of the human visual system (HVS), and serve as ground truth for evaluating how well known geometric attributes and metrics predict the local visibility of distortions. Our second study aims to evaluate the visual quality of textured 3D models both subjectively and objectively. To achieve these goals, we introduced 136 models processed with both geometric and texture distortions, conducted a paired-comparison subjective experiment, and invited 101 subjects to evaluate the visual quality of the models under two rendering protocols. Driven by the collected subjective opinions, we propose two objective visual quality metrics for textured meshes, relying on optimal combinations of geometry and texture quality measures. These proposed perceptual metrics outperform their counterparts in terms of correlation with human judgment.
APA, Harvard, Vancouver, ISO, and other styles
46

Mian, Ajmal Saeed. "Representations and matching techniques for 3D free-form object and face recognition." University of Western Australia. School of Computer Science and Software Engineering, 2007. http://theses.library.uwa.edu.au/adt-WU2007.0046.

Full text
Abstract:
[Truncated abstract] The aim of visual recognition is to identify objects in a scene and estimate their pose. Object recognition from 2D images is sensitive to illumination, pose, clutter and occlusions. Object recognition from range data on the other hand does not suffer from these limitations. An important paradigm of recognition is model-based whereby 3D models of objects are constructed offline and saved in a database, using a suitable representation. During online recognition, a similar representation of a scene is matched with the database for recognizing objects present in the scene . . . The tensor representation is extended to automatic and pose invariant 3D face recognition. As the face is a non-rigid object, expressions can significantly change its 3D shape. Therefore, the last part of this thesis investigates representations and matching techniques for automatic 3D face recognition which are robust to facial expressions. A number of novelties are proposed in this area along with their extensive experimental validation using the largest available 3D face database. These novelties include a region-based matching algorithm for 3D face recognition, a 2D and 3D multimodal hybrid face recognition algorithm, fully automatic 3D nose ridge detection, fully automatic normalization of 3D and 2D faces, a low cost rejection classifier based on a novel Spherical Face Representation, and finally, automatic segmentation of the expression insensitive regions of a face.
APA, Harvard, Vancouver, ISO, and other styles
47

Brown, Steven W. "Interactive Part Selection for Mesh and Point Models Using Hierarchical Graph-cut Partitioning." Diss., CLICK HERE for online access, 2008. http://contentdm.lib.byu.edu/ETD/image/etd2420.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Roussellet, Valentin. "Implicit muscle models for interactive character skinning." Thesis, Toulouse 3, 2018. http://www.theses.fr/2018TOU30055/document.

Full text
Abstract:
En animation de personnages 3D, la déformation de surface, ou skinning, est une étape cruciale. Son rôle est de déformer la représentation surfacique d'un personnage pour permettre son rendu dans une succession de poses spécifiées par un animateur. La plausibilité et la qualité visuelle du résultat dépendent directement de la méthode de skinning choisie. Sa rapidité d'exécution et sa simplicité d'utilisation sont également à prendre en compte pour rendre possible son usage interactif lors des sessions de production des artistes 3D. Les différentes méthodes de skinning actuelles se divisent en trois catégories. Les méthodes géométriques sont rapides et simples d'utilisation, mais leur résultats manquent de plausibilité. Les approches s'appuyant sur des exemples produisent des résultats réalistes, elles nécessitent en revanche une base de données d'exemples volumineuse, et le contrôle de leur résultat est fastidieux. Enfin, les algorithmes de simulation physique sont capables de modéliser les phénomènes dynamiques les plus complexes au prix d'un temps de calcul souvent prohibitif pour une utilisation interactive. Les travaux décrits dans cette thèse s'appuient sur Implicit Skinning, une méthode géométrique corrective utilisant une représentation implicite des surfaces, qui permet de résoudre de nombreux problèmes rencontrés avec les méthodes géométriques classiques, tout en gardant des performances permettant son usage interactif. La contribution principale de ces travaux est un modèle d'animation qui prend en compte les effets des muscles des personnages et de leur interactions avec d'autres éléments anatomiques, tout en bénéficiant des avantages apportés par Implicit Skinning. Les muscles sont représentés par une surface d'extrusion le long d'axes centraux. Les axes des muscles sont contrôlés par une méthode de simulation physique simplifiée. 
Cette représentation permet de modéliser les collisions des muscles entre eux et avec les os, d'introduire des effets dynamiques tels que rebonds et secousses, tout en garantissant la conservation du volume, afin de représenter le comportement réel des muscles. Ce modèle produit des déformations plus plausibles et dynamiques que les méthodes géométriques de l'état de l'art, tout en conservant des performances suffisantes pour permettre son usage dans une session d'édition interactive. Elle offre de plus aux infographistes un contrôle intuitif sur la forme des muscles pour que les déformations obtenues se conforment à leur vision artistique
Surface deformation, or skinning, is a crucial step in 3D character animation. Its role is to deform the surface representation of a character so it can be rendered in the succession of poses specified by an animator. The quality and plausibility of the displayed results depend directly on the properties of the skinning method, but speed and simplicity are also important criteria for use in interactive editing sessions. Current skinning methods can be divided into three categories. Geometric methods are fast and simple to use, but their results lack plausibility. Example-based approaches produce realistic results, yet they require a large database of examples and remain tedious to edit. Finally, physical simulations can model the most complex dynamic phenomena, but at a very high computational cost, making their interactive use impractical. The work presented in this thesis builds on Implicit Skinning, a corrective geometric approach that uses implicit surfaces to solve many issues of standard geometric skinning methods while remaining fast enough for interactive use. The main contribution of this work is an animation model that adds anatomical plausibility to a character by representing muscle deformations and their interactions with other anatomical features, while benefiting from the advantages of Implicit Skinning. Muscles are represented by an extrusion surface along a central axis. These axes are driven by a simplified physics simulation, introducing dynamic effects such as jiggling. The muscle model guarantees volume conservation, a property of real muscles. This model adds the plausibility and dynamics lacking in state-of-the-art geometric methods at a moderate computational cost, which enables its interactive use. In addition, it offers animators intuitive control over muscle shape, enabling them to match the results to their artistic vision.
APA, Harvard, Vancouver, ISO, and other styles
49

Al-Douri, Firas A. Salman. "Impact of utilizing 3D digital urban models on the design content of urban design plans in US cities." Texas A&M University, 2006. http://hdl.handle.net/1969.1/4324.

Full text
Abstract:
Some experts suggest that urban design plans in US cities may lack adequate coverage of the essential design aspects, particularly three-dimensional design aspects of the physical environment. Digital urban models and information technology tools may help designers visualize and interact with design alternatives, large urban data sets, and 3D information more effectively, thus correcting this problem. However, there is a limited understanding of the impact that these models may have on the quality of the design product and consequently hesitation about the appropriate methods of their usage. These suggest a need for research into how the usage of digital models can affect the extent with which urban design plans cover the essential design aspects. This research discusses the role digital models can play in supporting designers in addressing the essential design aspects. The research objective is to understand how the usage of digital models affects the coverage of the essential design aspects. The research applies a novel perspective of examining both the methods of modeling-supported urban design and the design content of urban design to attempt to reveal a correlation or causal relation. Using the mixed method approach, this research includes three phases. The first, literature review, focused on reviewing secondary sources to construct theoretical propositions about the impact of digital modeling on urban design against which empirical observations were compared. Using qualitative content analysis, the second phase involved examining 14 plans to assess their design content and conducting structured interviews with the designers of four selected plans. The third phase involved sending questionnaire forms to designers in the planning departments and firms that developed the examined plans. The analysis results were compared with the theoretical propositions and discussed to derive conclusions. 
The extent of design aspects coverage was found to be correlated with the usage of digital modeling. Computational plans appear to have achieved a higher level of design aspects coverage and a better translation of design goals and objectives. In those plans, 3D urban-wide design aspects were addressed more effectively than in conventional plans. The effective usage of the model's functions appears to improve the quality of the decision-making process through increasing designers' visualization and analytical capabilities, and providing a platform for communicating design ideas among and across design teams. The results helped suggest a methodological framework for the best practices of modeling usage to improve the design content.
APA, Harvard, Vancouver, ISO, and other styles
50

Abayowa, Bernard Olushola. "Automatic Registration of Optical Aerial Imagery to a LiDAR Point Cloud for Generation of Large Scale City Models." University of Dayton / OhioLINK, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1372508452.

Full text
APA, Harvard, Vancouver, ISO, and other styles