Dissertations / Theses on the topic '3D computer models'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the top 50 dissertations / theses for your research on the topic '3D computer models.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.
Zhu, Junyi S. M. Massachusetts Institute of Technology. "A software pipeline for converting 3D models into 3D breadboards." Thesis, Massachusetts Institute of Technology, 2019. https://hdl.handle.net/1721.1/122732.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 44-46).
3D breadboards are a new form of physical prototype with breadboard functionality integrated directly into their surfaces. 3D breadboards offer the flexibility and re-configurability of breadboards while also integrating well with the shape of the prototype. As a result, 3D breadboards can be used to test functionality directly in the context of the actual physical form. Our custom 3D editor plugin supports designers in the process of converting 3D models into 3D breadboards. Our plugin first generates a pinhole pattern on the surface of the 3D model; designers can then connect the holes into power lines and terminal strips depending on the desired layout. To fabricate the 3D breadboards, designers only have to 3D print the housing and then fill the wire channels with conductive silicone. We explore a number of computational design and computer graphics approaches to convert arbitrary 3D models into 3D breadboards. We demonstrate a range of different interactive prototypes designed with our software system, and report on a user study with six participants to validate the concept of integrating breadboards into physical prototypes.
by Junyi Zhu.
S.M. in Computer Science and Engineering, Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science
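As a rough illustration of the pinhole-pattern step mentioned in the abstract above, the sketch below lays out hole centres on a single flat face. It is a minimal sketch, not the thesis's plugin: the 2.54 mm pitch and the planar-face simplification are assumptions.

```python
import numpy as np

def pinhole_grid(width_mm: float, height_mm: float, pitch_mm: float = 2.54):
    """Return an (N, 2) array of hole centres covering a flat rectangular face.

    Assumes the conventional 2.54 mm breadboard pitch; a real pipeline would
    instead distribute holes over a curved 3D surface.
    """
    xs = np.arange(pitch_mm / 2, width_mm, pitch_mm)
    ys = np.arange(pitch_mm / 2, height_mm, pitch_mm)
    gx, gy = np.meshgrid(xs, ys)
    return np.column_stack([gx.ravel(), gy.ravel()])

# Example: hole centres for a 30 mm x 20 mm face.
print(pinhole_grid(30.0, 20.0).shape)
```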
Siljedahl, Rasmus. "3D Conversion from CAD models to polygon models." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-129881.
Bici, Mehmet Oguz. "Robust Transmission of 3D Models." PhD thesis, METU, 2010. http://etd.lib.metu.edu.tr/upload/12612690/index.pdf.
Full textMagnusson, Henrik. "Integrated generic 3D visualization of Modelica models." Thesis, Linköping University, Department of Computer and Information Science, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-15453.
OpenModelica is a complete environment for developing and simulating Modelica models based on free software. It is promoted and developed by the OpenModelica Consortium. This thesis details a method for describing and consequently displaying visualizations of Modelica models in OMNotebook, an application in the OpenModelica suite where models can be written and simulated in a document mixed with text, images and plots. Two different approaches are discussed; one based on Modelica annotations and one based on creating a simple object hierarchy which can be connected to existing models. Trial implementations are done which make it possible to discard the annotation approach, and show that an object based solution is the one best suited for a complete implementation. It is expanded into a working 3D visualization solution, embedded in OMNotebook.
Aubry, Mathieu. "Representing 3D models for alignment and recognition." Thesis, Paris, Ecole normale supérieure, 2015. http://www.theses.fr/2015ENSU0006/document.
Thanks to the success of 3D reconstruction algorithms and the development of online tools for computer-aided design (CAD), the number of publicly available 3D models has grown significantly in recent years, and will continue to do so. This thesis investigates representations of 3D models for 3D shape matching, instance-level 2D-3D alignment, and category-level 2D-3D recognition. The geometry of a 3D shape can be represented almost completely by the eigenfunctions and eigenvalues of the Laplace-Beltrami operator on the shape. We use this mathematically elegant representation to characterize points on the shape, with a new notion of scale. This 3D point signature can be interpreted in the framework of quantum mechanics and we call it the Wave Kernel Signature (WKS). We show that it has advantages with respect to the previous state-of-the-art shape descriptors, and can be used for 3D shape matching, segmentation and recognition. A key element for understanding images is the ability to align an object depicted in an image to its given 3D model. We tackle this instance-level 2D-3D alignment problem for arbitrary 2D depictions including drawings, paintings, and historical photographs. This is a tremendously difficult task as the appearance and scene structure in the 2D depictions can be very different from the appearance and geometry of the 3D model, e.g., due to the specific rendering style, drawing error, age, lighting or change of seasons. We represent the 3D model of an entire architectural site by a set of visual parts learned from rendered views of the site. We then develop a procedure to match those scene parts, which we call 3D discriminative visual elements, to the 2D depiction of the architectural site. We validate our method on a newly collected dataset of non-photographic and historical depictions of three architectural sites. We extend this approach to describe not only a single architectural site but an entire object category, represented by a large collection of 3D CAD models. We develop a category-level 2D-3D alignment method that not only detects objects in cluttered images but also identifies their approximate style and viewpoint. We evaluate our approach both qualitatively and quantitatively on a subset of the challenging Pascal VOC 2012 images of the "chair" category using a reference library of 1394 CAD models downloaded from the Internet.
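For reference, the Wave Kernel Signature mentioned above is usually written in terms of the Laplace-Beltrami eigenvalues and eigenfunctions; the formulation below is the standard one from the general literature, not quoted from the thesis text.

```latex
% Wave Kernel Signature of a surface point x at energy scale e,
% with Laplace-Beltrami eigenvalues \lambda_k and eigenfunctions \phi_k
% (sigma controls the width of the log-normal energy window).
\mathrm{WKS}(x,e) = C_e \sum_{k} \phi_k(x)^2
    \exp\!\left(-\frac{(e-\log\lambda_k)^2}{2\sigma^2}\right),
\qquad
C_e = \left(\sum_{k} \exp\!\left(-\frac{(e-\log\lambda_k)^2}{2\sigma^2}\right)\right)^{-1}
```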
Gil Camacho, Carlos. "Part Detection in Online-Reconstructed 3D Models." Thesis, Örebro universitet, Institutionen för naturvetenskap och teknik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:oru:diva-51600.
Full textBuchholz, Henrik. "Real-time visualization of 3D city models." Phd thesis, Universität Potsdam, 2006. http://opus.kobv.de/ubp/volltexte/2007/1333/.
Full textShlyakhter, Ilya 1975, and Max 1976 Rozenoer. "Reconstruction of 3D tree models from instrumented photographs." Thesis, Massachusetts Institute of Technology, 1999. http://hdl.handle.net/1721.1/80136.
Full textAlso issued with order of names reversed on t.p.
Includes bibliographical references (leaves 33-36).
by Ilya Shlyakhter and Max Rozenoer.
M.Eng.
Huang, Jennifer 1980. "Component-based face recognition with 3D morphable models." Thesis, Massachusetts Institute of Technology, 2002. http://hdl.handle.net/1721.1/87403.
Includes bibliographical references (p. 36-38).
by Jennifer Huang.
M.Eng. and S.B.
Marks, Tim K. "Facing uncertainty: 3D face tracking and learning with generative models." Connect to a 24 p. preview or request complete full text in PDF format. Access restricted to UC campuses, 2006. http://wwwlib.umi.com/cr/ucsd/fullcit?p3196545.
Title from first page of PDF file (viewed February 27, 2006). Available via ProQuest Digital Dissertations. Vita. Includes bibliographical references (p. 143-148).
Lindberg, Mimmi. "Forensic Validation of 3D models." Thesis, Linköpings universitet, Datorseende, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-171159.
Full textSimek, Kyle. "Branching Gaussian Process Models for Computer Vision." Diss., The University of Arizona, 2016. http://hdl.handle.net/10150/612094.
Full textZhao, Shu Jie. "Line drawings abstraction from 3D models." Thesis, University of Macau, 2009. http://umaclib3.umac.mo/record=b2130124.
Full textBHAT, RAVINDRA K. "VISUALIZATION USING COMPUTER GENERATED 3D MODELS AND THEIR APPLICATIONS IN ARCHITECTURAL PRACTICE." The University of Arizona, 1994. http://hdl.handle.net/10150/555416.
Full textBayar, Hakan. "Texture Mapping By Multi-image Blending For 3d Face Models." Master's thesis, METU, 2007. http://etd.lib.metu.edu.tr/upload/2/12609063/index.pdf.
Full textDuval, Thierry. "Models for design, implementation and deployment of 3D Collaborative Virtual Environments." Habilitation à diriger des recherches, Université Rennes 1, 2012. http://tel.archives-ouvertes.fr/tel-00764830.
Full textWeaver, Alexandra Alden. "Gaming Reality: Real-World 3D Models in Interactive Media." Scholarship @ Claremont, 2015. http://scholarship.claremont.edu/scripps_theses/619.
Full textSundberg, Simon. "Evolving digital 3D models using interactive genetic algorithm." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-177580.
Full textYao, Miaojun. "3D Printable Designs of Rigid and Deformable Models." The Ohio State University, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=osu1502906675481174.
Full textVillanueva, Aylagas Mónica. "Reconstruction and recommendation of realistic 3D models using cGANs." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-231492.
Three-dimensional modelling is the process of creating a representation of a surface or object in three dimensions using specialised software, where the modeller scans a real object into a point cloud, creates an entirely new surface, or edits the chosen representation. This process can be challenging because of factors such as the complexity of the 3D modelling software or the number of dimensions involved. This work proposes a framework that recommends three types of reconstructions of an incomplete or rough 3D model using Generative Adversarial Networks (GANs). These reconstructions follow the distribution of real data, resemble the user's model, and stay close to the dataset, while preserving the respective characteristics of the input. The main advantage of this approach is that it accepts 3D models as input to the GAN instead of latent vectors, which removes the need to train an extra network to project the model into the latent space. The systems are evaluated both quantitatively and qualitatively. The quantitative measure relies on the Intersection over Union (IoU) metric, while the qualitative evaluation is measured by a user study. Experiments show that it is difficult to create a system that generates realistic models following the distribution of the dataset, since users have differing opinions on what is realistic. Similarity between the user input and the reconstruction is achieved well and is, in fact, the feature most appreciated by modellers.
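The Intersection over Union measure cited above has a direct form on voxel occupancy grids; the following is a minimal sketch, assuming binary NumPy occupancy grids rather than the thesis's actual data pipeline.

```python
import numpy as np

def voxel_iou(a: np.ndarray, b: np.ndarray) -> float:
    """Intersection over Union of two boolean occupancy grids of equal shape."""
    a, b = a.astype(bool), b.astype(bool)
    union = np.logical_or(a, b).sum()
    if union == 0:
        return 1.0  # both grids empty: treat as identical
    return np.logical_and(a, b).sum() / union

# Example with two random 32^3 occupancy grids.
rng = np.random.default_rng(0)
g1 = rng.random((32, 32, 32)) > 0.5
g2 = rng.random((32, 32, 32)) > 0.5
print(round(voxel_iou(g1, g2), 3))
```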
Mazzolini, Ryan. "Procedurally generating surface detail for 3D models using voxel-based cellular automata." Master's thesis, University of Cape Town, 2016. http://hdl.handle.net/11427/20502.
Full textTrapp, Matthias. "Analysis and exploration of virtual 3D city models using 3D information lenses." Master's thesis, Universität Potsdam, 2007. http://opus.kobv.de/ubp/volltexte/2008/1393/.
This diploma thesis covers real-time rendering techniques for 3D information lenses based on the focus-and-context metaphor. Their applicability to objects and structures of virtual 3D city models is analysed, designed, implemented, and evaluated. In contrast to 3D terrain models, focus-and-context visualisation for virtual 3D city models has hardly been investigated, even though targeted visualisation of context-related data about objects is of great importance for interactive exploration and analysis. Programmable graphics hardware enables new lens techniques that aim to increase the perceptual and cognitive quality of the visualisation compared to classical perspective projections. For a selection of 3D information lenses, integration into a 3D scene graph system is carried out: • Occlusion lenses modify the appearance of virtual 3D city model objects in order to resolve occlusions and thereby ease navigation. • Best-view lenses show city model objects in a priority-defined manner and convey meta-information about virtual 3D city models, supporting their exploration and navigation. • Colour and deformation lenses modify the appearance and the geometry of regions of a 3D city model to enhance their perception. The techniques for 3D information lenses presented in this work, and their application to virtual 3D city models, demonstrate their potential for interactive visualisation and form a basis for further development.
Chen, Xiaochen. "Tracking vertex flow on 3D dynamic facial models." Diss., Online access via UMI:, 2008.
Eriksson, Oliver, and William Lindblom. "Comparing Perception of Animated Imposters and 3D Models." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-280333.
In modern 3D games and films, rendering large numbers of characters is commonplace, which can be costly in terms of rendering time. As character complexity grows, so does the need for optimisation. Level of Detail (LOD) techniques are used to optimise rendering by reducing the geometric complexity of a scene. One such technique reduces a complex character to a textured plane, a so-called imposter. Previous research has shown that imposters are a good way to optimise 3D rendering and can be used without reducing visual fidelity compared to 3D models if they are rendered statically, up to a one-to-one pixel-per-texel ratio. In this report we look further at imposters as an LOD technique by investigating how animation, in particular rotation, of imposters at different distances affects human perception when crowds of characters are observed. The results for static, non-rotating characters are in line with previous research and show that imposters are not distinguishable from 3D models when standing still. When rotation is introduced, slow rotation turns out to be a dominant factor, compared to distance, in revealing crowds of imposters. On the other hand, the results suggest that fast motion could be used to hide shortcomings of pre-rendered imposters, even at small distances where stationary imposters might otherwise be distinguishable.
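One way such findings could feed into a renderer is a distance- and rotation-aware LOD switch; the sketch below is only an illustration of that idea, with hypothetical thresholds and API that are not taken from the thesis.

```python
from dataclasses import dataclass

@dataclass
class CharacterState:
    distance: float          # metres from the camera
    angular_speed: float     # radians per second of in-place rotation

def choose_representation(state: CharacterState,
                          far_threshold: float = 50.0,
                          slow_rotation: float = 0.5) -> str:
    """Pick 'mesh' or 'imposter' for a crowd character.

    Hypothetical rule inspired by the findings above: imposters are safe when
    the character is far away, static, or rotating quickly; slow rotation
    close to the camera is the revealing case.
    """
    if state.distance < far_threshold and 0.0 < state.angular_speed < slow_rotation:
        return "mesh"  # slow rotation up close would expose the imposter
    return "imposter"

print(choose_representation(CharacterState(distance=12.0, angular_speed=0.2)))
print(choose_representation(CharacterState(distance=80.0, angular_speed=0.2)))
```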
Orriols, Majoral Xavier. "Generative Models for Video Analysis and 3D Range Data Applications." Doctoral thesis, Universitat Autònoma de Barcelona, 2004. http://hdl.handle.net/10803/3037.
The majority of problems in Computer Vision do not contain a direct relation between the stimuli provided by a general purpose sensor and its corresponding perceptual category. A complex learning task must be involved in order to provide such a connection. In fact, the basic forms of energy, and their possible combinations are a reduced number compared to the infinite possible perceptual categories corresponding to objects, actions, relations among objects... Two main factors determine the level of difficulty of a specific problem: i) The different levels of information that are employed and ii) The complexity of the model that is intended to explain the observations.
The choice of an appropriate representation for the data takes on significant relevance when it comes to dealing with invariances, since these usually imply that the number of intrinsic degrees of freedom in the data distribution is lower than the number of coordinates used to represent it. Therefore, the decomposition into basic units (model parameters) and the change of representation mean that a complex problem can be transformed into a manageable one. This simplification of the estimation problem has to rely on a proper mechanism of combination of those primitives in order to give an optimal description of the global complex model. This thesis shows how Latent Variable Models reduce dimensionality, take into account the internal symmetries of a problem, provide a manner of dealing with missing data and make it possible to predict new observations.
The lines of research of this thesis are directed to the management of multiple data sources. More specifically, this thesis presents a set of new algorithms applied to two different areas in Computer Vision: i) video analysis and summarization, and ii) 3D range data. Both areas have been approached through the Generative Models framework, where similar protocols for representing data have been employed.
Waldow, Walter E. "An Adversarial Framework for Deep 3D Target Template Generation." Wright State University / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=wright1597334881614898.
Full textGlander, Tassilo. "Multi-scale representations of virtual 3D city models." Phd thesis, Universität Potsdam, 2012. http://opus.kobv.de/ubp/volltexte/2013/6411/.
This work deals with virtual 3D city and landscape models, which capture urban space in digital representations. They are used in a wide range of applications and for different purposes, with visualisation as an elementary component of these applications. However, realistic depiction and a high level of detail increasingly create fundamental problems for comprehensible visualisation. For example, the large number of detailed, textured objects in a virtual 3D city model leads to information overload for the viewer. This work presents abstraction techniques that address these problems. Their goal is the automatic transformation of virtual 3D city and landscape models into abstract representations that preserve important characteristics at a reduced level of detail. After introducing basic concepts of model, scale, and multiple representations, theoretical foundations of map generalisation and techniques for 3D generalisation are considered. The first technique presented describes cell-based generalisation of virtual 3D city models. It produces abstract representations that are drastically reduced in detail while preserving the most important structures, e.g. the infrastructure network, landmark buildings, and open spaces. To this end, a fully automatic procedure decomposes the input city model into cells using the infrastructure network. For each cell, abstract building geometry is generated by aggregating the contained individual buildings and their attributes. By taking weighted elements of the infrastructure network into account, cell blocks can be computed at different hierarchy levels. Landmarks are additionally treated separately: based on statistical deviations of an individual building's attributes from the aggregated attributes of its cell, buildings may be identified as initial landmarks. Finally, the landmark buildings are cut out of the generalised blocks with Boolean operations and rendered realistically. The results of the technique can be used in interactive 3D rendering. The technique is demonstrated on several datasets and discussed with respect to extensibility. The second technique presented is a real-time rendering technique for geometric highlighting of landmarks within a virtual 3D city model: landmark models are enlarged depending on the virtual camera distance so that they remain visible within a specific distance interval, while their surroundings are deformed. In a preprocessing step, a landmark hierarchy is determined, from which the distance intervals for interactive rendering are derived. At runtime, a dynamic scaling factor is determined per landmark from the virtual camera distance, scaling the landmark model to a visible size; at the interval boundaries the scaling factor is interpolated cubically. For non-landmark geometry in the surroundings, the deformation is computed with respect to a limited set of landmarks. The suitability of the technique is demonstrated on several datasets and discussed with respect to extensibility.
The third technique presented is a real-time rendering technique that produces an abstract 3D iso-contour representation of virtual 3D terrain models. For the terrain model, a stepped-relief representation is generated for a set of user-selected elevation values. The technique works without preprocessing, on the basis of programmable graphics hardware; accordingly, processing takes place in the pipeline per vertex, per triangle, and per fragment. Per vertex, the elevation is first quantised to the nearest iso-value. Per triangle, the configuration with respect to the iso-values of its three vertices is then determined; based on this configuration, a geometric subdivision is performed so that a step cut-out corresponding to the current triangle is created. Per fragment, the final appearance is finally defined, e.g. by surface texture, shading, and elevation-based colouring. The wide range of possible uses is demonstrated with several applications. The work presents building blocks for creating abstract representations of virtual 3D city and landscape models. By orienting itself on cartographic visual language, it allows users to draw on existing interpretation experience, while taking the characteristic properties of 3D geovirtual environments into account, e.g. by treating and discussing continuous scale, interaction, and perspective.
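The distance-dependent landmark scaling described for the second technique can be pictured with a cubic ease at the interval boundaries; the following sketch uses assumed parameters and is not the thesis's implementation.

```python
def smoothstep(edge0: float, edge1: float, x: float) -> float:
    """Cubic Hermite interpolation (3t^2 - 2t^3), clamped to [0, 1]."""
    t = min(max((x - edge0) / (edge1 - edge0), 0.0), 1.0)
    return t * t * (3.0 - 2.0 * t)

def landmark_scale(camera_dist: float, near: float, far: float,
                   max_scale: float = 3.0, fade: float = 50.0) -> float:
    """Scale factor that keeps a landmark visible inside [near, far].

    The landmark is enlarged up to max_scale within the interval, and the
    factor is eased back to 1.0 over a 'fade' band at both boundaries.
    """
    ease_in = smoothstep(near - fade, near, camera_dist)
    ease_out = 1.0 - smoothstep(far, far + fade, camera_dist)
    return 1.0 + (max_scale - 1.0) * ease_in * ease_out

for d in (100.0, 300.0, 900.0):
    print(d, round(landmark_scale(d, near=200.0, far=800.0), 2))
```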
Rosato, Matthew J. "Applying conformal mapping to the vertex correspondence problem for 3D face models." Diss., Online access via UMI:, 2007.
Mao, Bo. "Visualisation and Generalisation of 3D City Models." Licentiate thesis, KTH, Geoinformatics, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-24345.
3D city models have been widely used in different applications such as urban planning, traffic control, and disaster management. Effective visualisation of 3D city models at various scales is one of the pivotal techniques needed to implement these applications. In this thesis, a framework is proposed to visualise 3D city models both online and offline, using City Geography Markup Language (CityGML) and Extensible 3D (X3D) to represent and present the models. Then, generalisation methods are studied and tailored to create 3D city scenes at multiple scales dynamically. Finally, the quality of the generalised 3D city models is evaluated by measuring their visual similarity to the original models.
In the proposed visualisation framework, 3D city models are stored in the CityGML format, which supports both geometric and semantic information. These CityGML files are parsed to create 3D scenes that are visualised with existing 3D standards. Because the input and output of the framework are both standardised, it is possible to integrate city models from different sources and visualise them through different viewers.
Considering the complexity of the city objects, generalisation methods are studied to simplify the city models and increase the visualisation efficiency. In this thesis, the aggregation and typification methods are improved to simplify the 3D city models.
Multiple-representation data structures are required to store the generalisation information for dynamic visualisation. One of these is the CityTree, a novel structure to represent building groups, which is tested for building aggregation. Meanwhile, a Minimum Spanning Tree (MST) is employed to detect linear building group structures in the city models, and these are typified with different strategies. According to the experimental results, using the CityTree reduces the creation time of the generalised 3D city model by more than 50%.
Different generalisation strategies lead to different outcomes. It is important to evaluate the quality of the generalised models. In this thesis a new evaluation method is proposed: visual features of the 3D city models are represented by Attributed Relation Graph (ARG) and their similarity distances are calculated with Nested Earth Mover’s Distance (NEMD) algorithm. The calculation results and user survey show that the ARG and NEMD methods can reflect the visual similarity between generalised city models and the original ones.
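As an aside, the MST step mentioned above can be sketched on building centroids with SciPy; the distance threshold and the toy data below are illustrative assumptions, not the thesis's algorithm.

```python
import numpy as np
from scipy.spatial import distance_matrix
from scipy.sparse.csgraph import minimum_spanning_tree

# Toy building centroids (x, y in metres); real input would come from CityGML footprints.
centroids = np.array([[0, 0], [10, 1], [20, 0], [31, 1], [5, 40], [50, 38]], dtype=float)

# Minimum spanning tree over pairwise centroid distances.
mst = minimum_spanning_tree(distance_matrix(centroids, centroids)).toarray()

# Keep only short MST edges as candidate "linear group" links (threshold assumed).
max_edge = 15.0
i_idx, j_idx = np.nonzero((mst > 0) & (mst <= max_edge))
for i, j in zip(i_idx, j_idx):
    print(f"buildings {i} and {j} linked (distance {mst[i, j]:.1f} m)")
```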
ViSuCity Project
Berg, Martin. "Pose Recognition for Tracker Initialization Using 3D Models." Thesis, Linköping University, Department of Electrical Engineering, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-11080.
In this thesis it is examined whether the pose of an object can be determined by a system trained with a synthetic 3D model of said object. A number of variations of methods using the P-channel representation are examined. Reference images are rendered from the 3D model; features such as gradient orientation and color information are extracted and encoded into P-channels. The P-channel representation is then used to estimate an overlapping channel representation, using B1-spline functions, to estimate a density function for the feature set. Experiments were conducted with this representation as well as the raw P-channel representation in conjunction with a number of distance measures and estimation methods.
It is shown that, with correct preprocessing and choice of parameters, the pose can be detected with some accuracy and, if not in real-time, fast enough to be useful in a tracker initialization scenario. It is also concluded that the success rate of the estimation depends heavily on the nature of the object.
Vandeventer, Jason. "4D (3D Dynamic) statistical models of conversational expressions and the synthesis of highly-realistic 4D facial expression sequences." Thesis, Cardiff University, 2015. http://orca.cf.ac.uk/82405/.
Full textKim, Taewoo. "A 3D XML-based modeling and simulation framework for dynamic models." [Gainesville, Fla.] : University of Florida, 2002. http://purl.fcla.edu/fcla/etd/UFE1000134.
Title from title page of source document. Document formatted into pages; contains xii, 132 p.; also contains graphics. Includes vita. Includes bibliographical references.
Reichert, Thomas. "Development of 3D lattice models for predicting nonlinear timber joint behaviour." Thesis, Edinburgh Napier University, 2009. http://researchrepository.napier.ac.uk/Output/2827.
Full textMutapcic, Emir. "Optimised part programs for excimer laser-ablation micromachining directly from 3D CAD models." Australian Digital Thesis Program, 2006. http://adt.lib.swin.edu.au/public/adt-VSWT20061117.154651.
A thesis submitted to the Industrial Research Institute, Faculty of Engineering and Industrial Sciences, in fulfillment of the requirements for the degree of Doctor of Philosophy, Swinburne University of Technology, 2006. Typescript. Includes bibliographical references (p. 218-229).
Svensson, Stina. "Representing and analyzing 3D digital shape using distance information /." Uppsala : Swedish Univ. of Agricultural Sciences (Sveriges lantbruksuniv.), 2001. http://epsilon.slu.se/avh/2001/91-576-6095-6.pdf.
Full textBaecher, Moritz Niklaus. "From Digital to Physical: Computational Aspects of 3D Manufacturing." Thesis, Harvard University, 2013. http://dissertations.umi.com/gsas.harvard:11149.
Full textEngineering and Applied Sciences
Chiu, Wai-kei, and 趙偉奇. "Hollowing and reinforcing 3D CAD models and representing multiple material objects for rapid prototyping." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2000. http://hub.hku.hk/bib/B29768470.
Full textChiu, Wai-kei. "Hollowing and reinforcing 3D CAD models and representing multiple material objects for rapid prototyping /." Hong Kong : University of Hong Kong, 2000. http://sunzi.lib.hku.hk/hkuto/record.jsp?B22050498.
Full textSmith, Robert L. M. Eng Massachusetts Institute of Technology. "Afterimage Toon Blur : procedural generation of cartoon blur for 3D models in real time." Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/106376.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (page 48).
One of the notable distinctions of traditional animation techniques is the emphasis placed on motion. Objects in motion often make use of visual stylistic effects to visually enhance the motion, such as speed lines or afterimages. Unfortunately, at present, 2D animation makes much more use of these techniques than 3D animation, which is especially clear in the stylistic differences between 2D and 3D videogames. For 3D videogame designers fond of the look and feel of traditional animation, it would be beneficial if 3D models could emulate that 2D style. In that regard, I propose two techniques that use the location history of 3D models to, in real time, construct non-photorealistic motion blur effects in the vein of 2D traditional animation. With these procedural techniques, designers can maximize the convenience of 3D models while still retaining an aesthetic normally constrained to 2D animation.
by Robert L. Smith.
M. Eng.
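A generic way to derive afterimage-style ghosts from a model's location history, in the spirit of the abstract above, is sketched below; the buffer length and linear fade are assumptions, and a real implementation would run on 3D models inside a game engine.

```python
from collections import deque

class AfterimageTrail:
    """Keep a short history of model transforms and emit fading ghost copies."""

    def __init__(self, max_ghosts: int = 5):
        self.history = deque(maxlen=max_ghosts)

    def record(self, position, rotation):
        """Call once per frame with the model's current transform."""
        self.history.append((position, rotation))

    def ghosts(self):
        """Yield (position, rotation, alpha) for past poses, oldest most transparent."""
        n = len(self.history)
        for i, (pos, rot) in enumerate(self.history):
            alpha = (i + 1) / (n + 1)  # newest ghost is the most opaque
            yield pos, rot, alpha

# Example: a model moving along the x axis for six frames.
trail = AfterimageTrail(max_ghosts=4)
for frame in range(6):
    trail.record((float(frame), 0.0, 0.0), 0.0)
for pos, rot, alpha in trail.ghosts():
    print(pos, f"alpha={alpha:.2f}")
```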
Wang, Changling. "Sketch based 3D freeform object modeling with non-manifold data structure /." View Abstract or Full-Text, 2002. http://library.ust.hk/cgi/db/thesis.pl?MECH%202002%20WANGC.
Includes bibliographical references (leaves 143-152). Also available in electronic version. Access restricted to campus users.
Zouhar, Marek. "Tvorba a demonstrace 3D modelů pro VR." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2017. http://www.nusl.cz/ntk/nusl-363852.
Full textHittner, Brian Edward. "Rendering large-scale terrain models and positioning objects in relation to 3D terrain." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2003. http://library.nps.navy.mil/uhtbin/hyperion-image/03Dec%5FHittner.pdf.
Thesis advisor(s): Don Brutzman, Curt Blais. Includes bibliographical references (p. 117-118). Also available online.
Armenise, Valentina. "Estimation of the 3D Pose of objects in a scene captured with Kinect camera using CAD models." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-142471.
Full textLorenz, Haik. "Texturierung und Visualisierung virtueller 3D-Stadtmodelle." Phd thesis, Universität Potsdam, 2011. http://opus.kobv.de/ubp/volltexte/2011/5387/.
This thesis concentrates on virtual 3D city models that digitally encode objects, phenomena, and processes in urban environments. Such models have become core elements of geographic information systems and constitute a major component of geovirtual 3D worlds. Expert users make use of virtual 3D city models in various application domains, such as urban planning, radio-network planning, and noise immission simulation. Regular users utilize virtual 3D city models in domains such as tourism and entertainment. They intuitively explore photorealistic virtual 3D city models through mainstream applications such as GoogleEarth, which additionally enable users to extend virtual 3D city models by custom 3D models and supplemental information. Creation and rendering of virtual 3D city models comprise a large number of processes, from which texturing and visualization are in the focus of this thesis. In the area of texturing, this thesis presents concepts and techniques for automatic derivation of photo textures from georeferenced oblique aerial imagery and a concept for the integration of surface-bound data into virtual 3D city model datasets. In the area of visualization, this thesis presents concepts and techniques for multiperspective views and for high-quality rendering of nonlinearly projected virtual 3D city models in interactive systems. The automatic derivation of photo textures from georeferenced oblique aerial imagery is a refinement process for a given virtual 3D city model. Our approach uses oblique aerial imagery, since it provides a citywide highly redundant coverage of surfaces, particularly building facades. From this imagery, our approach extracts all views of a given surface and creates a photo texture by selecting the best view on a pixel level. By processing all surfaces, the virtual 3D city model becomes completely textured. This approach has been tested for the official 3D city model of Berlin and the model of the inner city of Munich accessible in GoogleEarth. The integration of surface-bound data, which include textures, into virtual 3D city model datasets has been performed in the context of CityGML, an international standard for the exchange and storage of virtual 3D city models. We derive a data model from a set of use cases and integrate it into the CityGML standard. The data model uses well-known concepts from computer graphics for data representation. Interactive multiperspective views of virtual 3D city models seamlessly supplement a regular perspective view with a second perspective. Such a construction is inspired by panorama maps by H. C. Berann and aims at increasing the amount of information in the image. A key aspect is the construction's use in an interactive system. This thesis presents an approach to create multiperspective views on 3D graphics hardware and exemplifies the extension of bird's eye and pedestrian views. High-quality rendering of nonlinearly projected virtual 3D city models focuses on the implementation of nonlinear projections on 3D graphics hardware. The developed concepts and techniques focus on high image quality. This thesis presents two such concepts, namely dynamic mesh refinement and piecewise perspective projections, which both enable the use of all graphics hardware features, such as screen space gradients and anisotropic texture filtering, under nonlinear projections. Both concepts are generic and customizable towards specific projections.
They enable the use of common computer graphics effects, such as stylization effects or procedural textures, for nonlinear projections at optimal image quality and interactive frame rates. This thesis comprises essential techniques for virtual 3D city model processing. First, the results of this thesis enable automated creation of textures for and their integration as individual attributes into virtual 3D city models. Hence, this thesis contributes to an improved creation and continuation of textured virtual 3D city models. Furthermore, the results provide novel approaches to and technical solutions for projecting virtual 3D city models in interactive visualizations. Such nonlinear projections are key components of novel user interfaces and interaction techniques for virtual 3D city models, particularly on mobile devices and in immersive environments.
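The per-pixel best-view selection for photo textures described above can be pictured as an argmax over per-view quality scores; this NumPy sketch is schematic only, and the random scores stand in for real measures of resolution, viewing angle, and occlusion.

```python
import numpy as np

def composite_best_view(view_pixels: np.ndarray, view_scores: np.ndarray) -> np.ndarray:
    """Select, per texel, the colour from the view with the highest quality score.

    view_pixels: (n_views, H, W, 3) candidate colours per view.
    view_scores: (n_views, H, W) per-texel quality of each view.
    """
    best = np.argmax(view_scores, axis=0)        # (H, W) winning view index
    h_idx, w_idx = np.indices(best.shape)
    return view_pixels[best, h_idx, w_idx]       # (H, W, 3) composited texture

# Example with three random candidate views of a 4x4 facade patch.
rng = np.random.default_rng(1)
pixels = rng.random((3, 4, 4, 3))
scores = rng.random((3, 4, 4))
print(composite_best_view(pixels, scores).shape)
```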
Guo, Jinjiang. "Contributions to objective and subjective visual quality assessment of 3d models." Thesis, Lyon, 2016. http://www.theses.fr/2016LYSEI099.
In the computer graphics realm, three-dimensional graphical data, generally represented by triangular meshes, have become commonplace and are deployed in a variety of application processes (e.g., smoothing, compression, remeshing, simplification, rendering, etc.). However, these processes inevitably introduce artifacts, altering the visual quality of the rendered 3D data. Thus, in order to perceptually drive the processing algorithms, there is an increasing need for efficient and effective subjective and objective visual quality assessments to evaluate and predict the visual artifacts. In this thesis, we first present a comprehensive survey on different sources of artifacts in digital graphics, and current objective and subjective visual quality assessments of the artifacts. Then, we introduce a newly designed subjective quality study based on evaluations of the local visibility of geometric artifacts, in which observers were asked to mark areas of 3D meshes that contain noticeable distortions. The collected perceived distortion maps are used to illustrate several perceptual functionalities of the human visual system (HVS), and serve as ground truth to evaluate the performance of well-known geometric attributes and metrics for predicting the local visibility of distortions. Our second study aims to evaluate the visual quality of texture-mapped 3D models subjectively and objectively. To achieve these goals, we introduced 136 processed models with both geometric and texture distortions, conducted a paired-comparison subjective experiment, and invited 101 subjects to evaluate the visual qualities of the models under two rendering protocols. Driven by the collected subjective opinions, we propose two objective visual quality metrics for textured meshes, relying on the optimal combinations of geometry and texture quality measures. These proposed perceptual metrics outperform their counterparts in terms of correlation with human judgment.
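The "optimal combination of geometry and texture quality measures" could, for example, be realised as a weighted blend tuned against subjective scores; the grid search below is a hypothetical illustration of such a procedure, not the metric proposed in the thesis.

```python
import numpy as np
from scipy.stats import spearmanr

def fit_blend_weight(geom_q, tex_q, mos):
    """Find the weight alpha in [0, 1] maximising Spearman correlation between
    alpha*geom_q + (1-alpha)*tex_q and mean opinion scores (mos)."""
    best_alpha, best_corr = 0.0, -1.0
    for alpha in np.linspace(0.0, 1.0, 101):
        corr, _ = spearmanr(alpha * geom_q + (1 - alpha) * tex_q, mos)
        if corr > best_corr:
            best_alpha, best_corr = alpha, corr
    return best_alpha, best_corr

# Toy data: 20 distorted models with synthetic quality measures and opinion scores.
rng = np.random.default_rng(2)
geom = rng.random(20)
tex = rng.random(20)
mos = 0.6 * geom + 0.4 * tex + 0.05 * rng.standard_normal(20)
print(fit_blend_weight(geom, tex, mos))
```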
Mian, Ajmal Saeed. "Representations and matching techniques for 3D free-form object and face recognition." University of Western Australia. School of Computer Science and Software Engineering, 2007. http://theses.library.uwa.edu.au/adt-WU2007.0046.
Full textBrown, Steven W. "Interactive Part Selection for Mesh and Point Models Using Hierarchical Graph-cut Partitioning." Diss., CLICK HERE for online access, 2008. http://contentdm.lib.byu.edu/ETD/image/etd2420.pdf.
Full textRoussellet, Valentin. "Implicit muscle models for interactive character skinning." Thesis, Toulouse 3, 2018. http://www.theses.fr/2018TOU30055/document.
Surface deformation, or skinning, is a crucial step in 3D character animation. Its role is to deform the surface representation of a character to be rendered in the succession of poses specified by an animator. The quality and plausibility of the displayed results directly depend on the properties of the skinning method. However, speed and simplicity are also important criteria to enable their use in interactive editing sessions. Current skinning methods can be divided into three categories. Geometric methods are fast and simple to use, but their results lack plausibility. Example-based approaches produce realistic results, yet they require a large database of examples while remaining tedious to edit. Finally, physical simulations can model the most complex dynamical phenomena, but at a very high computational cost, making their interactive use impractical. The work presented in this thesis builds on Implicit Skinning, a corrective geometric approach that uses implicit surfaces to solve many issues of standard geometric skinning methods while remaining fast enough for interactive use. The main contribution of this work is an animation model that adds anatomical plausibility to a character by representing muscle deformations and their interactions with other anatomical features, while benefiting from the advantages of Implicit Skinning. Muscles are represented by an extrusion surface along a central axis. These axes are driven by a simplified physics simulation method, introducing dynamic effects such as jiggling. The muscle model guarantees volume conservation, a property of real-life muscles. This model adds plausibility and dynamics lacking in state-of-the-art geometric methods at a moderate computational cost, which enables its interactive use. In addition, it offers intuitive shape control to animators, enabling them to match the results to their artistic vision.
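The volume-conservation property mentioned above has a simple closed form for an idealised cylindrical muscle belly: if the central axis stretches by a factor s, the cross-section radius must scale by 1/sqrt(s). The snippet below illustrates only this idealisation, not the thesis's extrusion model.

```python
import math

def conserve_volume(rest_length: float, rest_radius: float, new_length: float) -> float:
    """Radius of an idealised cylindrical muscle after stretching to new_length,
    chosen so that pi * r^2 * L stays constant."""
    stretch = new_length / rest_length
    return rest_radius / math.sqrt(stretch)

# Stretched to 125% of rest length the belly thins; contracted to 80% it bulges.
print(round(conserve_volume(rest_length=10.0, rest_radius=2.0, new_length=12.5), 3))  # ~1.789
print(round(conserve_volume(rest_length=10.0, rest_radius=2.0, new_length=8.0), 3))   # ~2.236
```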
Al-Douri, Firas A. Salman. "Impact of utilizing 3D digital urban models on the design content of urban design plans in US cities." Texas A&M University, 2006. http://hdl.handle.net/1969.1/4324.
Full textAbayowa, Bernard Olushola. "Automatic Registration of Optical Aerial Imagery to a LiDAR Point Cloud for Generation of Large Scale City Models." University of Dayton / OhioLINK, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1372508452.