Dissertations / Theses on the topic '3D system'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the top 50 dissertations / theses for your research on the topic '3D system.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.
Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.
Mehmood, Zahid. "A 3D optical vision system." Thesis, University of Liverpool, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.284253.
Full text
Xia, Ziqi, and Alvandian Sohrab Mani. "3D Visualized Indoor Positioning System." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-244001.
Full text
Three-dimensional visualization refers to the process by which graphical content is created using three-dimensional software. During three-dimensional visualization work, various indoor positioning techniques can be used to detect and track the movement of objects. Combining these two techniques makes it possible to monitor a room and its objects in real time. Positioning is the process of recording the movements of objects or people. It can be used in many different areas, such as emergencies, tracking objects or firefighters in a burning building, or detecting police dogs trained to find explosives in a building. It is not obvious how well such a system would work in these contexts. To address this, the method consisted of a literature study focused on existing theories of positioning and the factors that affect positioning results, together with a case study of a number of existing indoor positioning systems. The purpose of this project is to present and evaluate a prototype in which an indoor positioning system is combined with a specific platform that works with simple types of hardware signals to generate three-dimensional models. The goal is to present a system that can be used without any infrastructure or external hardware. Various indoor positioning systems are analyzed, as well as their use in different scenarios. The thesis evaluates different technical choices, gives an overview of some existing wireless indoor positioning solutions, and presents the theory and methods before describing the case study, including the development process, problems, results, and experimental test results. In summary, the thesis presents a prototype that is validated to meet the basic expectations of a three-dimensionally visualized indoor positioning system.
Thyagaraj, Suraj. "Dynamic System Analysis of 3D Ultrasonic Neuro-Navigation System." Available to subscribers only, 2009. http://proquest.umi.com/pqdweb?did=1967797551&sid=3&Fmt=2&clientId=1509&RQT=309&VName=PQD.
Full text
Wang, Cishen. "Maintenance of a 3D Visualization System." Thesis, Växjö University, School of Mathematics and Systems Engineering, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:vxu:diva-2320.
Full text
Vizz3D is a powerful 3D visualization system, but the current version is neither perfect nor up to date, and some important features are missing. In order to keep the tool valuable, it needs to be maintained. I implemented a new feature that allows saving and loading the viewport of a graph in order to control the camera position. I also improved the CPU utilization and the navigation system to remove limitations in Vizz3D and to improve overall performance.
Apel, Marcus. "A 3d geoscience information system framework." Doctoral thesis, Technische Universitaet Bergakademie Freiberg Universitaetsbibliothek "Georgius Agricola", 2009. http://nbn-resolving.de/urn:nbn:de:swb:105-3300478.
Full text
Knutsson, Niklas. "An FPGA-based 3D Graphics System." Thesis, Linköping University, Department of Electrical Engineering, 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-2822.
Full text
This report documents the work done by the author to design and implement a 3D graphics system on an FPGA (Field Programmable Gate Array). After a preamble presenting the background of the project, a very brief introduction to computer graphics techniques and theory is given. The hardware available to the project is then examined, along with an analysis of the general requirements. The following chapter contains the proposed graphics system design for FPGA implementation; a broad approach was used to separate the design from the eventual implementation. Two 3D pipelines are suggested: one fully capable high-end version and one that uses minimal resources. The documentation of the effort to implement the minimal graphics system then follows; it outlines the work done without going too deep into detail and is followed by the largest of the tests conducted. Finally, chapter seven concludes the project with the most important conclusions and some suggestions for future work.
Apel, Marcus. "A 3d geoscience information system framework." Doctoral thesis, Vandoeuvre-les-Nancy, INPL, 2004. https://tubaf.qucosa.de/id/qucosa%3A22479.
Full text
Alsaedi, Mohammed Abbas Soudai. "Development of 3D Accelerometer Testing System." PDXScholar, 2016. https://pdxscholar.library.pdx.edu/open_access_etds/3371.
Full text
Yu, Xiaoju, Min Liang, and Corey Shemelya. "3D Printable Multilayer RF Integrated System." International Foundation for Telemetering, 2015. http://hdl.handle.net/10150/596450.
Full text
In this work, a 3D-printable multilayer phased array system is designed to demonstrate the applicability of an additive manufacturing technique that combines dielectric and conductor processes at room temperature for RF systems. Phased array systems normally include feeding networks, antennas, and active components such as switches, phase shifters and amplifiers. To make the integrated system compact, the array uses a multilayer structure that fully utilizes the 3D space, and the vertical interconnections between layers are carefully designed to reduce the loss between layers. Simulated results show good impedance matching and a highly directive scanning beam. This multilayer phased array will finally be 3D printed by integrating a thermal/ultrasound wire-mesh embedding method (for the metal) with a fused-deposition-modeling technique (for the dielectric).
Alvermann, Klaus. "A Transputer Based 3D-Graphics System." International Foundation for Telemetering, 1992. http://hdl.handle.net/10150/611934.
Full text
The Institute for Flight Mechanics operates the flying simulators ATTAS (a fixed-wing aircraft) and ATTHeS (a helicopter) together with their respective ground-based simulators, and uses real-time and offline simulations for system identification and other purposes. Based on a parallel transputer architecture, a 3D-graphics tool for visualization and view simulation to be used with these simulations has been developed. The tool uses data received by telemetry, real-time data from a simulation, or recorded data to show the movement and orientation of an aircraft in real-time 3D graphics. The aircraft or scene may be observed from any point of view; placing the camera in the cockpit of the aircraft and showing the environment results in a view simulation. The use of a parallel transputer architecture allows a modular and scalable structure, i.e. the system may be adapted to the needs of the application. By adding software modules and transputers we may include 24-bit colour, shadowing, a higher resolution, a better shading algorithm, or other features required by an application. On the other hand, we may remove transputers to obtain a small and inexpensive system if the requirements are low. A small system may consist of only 8 transputers, whereas a big system may include 50 or 60 transputers.
Florková, Miroslava. "Prostorové analýzy nad 3D modelem města." Master's thesis, Vysoké učení technické v Brně. Fakulta stavební, 2017. http://www.nusl.cz/ntk/nusl-390217.
Full text
Schneider, Judith. "Dynamical structures and manifold detection in 2D and 3D chaotic flows." Phd thesis, [S.l. : s.n.], 2004. http://deposit.ddb.de/cgi-bin/dokserv?idn=973637420.
Full text
Lång, Magnus. "3D Teleconferencing : The construction of a fully functional, novel 3D Teleconferencing system." Thesis, Linköping University, Linköping University, The Institute of Technology, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-51466.
Full text
This report summarizes the work done to develop a 3D teleconferencing system that enables remote participants anywhere in the world to be scanned in 3D, transmitted, and displayed on a purpose-built 3D display with correct vertical and horizontal parallax, correct eye contact and eye gaze. The main focus of this report is the development of this system and especially how to render to the novel 3D display in an efficient and general manner. The 3D display is built out of modified commodity hardware and shows a 3D scene to observers up to 360 degrees around it and at all heights. The result is a fully working 3D teleconferencing system, resembling the communication envisioned in movies, such as the holograms in Star Wars. The system transmits over the internet with bandwidth requirements similar to those of contemporary 2D videoconferencing systems.
Project done at USC Institute for Creative Technologies, LA, USA. Presented at SIGGRAPH09.
Nilsson, Johan, and Stark Lars Stranne. "Optimala vinkeln vid 3D skanning." Thesis, Tekniska Högskolan, Högskolan i Jönköping, JTH, Datateknik och informatik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-42476.
Full text
Lindt, Irma. "Adaptive 3D-User-Interfaces." München Verl. Dr. Hut, 2009. http://d-nb.info/993260241/04.
Full text
Oprea, Alexandra. "3D Fuel Tank Models for System Simulation." Thesis, KTH, Aerodynamik, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-102084.
Full text
Huang, Conglin. "3D RECONSTRUCTION USING MULTI-VIEW IMAGING SYSTEM." UKnowledge, 2009. http://uknowledge.uky.edu/gradschool_theses/600.
Full text
Snow, Daniel P. (Daniel Peter) 1974. "A system for real time 3D reconstruction." Thesis, Massachusetts Institute of Technology, 2000. http://hdl.handle.net/1721.1/86515.
Full text
Svoboda, Jan. "System for Recognition of 3D Hand Geometry." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2014. http://www.nusl.cz/ntk/nusl-412913.
Full text
Roy, Debashish. "3D Cryo-Imaging System For Whole Mouse." Case Western Reserve University School of Graduate Studies / OhioLINK, 2009. http://rave.ohiolink.edu/etdc/view?acc_num=case1259006676.
Full text
Kovařík, Roman. "Evoluční návrh 3D struktur." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2012. http://www.nusl.cz/ntk/nusl-236549.
Full text
Andersson, Oskar. "Simulations in 3D research : Can Unity3D be used to simulate a 3D display system?" Thesis, Mittuniversitetet, Avdelningen för informations- och kommunikationssystem, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-28044.
Full text
McDonald, Christopher Ernest. "Framework for a visual energy use system." Thesis, [College Station, Tex. : Texas A&M University, 2007. http://hdl.handle.net/1969.1/ETD-TAMU-1892.
Full text
Ercan, Munir. "A 3d Topological Tracking System For Augmented Reality." Master's thesis, METU, 2010. http://etd.lib.metu.edu.tr/upload/12611623/index.pdf.
Full textocclusion resolving
detection rate with respect to marker size, camera-marker angle, false positive marker detection
performance with respect marker library size. Our system achieved 90% marker detection success with 50 pixels marker size and an average of 1.1 false positive marker detection in ten dierent test videos. We made all tests in comparison with the widely used ARToolkit library. Our system surpasses the ARToolkit library for all tests performed. In addition, our system enables spatially distinct placement of marker parts and permits occlusion unless the topology of the marker is not corrupted, but the ARToolkit library does not have these features.
Poulsen, Carsten. "Development of a positioning system for 3D ultrasound." Link to electronic thesis, 2005. http://www.wpi.edu/Pubs/ETD/Available/etd-101805-180813/.
Full text
Rasmusson, Jonathan. "3D modelling for an intraoperative stereo-vision system." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp04/mq28980.pdf.
Full text
Yang, Jun. "Interactive volume queries in a 3D visualization system." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp03/MQ38421.pdf.
Full text
Poon, Nelson. "3D scenario and interactive multimedia courseware authoring system." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape3/PQDD_0018/MQ48443.pdf.
Full text
Soron, Mikael. "Robot System for Flexible 3D Friction Stir Welding /." Örebro : Universitetsbiblioteket, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:oru:diva-1675.
Full text
Do, Chau. "3D MEMS Microassembly." Thesis, 2008. http://hdl.handle.net/10012/3952.
Full text
Chen, You-Sheng, and 陳友聖. "Adaptive 3D Video Generation System." Thesis, 2015. http://ndltd.ncl.edu.tw/handle/zctjr5.
Full text國立雲林科技大學
電子工程系
103
3D filming systems can be divided into two approaches: camera arrays and depth cameras. With these cameras, 3D video can be produced and played back on a stereoscopic display. Planar (2D) videos shot by other means have to go through post-conversion to produce stereoscopic depth; Depth Image Based Rendering (DIBR) is then applied to generate the left and right perspectives, and the result is played on a stereoscopic display. Since 3D filming sources are more expensive and planar videos are the most common video type, developing a 2D-to-3D conversion technology can solve the problem of insufficient 3D content; in addition, the cost of 3D content production becomes lower and more flexible. This thesis applies an automatic depth estimation method to develop a 2D-to-3D video conversion technology: DIBR generates the left and right perspectives, and the videos are played on a stereoscopic display. The method automatically adjusts the threshold required for establishing video groups according to the RGB distribution of the source images, and determines whether the video content has to be adjusted to extend the depth information according to the change of the RGB distribution along the timeline. It also uses the type of image motion to correct redundant motion, and exploits the motion of objects in front of the camera to establish object depth. Moreover, it uses the video groups to calculate texture features and to determine the depth directions of atmospheric perspective, particular conditions, and linear perspective. This depth information is integrated with the motion trajectory, which emphasizes object depth while suppressing wrong depth, to establish the stereoscopic depth. The proposed 2D-to-3D conversion technology can establish stereoscopic depth flexibly in different videos and can establish the outline of the stereoscopic depth accurately. It addresses the problem that current 2D-to-3D conversion technologies cannot emphasize 3D features while keeping the beating (temporal jitter) of the 3D video low. Finally, pthreads and CUDA multi-core acceleration are applied to the proposed adaptive 2D-to-3D depth estimation algorithm, so that the conversion rate for 1080p video exceeds 15 fps.
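As an editorial illustration of the DIBR step this abstract relies on, the minimal Python sketch below shifts each pixel horizontally by a disparity proportional to its depth to synthesize left and right views, with crude row-wise hole filling. The depth convention (values in [0, 1], 1 = nearest), the max_disparity parameter, and the function name are assumptions, not the thesis's implementation.

```python
import numpy as np

def dibr_views(image, depth, max_disparity=16):
    """Synthesize left/right views from one image plus a depth map (sketch).

    image: (H, W, 3) uint8 array; depth: (H, W) floats in [0, 1], 1.0 = nearest.
    Pixels are shifted horizontally by a disparity proportional to depth;
    holes left by the warp are filled with the last valid pixel in the row.
    """
    h, w, _ = image.shape
    disparity = (depth * max_disparity).astype(np.int32)
    left, right = np.zeros_like(image), np.zeros_like(image)
    filled_l = np.zeros((h, w), dtype=bool)
    filled_r = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            d = int(disparity[y, x])
            xl, xr = x + d // 2, x - d // 2      # shift the two views in opposite directions
            if 0 <= xl < w:
                left[y, xl] = image[y, x]
                filled_l[y, xl] = True
            if 0 <= xr < w:
                right[y, xr] = image[y, x]
                filled_r[y, xr] = True
        # crude hole filling: propagate the last valid pixel along the row
        for view, filled in ((left, filled_l), (right, filled_r)):
            last = image[y, 0]
            for x in range(w):
                if filled[y, x]:
                    last = view[y, x]
                else:
                    view[y, x] = last
    return left, right
```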
Lin, Fan-Hsiang, and 林凡翔. "VR System for 3D Surveillance." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/5s8b34.
Full text國立交通大學
資訊學院資訊學程
107
Surveillance systems play an important role in modern society, as they help secure both human safety and property. Traditional surveillance systems are used passively: footage is usually reviewed after incidents rather than used to prevent them. They also require manpower that grows in proportion to the size of the surveilled area, and mistakes are made in pinpointing the exact location of an event. Having observed these common problems, we propose to use modern virtual reality (VR) technology to solve and avoid them by integrating VR devices and monitoring the surveilled area in 3D space. The features and benefits of our system include: 1. improved locating ability through the use of 3D models rather than only 2D images; 2. an easily expandable system that assists the surveillant in completing complex work; 3. a straightforward, easy-to-use interface for navigating the surveilled area. We first provide a method to reconstruct a large-scale 3D model of the surveillance area. The 3D models and camera options can then be modified and updated by editing the system configuration files. The video stream from each camera is displayed by real-time texturing at the corresponding location in the 3D scene. Our system is highly expandable and can communicate with external processes to achieve distributed computing, using a small amount of hardware to add functionality while reducing hardware cost. When surveilling with the VR device, the location and view can be changed quickly to handle various situations by opening the menu and selecting a preset landmark or patrol area, or by navigating with movements such as flying, scene dragging, and an eagle-eye view. The system also includes motion detection and a population density map to assist the surveillant. Compared with traditional surveillance systems, surveilling through a VR 3D system is more straightforward and simpler to navigate, increases the surveillant's awareness and effectiveness, and reduces the manpower needed to perform complex work.
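As a small illustration of the motion-detection assistance mentioned above, the sketch below flags moving regions in one camera stream by simple frame differencing with OpenCV. The threshold, minimum blob area, and function name are assumptions for illustration, not details taken from the thesis.

```python
import cv2
import numpy as np

def motion_mask(prev_gray, cur_gray, thresh=25, min_area=200):
    """Frame-differencing motion detector for one surveillance stream (sketch).

    prev_gray, cur_gray: consecutive (H, W) uint8 grayscale frames.
    Returns a binary motion mask and bounding boxes of moving regions; such
    detections could then be highlighted at the camera's location in the 3D scene.
    """
    diff = cv2.absdiff(cur_gray, prev_gray)
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    mask = cv2.dilate(mask, np.ones((5, 5), np.uint8), iterations=2)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    boxes = [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area]
    return mask, boxes
```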
LIOU, MING HUA, and 劉明華. "3D image wireless transmission system." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/76157729662521171933.
Full text萬能科技大學
影像顯示技術研究所
102
This thesis presents a real-time 3D image transmission system. It uses dual image sensors to capture 3D images and two wireless links: an RF transmitting and receiving module for image transmission, and an infrared transmitting and receiving circuit for synchronizing the viewing glasses with the displayed frames. The wireless transmission uses 2.4 GHz technology; the infrared circuit is simple, and multiple receiving circuits work well within the effective reception range. The RF and infrared signals do not interfere with each other. The thesis presents the implementation of an interlaced 3D image wireless transmission system: two image sensors spaced like the human eyes capture the scene, a video signal processing circuit interleaves the two views frame by frame, the wireless transmission module sends the images to a 3D projector, and the infrared circuit sends the synchronization signal to the shutter glasses, so that a real-time 3D video image can be viewed immediately.
Brandão, Diogo Pina. "Intelligent 3D Vision System for Robotic System integration." Master's thesis, 2019. https://hdl.handle.net/10216/123116.
Full text
This project has as its main objective the development of a vision system capable of detecting rectangular boxes, such as medicine packages. The information obtained from the developed vision algorithm serves as input for a robotic arm that moves the boxes to the desired locations.
Liu, Yu-Hao, and 劉于豪. "A 3D Model Retrieval System Using 3D Spectral and Cepstral Features." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/p7u332.
Full text中華大學
資訊工程學系碩士班
101
With recent advances in computer graphics, 3D models have become widely used in computer-aided design, computer animation, electronic commerce, digital libraries, and so on, and searching for specific 3D models has become an important issue. Techniques for effective and efficient content-based 3D model retrieval have therefore become an essential research topic. In this thesis, five subband decomposition methods, including uniform subband decomposition, logarithmic subband decomposition, spherical subband decomposition, octave subband decomposition and complement octave subband decomposition, are employed to divide the 3D spectrum and the 3D cepstrum into a number of different features. The 3D spectral features and 3D cepstral features are then used for 3D model retrieval, and global and local features are combined to obtain better retrieval results. In the retrieval stage, the system responds to the user with the best matches between the query model and the models in the database. Experimental results show that the proposed methods achieve good performance.
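To make the idea of spherical subband spectral features more concrete, the sketch below takes an already voxelized model, computes the magnitude of its 3D Fourier transform, and sums the energy in concentric spherical frequency shells. The cubic grid, the number of bands, and the normalization are assumptions; the cepstral features and the other four decompositions from the thesis are not reproduced here.

```python
import numpy as np

def spherical_subband_features(voxels, n_bands=8):
    """Spectral descriptor for a voxelized 3D model (illustrative sketch).

    voxels: (N, N, N) occupancy grid (assumed cubic).  Returns n_bands energies
    of the centered 3D FFT magnitude, grouped into concentric spherical shells.
    """
    spectrum = np.abs(np.fft.fftshift(np.fft.fftn(voxels)))
    n = voxels.shape[0]
    coords = np.arange(n) - n // 2
    zz, yy, xx = np.meshgrid(coords, coords, coords, indexing="ij")
    radius = np.sqrt(xx**2 + yy**2 + zz**2)
    r_max = radius.max()
    features = np.empty(n_bands)
    for b in range(n_bands):
        lo, hi = b * r_max / n_bands, (b + 1) * r_max / n_bands
        shell = (radius >= lo) & (radius < hi)
        features[b] = spectrum[shell].sum()
    # normalize so the descriptor does not depend on the model's overall "mass"
    return features / (features.sum() + 1e-12)
```

A retrieval system would compare such descriptors (for example with an L1 or L2 distance) between the query model and every model in the database.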
LI, MEI-YI, and 李梅宜. "A 3D head model construction system." Thesis, 1992. http://ndltd.ncl.edu.tw/handle/00624992215632024321.
Full text
Chen, Chi-Ning, and 陳起寧. "3D computer-aided design web system." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/95619546997101810101.
Full text國立臺灣海洋大學
系統工程暨造船學系
98
In this thesis, we have developed a computer-aided design web system, "WebCAD". Users only need an Internet-connected computer and a web browser to use the system. It provides functions for importing/exporting OBJ files of geometry definitions, drawing computer graphics in the web browser, and uploading graphic models to a remote database server. We use the Java programming language and the JOGL package for WebCAD development; JOGL is essentially a Java binding for the OpenGL API. Users can interactively manipulate graphic models, using the mouse wheel to zoom in and out, the left mouse button to translate, and the right mouse button to rotate. WebCAD also provides some simple functions for editing graphic models. Because OBJ files can be large, we define a special data structure to improve the efficiency of data transfer over the Internet and of graphic display on the client computer. This data structure packages all graphic model data into a single data object, which is both what the remote database records and what WebCAD stores in client memory. WebCAD also takes a snapshot of each new model uploaded to the remote web server and uses these snapshots to list the user's uploaded models for easy management. WebCAD can export snapshots of graphic models to image files and save/open data object files. Finally, we test the efficiency of WebCAD and the results are satisfactory.
Lee, Yen-Chin, and 李彥瑾. "Assisted 3D Display System for Endoscopy." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/04546014531040484442.
Full text國立陽明大學
醫學工程研究所
100
Objective: The main purpose of this study is to construct a 3D imaging system for endoscopy in which an external image processing device allows the doctor to obtain depth information from a monocular imaging system. With this device, medical staff can use a head-mounted display (HMD) or a 3D monitor to see a three-dimensional image, and assistants can use the same devices to see the 3D image of the surgical procedure in situ. Compared with the traditional approach, the operators can view a real-time 3D video image that gives them a better sense of depth during the operation; with this sense of depth, surgery can be performed more precisely and the operation time can be shortened. Method: Our test video was provided by Taipei Veterans General Hospital. The software is written in C++ and OpenGL to obtain the depth map; our specular-removal algorithm is then used to remove the specular highlights. After that, we use the OpenGL library to combine the original image with the depth map to construct a 3D model, which helps us verify the accuracy of the depth map. After the simulation, we write Verilog code to implement the algorithm on an Altera DE2-115 FPGA development board and use a head-mounted display to view the stereo image. Result: Through computer simulation, we have shown that our algorithm can efficiently remove the specular noise that interferes with reflection-based depth information. Using the head-mounted display, we compared the depth maps with and without specular removal and found that the specular interference can be removed effectively, making the stereo image clearer. Conclusion: The 3D model generated by the proposed method shows that, after our specular-removal algorithm, the reflection-based depth information is more accurate. On the FPGA device, we can also observe the difference between images with and without specular highlights, and clear stereo images can be obtained.
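A minimal sketch of one common way to suppress specular highlights before depth estimation: detect bright, low-saturation pixels in HSV space and inpaint them. The thresholds and the use of OpenCV's Telea inpainting are assumptions chosen for illustration; the thesis's own specular-removal algorithm and its Verilog implementation are not reproduced here.

```python
import cv2
import numpy as np

def remove_specular(bgr, value_thresh=230, sat_thresh=40):
    """Suppress specular highlights in an endoscopic frame (illustrative sketch).

    Highlights are detected as bright, low-saturation pixels in HSV space and
    then filled in with OpenCV's Telea inpainting.
    """
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    h, s, v = cv2.split(hsv)
    mask = ((v > value_thresh) & (s < sat_thresh)).astype(np.uint8) * 255
    # slightly grow the mask so highlight rims are also replaced
    mask = cv2.dilate(mask, np.ones((3, 3), np.uint8), iterations=2)
    return cv2.inpaint(bgr, mask, inpaintRadius=3, flags=cv2.INPAINT_TELEA)
```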
Su, Zhi-Ping, and 蘇治平. "3D Image for Optical Measurement System." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/07438875657971240900.
Full text國立中正大學
機械工程所
94
This thesis proposes a fast optical measurement system for measuring large objects. The system, composed of a charge-coupled device (CCD) and a digital light processing (DLP) projector at a calibration plane, converts object measurements into digital form for further use. It is based on a mapping function that transforms the two-dimensional image into three-dimensional data in order to obtain better experimental results. A Gaussian filter and image normalization are used to improve image quality and to suppress noise and non-uniform illumination. With a high-quality image, it is easier to find the light plane using a center-of-area defuzzifier, which takes the location of the largest slope as the boundary point; this method yields an accurate boundary point, which is then transformed into three-dimensional data. Various objects were measured empirically. Because a single measurement cannot capture the entire object, it is necessary to move or rotate the measured object to achieve full coverage; the iterative closest point (ICP) registration method is therefore used to integrate the partial data sets into one complete data set.
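The light-plane localization described above can be illustrated with a small center-of-area computation: for each image column, the sub-pixel row of the projected light stripe is taken as the intensity-weighted centroid of its bright pixels. The threshold fraction and the single-stripe assumption are mine; mapping these image points to 3D through the calibrated mapping function is a separate step not shown here.

```python
import numpy as np

def stripe_centers(gray, intensity_floor=0.3):
    """Locate the projected light stripe in each image column (sketch).

    gray: (H, W) float image in [0, 1] containing one bright stripe.
    For every column, pixels above a fraction of the column maximum are treated
    as the stripe; their intensity-weighted centroid (a center-of-area value)
    gives a sub-pixel row position.  Columns without a stripe return NaN.
    """
    h, w = gray.shape
    rows = np.arange(h, dtype=float)
    centers = np.full(w, np.nan)
    for x in range(w):
        col = gray[:, x]
        peak = col.max()
        if peak <= 0:
            continue
        weights = np.where(col >= intensity_floor * peak, col, 0.0)
        total = weights.sum()
        if total > 0:
            centers[x] = (rows * weights).sum() / total
    return centers
```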
Hu, Ching-Hsin, and 胡金星. "White Light 3D Profile Measurement System." Thesis, 2000. http://ndltd.ncl.edu.tw/handle/71720867553743230880.
Full text中原大學
機械工程學系
88
This study uses white light as the source in a grating projection device, together with a precision stepping motor that shifts the grating. Phase modulation is used to measure the object's profile, and a space-perspective method is used to calibrate the image in space; together these efforts form a white-light 3D profile measurement system. For measurement and calibration, the space-perspective method builds on full-field image capture of the 3D profile: a standard rectangular specimen is used to generate a functional relationship between the space coordinates and the CCD image plane. To rebuild the 3D profile of an object, the image data from two CCD cameras are overlapped and smoothed. The function generated by the space-perspective method simplifies the calibration of the parameters in triangulation, so the positions of the measuring devices and the light screen need not be precisely known; only a two-axis plane is needed to establish a usable measuring space during calibration, which makes the method suitable for calibration at arbitrary positions. To measure a 360° 3D profile, four detectors arranged perpendicular to each other scan the object's surface and build a 360° surface profile. The measurement results of the developed white-light 3D profile measurement system show an accuracy of 0.1 mm at a system resolution of 0.1 mm/pixel, 0.4 mm at a resolution of 0.2 mm/pixel, and a repeatability of 0.1 mm, so the method is suitable for surface measurement applications.
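The phase-modulation measurement can be written down compactly for the common four-step phase-shifting scheme; the thesis does not state the number of steps, so four equally spaced shifts are assumed here. The wrapped phase follows directly from the four grating images, and unwrapping plus the space-perspective calibration then turn phase into height.

```python
import numpy as np

def wrapped_phase(i1, i2, i3, i4):
    """Wrapped phase from four grating images shifted by 90 degrees each.

    i1..i4: (H, W) float intensity images at phase shifts 0, pi/2, pi, 3*pi/2.
    Returns the wrapped phase in (-pi, pi]; converting phase to height via the
    calibrated mapping is a separate step.
    """
    return np.arctan2(i4 - i2, i1 - i3)

def unwrapped_phase(phase):
    """Simple 2D unwrapping by applying NumPy's 1D unwrap row-wise then column-wise."""
    return np.unwrap(np.unwrap(phase, axis=1), axis=0)
```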
Huang, Po-Hua, and 黃柏華. "Efficient 3D/2D Polygon Metamorphosis System." Thesis, 2001. http://ndltd.ncl.edu.tw/handle/77157766556776198674.
Full text國立成功大學
資訊工程研究所
89
Morphing has received considerable attention in computer graphics and image processing and has become a standard technique in the movie and entertainment industry. Although computer-generated images rendered from true 3D models are common today, the majority of methods developed so far focus on interpolating between 2D images. In this thesis, we present new methods for the two main problems of morphing: (1) the correspondence problem and (2) the interpolation problem. The algorithms we propose are easy to implement and generate fine results efficiently. We present a new approach for interpolating two 2D polygons: using only a stick structure (linear interpolation of the length and the angle of each stick) and a hierarchical structure, we generate shape results as good as those of [3]. However, [3] uses not only the singular value decomposition (SVD) but also least squares to obtain the optimal interpolation matrix, so our algorithm is more efficient. We further pose three particularly difficult interpolation problems and show that our algorithm solves them easily. We also present an efficient approach for generating the correspondence between two homeomorphic 3D polyhedral models. The user can select vertices on the polyhedra to decompose the boundary of each polyhedron into the same number of morphing patches, and can specify feature points on the morphing patch pairs to improve the morph. After the morphing patch pairs are mapped to 2D regular polygons, they are merged and reconstructed to generate the morph. Within the main procedures of our approach, we propose a simple mapping method, a foldover-free warping technique, and an efficient merging algorithm whose merging step completes in O(n+k) time.
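A flat (non-hierarchical) sketch of the stick idea described above: each polygon edge's length and angle are interpolated linearly, and the intermediate polygon is rebuilt by accumulating the interpolated edges. The hierarchy and crack handling from the thesis are omitted, and a one-to-one vertex correspondence is assumed to be given.

```python
import numpy as np

def interpolate_polygon(src, dst, t):
    """Intrinsic interpolation of two corresponding 2D polygons (sketch).

    src, dst: (N, 2) vertex arrays with one-to-one correspondence; t in [0, 1].
    Edge lengths and absolute angles are interpolated linearly and the polygon
    is rebuilt by accumulating the interpolated edges from the first vertex.
    Note: the rebuilt polygon may not close exactly; the thesis's hierarchical
    structure addresses such cracks.
    """
    e_src = np.diff(np.vstack([src, src[:1]]), axis=0)   # closed-polygon edges
    e_dst = np.diff(np.vstack([dst, dst[:1]]), axis=0)
    len_src = np.linalg.norm(e_src, axis=1)
    len_dst = np.linalg.norm(e_dst, axis=1)
    ang_src = np.arctan2(e_src[:, 1], e_src[:, 0])
    ang_dst = np.arctan2(e_dst[:, 1], e_dst[:, 0])
    # interpolate each angle along the shorter rotation direction
    d_ang = (ang_dst - ang_src + np.pi) % (2 * np.pi) - np.pi
    length = (1 - t) * len_src + t * len_dst
    angle = ang_src + t * d_ang
    edges = np.stack([length * np.cos(angle), length * np.sin(angle)], axis=1)
    start = (1 - t) * src[0] + t * dst[0]
    return start + np.vstack([np.zeros(2), np.cumsum(edges[:-1], axis=0)])
```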
Jun-YiWu and 吳俊毅. "A 3D Interactive Tubular Object System." Thesis, 2015. http://ndltd.ncl.edu.tw/handle/z6974g.
Full text國立成功大學
電腦與通信工程研究所
103
This study focuses on a computer vision application for human tubular organs that have been converted into computed image data and center-line data. We use these data to build a detailed 3D medical image with the Marching Cubes algorithm and develop an interactive system that helps people learn about tubular organs on the computer. Beyond learning, the system can help medical personnel detect pathological changes in several ways. Furthermore, the system uses an extrusion technique along the center line to build a 3D model of the tubular organ; this model has a better visualization effect and is helpful for the development of e-learning in our system.
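For the surface-building step, the sketch below runs the Marching Cubes algorithm on a segmented volume using scikit-image. The toy synthetic tube, the iso level, and the function name are assumptions; the interactive interface and the center-line extrusion part of the system are not shown.

```python
import numpy as np
from skimage import measure

def tubular_surface(volume, iso_level=0.5):
    """Extract a triangle mesh from a segmented tubular-organ volume (sketch).

    volume: (D, H, W) float/binary array, e.g. a CT segmentation mask.
    Returns vertices and faces that can be passed to any mesh viewer.
    """
    verts, faces, normals, values = measure.marching_cubes(volume, level=iso_level)
    return verts, faces

if __name__ == "__main__":
    # toy example: a synthetic tube along the z axis
    z, y, x = np.mgrid[0:64, 0:64, 0:64]
    tube = (((x - 32) ** 2 + (y - 32) ** 2) < 10 ** 2).astype(float)
    v, f = tubular_surface(tube)
    print(v.shape, f.shape)
```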
Lin, Wen-Sheng, and 林文勝. "3D Human Portrait Caricature Modeling System." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/3ymd68.
Full text
Wu, Yi-Chin, and 吳宜瑾. "AR-assisted 3D Object Reconstruction System." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/mvu734.
Full text國立交通大學
多媒體工程研究所
107
Many crucial applications in the fields of filmmaking, game design, education, and cultural preservation -- among others -- involve the modeling, authoring, or editing of 3D objects and scenes. The two major methods of creating 3D models are 1) modeling, using computer software, and 2) reconstruction, generally using high-quality 3D scanners. Scanners of sufficient quality to support the latter method remain unaffordable to the general public. Since the emergence of consumer-grade RGBD cameras, there has been a growing interest in 3D reconstruction systems using depth cameras. However, most such systems are not user-friendly, and require intense efforts and practice if good reconstruction results are to be obtained. In this paper, we propose to increase the accessibility of depth-camera-based 3D reconstruction by assisting its users with augmented reality (AR) technology. Specifically, the proposed approach will allow users to rotate/move a target object freely with their hands and see the object being overlapped with its reconstructing model during the reconstruction process. As well as being more instinctual than conventional reconstruction systems, our proposed system will provide useful hints on complete 3D reconstruction of an object, including the best capturing range; reminder of moving and rotating the object at a steady speed; and which model regions are complex enough to require zooming-in. We evaluated our system via a user study that compared its performance against those of three other state-of-the-art approaches, and found our system outperforms the other approaches. Specifically, the participants rated it highest in usability, understandability, and model satisfaction.
Chen, Yi-Chun, and 陳易群. "2D to 3D Image Conversion System." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/r7d579.
Full text國立臺北科技大學
電腦與通訊研究所
97
Multi-view perspectives are captured around a scene simultaneously by a camera array; multi-view images not only keep the two-dimensional information but also capture the depth structure of the scene. Most 3D displays support multi-view content, but the large storage and bandwidth required to broadcast it to end users is a bottleneck for 3D TV systems. To save bandwidth, a new TV broadcasting approach transmits the third-dimensional information, i.e. a depth map. The 3D display renders an intermediate image together with its depth map to form the different views and synthesizes these images into one image; this technology is the well-known DIBR (Depth Image Based Rendering). In other words, any 2D content can be transformed into multi-view images according to its corresponding depth map. Our system processes a 2D image to obtain a depth map of the intermediate view during 3D content capture. We use vanishing-point detection and color image segmentation to find the objects and the deepest point (the vanishing point) in the image, and then assign depth values by comparing the vanishing point of each object with that of the image. After the depth map is generated, we propose the Vivid-DIBR system, which imitates how human eyes see things and solves the hole problem (warping error points) by redistributing the corresponding depth map. We implemented Vivid-DIBR with a cell-based design flow in the Taiwan Semiconductor Manufacturing Company (TSMC) 0.18 µm 1P6M process. The design converts 2D images into multi-view images suitable for any interlacing 3D display by adjusting the location of the focal plane.
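A toy version of the depth assignment sketched in this abstract: build a depth ramp from a detected vanishing point (pixels near it are treated as far away) and, if a segmentation label map is supplied, give each color segment a single depth value. The normalization and the near/far convention are assumptions; the vanishing-point detection itself and the Vivid-DIBR hardware are not part of this sketch.

```python
import numpy as np

def depth_from_vanishing_point(height, width, vp, segments=None):
    """Build a coarse depth map from a detected vanishing point (sketch).

    vp: (x, y) vanishing point in pixel coordinates.  Pixels close to it are
    assumed far away (small depth value), pixels far from it near (large value).
    If a label map of color segments is given, each segment receives one depth
    value (its mean), mimicking the object-level assignment described above.
    """
    ys, xs = np.mgrid[0:height, 0:width]
    dist = np.sqrt((xs - vp[0]) ** 2 + (ys - vp[1]) ** 2)
    depth = dist / dist.max()            # 0 = at the vanishing point (far), 1 = near
    if segments is not None:
        out = np.empty_like(depth)
        for label in np.unique(segments):
            mask = segments == label
            out[mask] = depth[mask].mean()
        depth = out
    return depth
```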
Chang, Kuang-Han, and 張光寒. "3D Taiwanese Sign Language Recognition System." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/42730653488975240249.
Full text南台科技大學
電機工程系
95
In order to help hearing-impaired people communicate with others, and to make it easier for special-education staff to learn sign language, we aim to build a sign language recognition system that matches the sign language used in Taiwan. The first step is to create a new kind of data glove that meets our requirements. The design is based on a microcontroller with a built-in A/D converter (PIC16F877) that measures changes in hand gestures; the data are then transmitted to the computer through a USB 2.0 controller chip. We designed a pair of data gloves with a high-speed USB/RS-232 interface and integrated them with software packages such as LabVIEW, EON Studio, and 3D Studio MAX. A neural network algorithm is applied to build the recognition engine. The recognition results for the 26 gestures of the English alphabet (A–Z) are displayed as a 3D animation of hand movements on the monitor, with text and voice output played at the same time. By integrating all the hardware and software, a complete 3D Taiwanese Sign Language recognition system is implemented to help hearing-impaired people communicate with others.
Huang, Shih Hsuan, and 黃士軒. "2D + 3D Animal Makeup Synthesis System." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/df92c6.
Full text國立暨南國際大學
資訊工程學系
107
When I was a child, not more than twenty years ago, it was not easy to record our daily life. I remember once my family wanted to take a family photo: first you needed to buy a camera and load a non-reusable film, you could not modify a photo after taking it, and finally you had to go to a photo studio to have the photos developed. Today, with the progress of technology, smartphones have almost replaced cameras in our lives, and the impact of new imaging technologies is apparent. When you take a photo with your smartphone, you can immediately adjust its lighting, colors, and tones on the screen. Since almost everyone wants to be more attractive, the purpose of these adjustments is mainly to beautify the photos. Some people may even want to add various animal make-up effects to their photos using beauty and make-up camera applications such as SNOW and B612. However, these animal make-up effects are two-dimensional and simply texture-mapped; they are not really integrated with the photos. Therefore, in this project, we aim to develop a system that creates a personal image with three-dimensional, photorealistic animal make-up effects. To achieve this goal, several technologies are adopted, including mesh warping, human face detection, and 3D face reconstruction. The experimental results show that our system can successfully create interesting and vivid 3D animal make-up effects on users' portraits.
Chang, Cheng-Kang, and 詹政剛. "3D Virtual Reality Parking Monitoring System." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/10525301427755204144.
Full text健行科技大學
資訊工程所
101
We use image recognition and 3D virtual reality technology to map information such as the license plate, brand and model, parking position, and status into a real-time 3D simulation of the parking environment. First, vehicle images are captured by a camera as vehicles enter the parking area, and vehicle features such as the plate, appearance, color, and brand are turned into identifiable data. These data are then imported into the virtual reality scene to build the initial vehicle model. When a vehicle enters the parking area, the interior cameras start filming its movement, which is translated into the corresponding position in the 3D virtual reality environment, so that managers can follow the entire parking area in real time in 3D. In this thesis, the real-time position of a moving vehicle is obtained by binarizing the camera images, and the coordinates in the real parking space are then recovered through position estimation and the properties of similar triangles, so that the latest movement is displayed in the 3D virtual reality scene. With the 3D virtual reality parking system, managers are no longer bound to a single camera's view: they can browse the floor plan directly and switch freely between different third-person views of the simulated parking scene.
Chen, Kuan-Yi, and 陳冠諭. "3D Image Composition using Kinect System." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/36236013864470282300.
Full text國立交通大學
電機學院電信學程
104
Chroma keying (scene composition) is a popular technique in TV and movie production. Typically, it merges the foreground from one scene with the background from another. The foreground is often shot in a virtual studio whose floor, walls, and ceiling are painted a specific green color, so that the background can easily be replaced by another scene. Here, we want to do the same scene composition on two arbitrary images. In this process, we need to extract objects from the foreground scene and then place the extracted foreground on the background scene. In this thesis, we use a Microsoft Kinect 2 device to capture the foreground scene. The depth image produced by the Kinect 2 facilitates the object extraction process; however, the captured depth map contains occlusion regions and noise (missing depth pixels). We use an iterative median filter to fill the holes (missing pixels). For foreground extraction, we adopt the popular Otsu method in the histogram domain. In addition, we adopt a trimap description of the object: the depth map is partitioned into three areas, background, foreground, and unknown (between foreground and background), where the unknown area is identified by the Sobel operator, which detects the object boundaries. We replace the background using the alpha channel technique. Because we derive a trimap for the foreground object, the unknown area in the final composition consists of blended pixels from the foreground and background images. We found the visual results to be more vivid and appealing.
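The pipeline in this abstract (iterative median filtering of depth holes, Otsu thresholding of the depth histogram, a Sobel-derived unknown band, and alpha blending) can be sketched roughly as below with OpenCV. The iteration count, band width, the fixed 0.5 alpha inside the unknown band, and the assumption that the foreground is the nearer depth layer are all simplifications, not the thesis's exact method.

```python
import cv2
import numpy as np

def composite_with_kinect_depth(color, depth, background, band=7):
    """Foreground/background composition from a color+depth pair (sketch).

    color, background: (H, W, 3) uint8 images of the same size;
    depth: (H, W) Kinect depth with 0 where the measurement is missing.
    """
    d = depth.astype(np.float32)
    for _ in range(5):                               # iterative hole filling
        filled = cv2.medianBlur(d, 5)
        d = np.where(d == 0, filled, d)
    d8 = cv2.normalize(d, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    # Otsu threshold on the depth histogram; nearer (smaller) depths = foreground
    _, fg = cv2.threshold(d8, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    # unknown trimap region: dilated object boundary detected with Sobel gradients
    gx = cv2.Sobel(fg, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(fg, cv2.CV_32F, 0, 1)
    edges = cv2.magnitude(gx, gy)
    unknown = cv2.dilate((edges > 0).astype(np.uint8), np.ones((band, band), np.uint8))
    alpha = (fg > 0).astype(np.float32)
    alpha = np.where(unknown > 0, 0.5, alpha)        # soft blend inside the trimap band
    alpha = alpha[..., None]
    return (alpha * color + (1 - alpha) * background).astype(np.uint8)
```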
Hsiang, Hsien-Wei, and 向賢偉. "A 3D Navigation System Using OpenGL." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/38665113846783167033.
Full text淡江大學
資訊管理學系碩士班
96
In this thesis, 3D APIs for the Java environment are compared; their differences, as well as related game engine technologies, are discussed, and some points on object-oriented development are proposed. An interactive 3D navigation system is implemented in a manner different from the scene graph of Java3D. Our implementation includes the composition of basic objects, the construction of camera objects, support for external objects and action objects, and the structure of the scene graph. Moreover, users are allowed to add and delete objects.