
Dissertations / Theses on the topic '3D analysis'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic '3D analysis.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Chykeyuk, Kiryl. "Analysis of 3D echocardiography." Thesis, University of Oxford, 2014. http://ora.ox.ac.uk/objects/uuid:823cd243-5d48-4ecc-90e7-f56d49145be8.

Full text
Abstract:
Heart disease is the major cause of death in the developed world. Due to its fast, portable, low-cost and harmless way of imaging the heart, echocardiography has become the most frequent tool for diagnosis of cardiac function in clinical routine. However, visual assessment of heart function from echocardiography is challenging, highly operator-dependent and subject to intra- and inter-observer errors. Therefore, the development of automated methods for echocardiography analysis is important for accurate assessment of cardiac function. In this thesis we develop new ways to model echocardiography data using Bayesian machine learning methods and address three problems: (i) wall motion analysis in 2D stress echocardiography, (ii) segmentation of the myocardium in 3D echocardiography, and (iii) standard views extraction from 3D echocardiography. Firstly, we propose and compare four discriminative methods for feature extraction and wall motion classification of 2D stress echocardiography (images of the heart taken at rest and after exercise or pharmacological stress). The four methods are based on (i) Support Vector Machines, (ii) Relevance Vector Machines, (iii) the Lasso algorithm and Regularised Least Squares, and (iv) Elastic Net regularisation and Regularised Least Squares. Although all the methods are shown to have superior performance to the state-of-the-art, one conclusion is that good segmentation of the myocardium in echocardiography is key for accurate assessment of cardiac wall motion. We investigate the application of one of the most promising current machine learning techniques, Decision Random Forests, to segment the myocardium from 3D echocardiograms. We demonstrate that more reliable and ultrasound-specific descriptors are needed in order to achieve the best results. Specifically, we introduce two sets of new features to improve the segmentation results: (i) LoCo and GloCo features with a local and a global shape constraint on coupled endo- and epicardial boundaries, and (ii) FA features, which use the Feature Asymmetry measure to highlight step-like edges in echocardiographic images. We also reinforce traditional features such as Haar and Rectangular features by aligning 3D echocardiograms. For that we develop a new registration technique, based on aligning the centre lines of the left ventricles. We show that with alignment, performance is boosted by approximately 15%. Finally, a novel approach to detect planes in 3D images using regression voting is proposed. To the best of our knowledge we are the first to use a one-step regression approach for the task of plane detection in 3D images. We investigate the application to standard views extraction from 3D echocardiography to facilitate efficient clinical inspection of cardiac abnormalities and diseases. We further develop a new method, the Class-Specific Regression Forest, in which class label information is incorporated into the training phase to reinforce learning from classes that are semantically relevant to the problem. During testing, votes from irrelevant classes are excluded to maximise the confidence of the output predictors. We demonstrate that the Class-Specific Regression Random Forest outperforms the classic Regression Random Forest and produces results comparable to the manual annotations.
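As a rough illustration of the forest-based voxel classification this abstract describes, the sketch below trains a random forest on generic per-voxel features and predicts a segmentation mask. It is a minimal sketch only: the features (intensity, local mean, gradient magnitude), the synthetic volume and the labels are placeholders, not the LoCo/GloCo or FA descriptors developed in the thesis.

```python
# Minimal sketch of forest-based voxel classification (illustrative only;
# the thesis uses purpose-built ultrasound descriptors, not these features).
import numpy as np
from scipy import ndimage
from sklearn.ensemble import RandomForestClassifier

def voxel_features(volume):
    """Stack a few generic per-voxel features: intensity, local mean, gradient magnitude."""
    smooth = ndimage.uniform_filter(volume, size=3)
    grad = ndimage.gaussian_gradient_magnitude(volume, sigma=1.0)
    return np.stack([volume, smooth, grad], axis=-1).reshape(-1, 3)

rng = np.random.default_rng(0)
volume = rng.random((32, 32, 32))                  # placeholder echo volume
labels = (volume > 0.5).astype(int).ravel()        # placeholder myocardium labels

X = voxel_features(volume)
forest = RandomForestClassifier(n_estimators=50, max_depth=12, random_state=0)
forest.fit(X, labels)
mask = forest.predict(X).reshape(volume.shape)     # predicted segmentation mask
print("foreground voxels:", int(mask.sum()))
```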
APA, Harvard, Vancouver, ISO, and other styles
2

Peppa, Maria Valasia. "Precision analysis of 3D camera." Thesis, KTH, Geodesi och geoinformatik, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-131457.

Full text
Abstract:
Three-dimensional mapping is becoming an increasingly attractive product nowadays. Many devices, such as laser scanners or stereo systems, provide 3D scene reconstruction. A new type of active sensor, the Time of Flight (ToF) camera, obtains direct depth observations (the third dimensional coordinate) at a high video rate, which is useful for interactive robotic and navigation applications. The high frame rate combined with the low weight and compact design makes the ToF camera an alternative 3D measuring technology. However, a deep understanding of the errors involved in the ToF camera observations is essential in order to improve their accuracy and enhance the ToF camera performance. This thesis addresses the depth error characteristics of the SR4000 ToF camera and indicates potential error models for compensating their impact. In the beginning of the work the thesis investigates the error sources, their characteristics and how they influence the depth measurements. In the practical part, the work covers the above analysis via experiments. Last, the work proposes simple methods to reduce the depth error so that the ToF camera can be used for high-accuracy applications. An overall result of the work indicates that the depth acquired by the Time of Flight (ToF) camera deviates by several centimetres; specifically, the SR4000 camera shows a 35 cm error over the working range of 1-8 m. After the error compensation the depth offset fluctuates within 15 cm over the same working range. The error is smaller when the camera is set up close to the test field than when it is further away.
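The kind of distance-dependent depth-error compensation the abstract describes can be sketched as a simple curve fit; the calibration data and the cubic model below are assumptions for illustration, not the SR4000 error model derived in the thesis.

```python
# Sketch of a distance-dependent depth-error correction for a ToF camera.
# The calibration data below are made up; the thesis derives its own model
# for the SR4000 from controlled experiments.
import numpy as np

true_d = np.linspace(1.0, 8.0, 15)                                    # reference distances [m]
measured = true_d + 0.035 * np.sin(2 * np.pi * true_d / 2.5) + 0.02   # synthetic biased readings

coeffs = np.polyfit(measured, true_d - measured, deg=3)   # error as a function of raw depth
correct = lambda d: d + np.polyval(coeffs, d)             # apply the fitted compensation

raw = 4.37                                                # one raw depth reading [m]
print(f"raw {raw:.3f} m -> corrected {correct(raw):.3f} m")
```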
APA, Harvard, Vancouver, ISO, and other styles
3

Amin, Syed Hassan. "Analysis of 3D face reconstruction." Thesis, Imperial College London, 2009. http://hdl.handle.net/10044/1/6163.

Full text
Abstract:
This thesis investigates the long standing problem of 3D reconstruction from a single 2D face image. Face reconstruction from a single 2D face image is an ill posed problem involving estimation of the intrinsic and the extrinsic camera parameters, light parameters, shape parameters and the texture parameters. The proposed approach has many potential applications in the law enforcement, surveillance, medicine, computer games and the entertainment industries. This problem is addressed using an analysis by synthesis framework by reconstructing a 3D face model from identity photographs. The identity photographs are a widely used medium for face identification and can be found on identity cards and passports. The novel contribution of this thesis is a new technique for creating 3D face models from a single 2D face image. The proposed method uses the improved dense 3D correspondence obtained using rigid and non-rigid registration techniques. The existing reconstruction methods use the optical flow method for establishing 3D correspondence. The resulting 3D face database is used to create a statistical shape model. The existing reconstruction algorithms recover shape by optimizing over all the parameters simultaneously. The proposed algorithm simplifies the reconstruction problem by using a step wise approach thus reducing the dimension of the parameter space and simplifying the optimization problem. In the alignment step, a generic 3D face is aligned with the given 2D face image by using anatomical landmarks. The texture is then warped onto the 3D model by using the spatial alignment obtained previously. The 3D shape is then recovered by optimizing over the shape parameters while matching a texture mapped model to the target image. There are a number of advantages of this approach. Firstly, it simplifies the optimization requirements and makes the optimization more robust. Second, there is no need to accurately recover the illumination parameters. Thirdly, there is no need for recovering the texture parameters by using a texture synthesis approach. Fourthly, quantitative analysis is used for improving the quality of reconstruction by improving the cost function. Previous methods use qualitative methods such as visual analysis, and face recognition rates for evaluating reconstruction accuracy. The improvement in the performance of the cost function occurs as a result of improvement in the feature space comprising the landmark and intensity features. Previously, the feature space has not been evaluated with respect to reconstruction accuracy thus leading to inaccurate assumptions about its behaviour. The proposed approach simplifies the reconstruction problem by using only identity images, rather than placing effort on overcoming the pose, illumination and expression (PIE) variations. This makes sense, as frontal face images under standard illumination conditions are widely available and could be utilized for accurate reconstruction. The reconstructed 3D models with texture can then be used for overcoming the PIE variations.
APA, Harvard, Vancouver, ISO, and other styles
4

Deighton, M. J. "3D texture analysis in seismic data." Thesis, University of Surrey, 2006. http://epubs.surrey.ac.uk/842764/.

Full text
Abstract:
The use of hydrocarbons is ubiquitous in modern society, from fuel to raw materials. Seismic surveys now routinely produce large, volumetric representations of the Earth's crust. Human interpretation of these surveys plays an important part in locating oil and gas reservoirs; however, it is a lengthy and time-consuming process. Methods that provide semi-automated aid to the interpreter are highly sought after. In this research, texture is identified as a major cue to interpretation. A local gradient density method is then employed for the first time with seismic data to provide volumetric texture analysis. Extensive experiments are undertaken to determine parameter choices that provide good separation of seismic texture classes according to the Bhattacharyya distance. A framework is then proposed to highlight regions of interest in a survey with high confidence based on texture queries by an interpreter. The interpretation task of seismic facies analysis is then considered and its equivalence with segmentation is established. Since the facies units may take a range of orientations within the survey, sensitivity of the analysis to rotation is considered. As a result, new methods based on alternative gradient estimation kernels and data realignment are proposed. The feature based method with alternative kernels is shown to provide the best performance. Achieving high texture label confidence requires large local windows and is in direct conflict with the need for small windows to identify fine detail. It is shown that smaller windows may be employed to achieve finer detail at the expense of label confidence. A probabilistic relaxation scheme is then described that recovers the label confidence whilst constraining texture boundaries to be smooth at the smallest scale. Testing with synthetic data shows reductions in error rate by up to a factor of 2. Experiments with seismic data indicate that more detailed structure can be identified using this approach.
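For reference, the Bhattacharyya distance used here to score class separation can be computed as follows for two Gaussian texture-feature classes; the feature vectors in this sketch are synthetic stand-ins for the gradient-density descriptors of the thesis.

```python
# Bhattacharyya distance between two Gaussian texture classes (feature vectors
# are synthetic here; the thesis computes it on gradient-density descriptors).
import numpy as np

def bhattacharyya(mu1, cov1, mu2, cov2):
    cov = 0.5 * (cov1 + cov2)
    diff = mu2 - mu1
    term1 = 0.125 * diff @ np.linalg.solve(cov, diff)
    term2 = 0.5 * np.log(np.linalg.det(cov) /
                         np.sqrt(np.linalg.det(cov1) * np.linalg.det(cov2)))
    return term1 + term2

rng = np.random.default_rng(1)
class_a = rng.normal(0.0, 1.0, size=(500, 4))   # texture-feature samples, class A
class_b = rng.normal(0.8, 1.2, size=(500, 4))   # texture-feature samples, class B

d = bhattacharyya(class_a.mean(0), np.cov(class_a.T),
                  class_b.mean(0), np.cov(class_b.T))
print(f"Bhattacharyya distance: {d:.3f}")       # larger means better class separation
```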
APA, Harvard, Vancouver, ISO, and other styles
5

Rajpoot, Kashif. "Multi-view 3D Echocardiographic image analysis." Thesis, University of Oxford, 2009. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.510207.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Spentzos, Agis. "CFD analysis of 3D dynamic stall." Thesis, University of Glasgow, 2005. http://theses.gla.ac.uk/1855/.

Full text
Abstract:
Focusing on helicopter aerodynamics, it is known that the aerodynamic performance of the retreating side of a rotor disk is mainly dictated by the stall characteristics of the blade. Stall under dynamic conditions (Dynamic Stall) is the dominant phenomenon encountered on heavily loaded fast-flying rotors, resulting in extra lift and excessive pitching moments. Dynamic stall (DS) can be idealised as the pitching motion of a finite wing, and this is the focus of the present work, which includes three main stages. First, comparisons between available experimental data and CFD simulations were performed for 3D DS cases. This work is the first detailed CFD study of 3D Dynamic Stall and has produced results indicating that DS can be predicted and analysed using CFD. The CFD results were validated against all known experimental investigations. In addition, a comprehensive set of CFD results was generated and used to enhance our understanding of 3D DS. Straight, tapered and swept-tip wings of various aspect ratios were used at a range of Reynolds and Mach numbers and flow conditions. For all cases where experimental data were available, effort was made to obtain the original data and process these in exactly the same way as the CFD results. Special care was taken to represent exactly the motion of the lifting surfaces, their geometry and the boundary conditions of the problem. Secondly, the evolution of the Ω-shaped DS vortex observed in experimental works, as well as its interaction with the tip vortices, was investigated. Both pitching and pitching/rotating blade conditions were considered. Finally, the potential of training a neural network as a model for DS was assessed in an attempt to reduce the CPU time required for modelling 3D DS. Neural networks have a proven track record in applications involving pattern recognition but have so far seen little application in unsteady aerodynamics. In this work, two different NN models were developed and assessed in a variety of conditions involving DS. Both experimental and CFD data were used during these investigations. The dependence of the quality of the NN predictions on the choice of the training data was then assessed and thoughts towards the correct strategy behind this choice were laid out.
APA, Harvard, Vancouver, ISO, and other styles
7

Liu, Junjie. "3D laser scanner development and analysis." Thesis, Aberystwyth University, 2013. http://hdl.handle.net/2160/b3a1beca-3d92-48bc-945e-2e50b3e7755a.

Full text
Abstract:
This PhD project is a collaboration between Smart Light Devices, Ltd. in Aberdeen and Aberystwyth University on the development of 3D laser scanners, with the ultimate aim of inspecting underwater oil and gas pipes or structures. At the end of this project, a workable and fully functional 3D laser scanner is to be developed. This PhD project puts a particular emphasis on the engineering and implementation of the scanner according to real applications' requirements. Our 3D laser scanner is based on the principle of triangulation and its high accuracy over short-range scanning. Accurate 3D data can be obtained from the triangle formed by the camera lens, the laser source, and the object being scanned. Once the distance between the scanner camera lens and the laser source (the stereo baseline) is known and the laser projection angle is measured by the goniometer, all the X, Y, Z coordinates of the object surface can be obtained through trigonometry. The development of this 3D laser scanner involves a number of issues and tasks, including image noise removal, laser peak detection, corner detection, camera calibration and 3D reconstruction. These issues and tasks have been addressed, analysed and improved during the PhD period. Firstly, Sparse Code Shrinkage (SCS) image de-noising is implemented, since it is one of the most suitable de-noising methods for our laser images with a dark background and a white laser stripe. Secondly, since there are already plenty of methods for corner and laser peak detection, it is necessary to compare and evaluate which is the most suitable for our 3D laser scanner. Thus, comparative studies are carried out and their results are presented in this thesis. Thirdly, our scanner is based on laser triangulation; in this case, the laser projection angle α and the baseline distance D from the centre of the camera lens to the laser source play a crucial role in 3D reconstruction. However, these two parameters are hard to measure directly, and there are no particular tools designed for this purpose. Thus, a new approach is proposed in this thesis to estimate them, combining camera calibration results with a precise linear stage. Fourthly, it is very expensive to customize an accurate positional pattern for camera calibration; due to budget limits, this pattern is printed by a printer or even painted on a paper or white board, which is inaccurate and contains errors in absolute distance and location. An iterative camera calibration method is proposed. It can compensate up to 10% error and the calibration parameters remain stable. Finally, in underwater applications, the light travel angle changes from water to air, which makes the normal calibration method less accurate. Hence, a new approach is proposed to compensate between the estimated and real distance in 3D reconstruction with normal calibration parameters. Experimental results show the proposed methods reduce the distance error in 3D down to ±0.2 mm underwater. Overall, the developed scanning systems have been successfully applied in several real scanning and 3D modelling projects such as mooring chains, underwater pipeline surfaces and reducers. Positive feedback has been received from these projects, and the scanning results satisfy the resolution and accuracy requirements.
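The triangulation geometry mentioned in the abstract can be sketched with an idealized pinhole model; the focal length, baseline and projection angle below are made-up values, and the real scanner's calibration is considerably more involved.

```python
# Idealized laser-triangulation range equation (pinhole camera, laser source
# offset by baseline b along the camera x-axis, projection angle alpha measured
# from the baseline). A real scanner's calibration is more involved than this.
import numpy as np

def triangulate(x_px, y_px, f_px, baseline_m, alpha_rad):
    """Return the (X, Y, Z) point seen at pixel offset (x_px, y_px) from the principal point."""
    Z = baseline_m * f_px / (x_px + f_px / np.tan(alpha_rad))
    return np.array([x_px * Z / f_px, y_px * Z / f_px, Z])

# Example: 1200 px focal length, 150 mm baseline, laser plane at 60 degrees.
point = triangulate(x_px=85.0, y_px=-12.0, f_px=1200.0,
                    baseline_m=0.150, alpha_rad=np.radians(60.0))
print("object point [m]:", np.round(point, 4))
```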
APA, Harvard, Vancouver, ISO, and other styles
8

Thompson, Darren. "3D image analysis of foot wounds." Thesis, Ulster University, 2013. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.646858.

Full text
Abstract:
Foot wounds are a debilitating and potentially fatal consequence of diabetes. Assessment of foot wounds in clinical or research settings is often based on subjective human judgement which does not involve quantitative measurement. When measurement is conducted, it takes the form of ruler-based estimations of length and width to approximate perimeter or area. To monitor wound healing and make informed treatment decisions, clinicians require accurate and appropriate measurements of wound parameters. Effective wound assessment requires imaging and software techniques which enable objective identification of wound tissues and three-dimensional measurements of wound size. Pilot classification studies were carried out using a selection of six stock wound images. Ground truth was provided by a specialist practitioner in podiatry. Three supervised classifiers were compared. Maximum likelihood was found to be the most suitable for wound classification. Performance of the supervised Maximum Likelihood classifier (MLC), the unsupervised Expectation Maximisation (EM) algorithm and a hybrid MLC-EM method were compared. No method was found to perform significantly better than the others. Context classification was implemented via probabilistic relaxation labelling. It was found that classification accuracy was typically improved by 0.5-1.5%. A method of including depth information in the classification process was proposed and evaluated. Simulated 3D wound volumes were imaged and combined with simulated tissue colours sampled from real images. Classification using depth improved accuracy at low weightings when included in the Maximum Likelihood classifier. To facilitate the further development and evaluation of novel wound assessment algorithms, a set of clinical foot wound data was imaged using 3D stereophotogrammetry. A group of clinicians assessed the data to identify the tissues contained within each wound image. The level of agreement between them was evaluated. Supervised, unsupervised and hybrid classification algorithms were also used to classify the data and the results were evaluated by comparison to the group of clinicians. Novel methods of measuring the volume and surface area of wounds were developed and validated using simulated models before being applied to wound data. The results of tissue classification were plotted against the results of volume measurement in order to observe any trends in the healing process. Supervised Maximum Likelihood classification was found to produce results which agreed with clinicians to approximately the same level as they agreed with each other, indicating that automated classification may have a future role in wound research and clinical diagnosis. The supervised method resulted in agreement with clinicians of 75.5%, which was significantly higher than agreement for the unsupervised or hybrid methods, at 65.9% and 64.6% respectively. The inclusion of tissue depth in the classification process produced some positive results. The surface area and volume measurement methods were found to be accurate for all but the smallest of wound sizes and capable of tracking changes in real wounds.
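A minimal sketch of the supervised maximum-likelihood tissue classification described above, assuming Gaussian colour statistics per tissue class; the class means, covariances and pixels are synthetic placeholders rather than the clinical data used in the thesis.

```python
# Sketch of supervised maximum-likelihood (Gaussian) classification of wound-tissue
# pixels from colour features. Class statistics here are synthetic placeholders.
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(2)
classes = {  # per-class mean RGB and covariance, normally learned from training pixels
    "granulation": (np.array([150, 60, 60]), np.diag([200, 120, 120])),
    "slough":      (np.array([190, 170, 110]), np.diag([150, 150, 150])),
    "necrotic":    (np.array([60, 45, 40]),  np.diag([100, 100, 100])),
}

pixels = rng.uniform(0, 255, size=(1000, 3))          # pixels to classify (placeholder image data)
log_lik = np.column_stack([
    multivariate_normal(mean=m, cov=c).logpdf(pixels) for m, c in classes.values()
])
labels = np.array(list(classes))[np.argmax(log_lik, axis=1)]
print({name: int((labels == name).sum()) for name in classes})
```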
APA, Harvard, Vancouver, ISO, and other styles
9

KING-NYGREN, ELIAS. "Analysis of Complex 3D-Concrete Casting." Thesis, KTH, Skolan för industriell teknik och management (ITM), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-299789.

Full text
Abstract:
Concrete is the second most used material in the world and is primarily used within the construction industry. It is, however, also used for making decorative and functional smaller products within various industries. Manufacturing with concrete can be done with different manufacturing techniques, the most common being concrete casting in molds. This project was conducted at Arclight AB in Stockholm, a company on the verge of starting production of molds for casting concrete products. With many different manufacturing techniques at their disposal, it is however difficult for them to know which manufacturing technique should be used for which type of mold. The goal of this project is to compare the available manufacturing techniques at Arclight and see which are most suitable for mold manufacturing. The background research and preparation resulted in three segments of the casting process which needed to be analyzed: choice of concrete, choice of post-processing technique, and choice of manufacturing technique. Results from the trials of these three segments gave invaluable information for the project. Concrete trials resulted in a recommendation of a concrete with high compression strength and high water content so that the concrete flows easily into the mold. Post-processing trials resulted in different optimal post-processing techniques based on the mold material and manufacturing technique. Manufacturing trials gave in-depth information on processing larger molds and the potential problems associated with casting complex large concrete products. The final result of the project is a spreadsheet which recommends an optimal manufacturing technique based on the geometry type and number of products to be cast. Maximum cost per product, maximum machine time for manufacturing and maximum total production time for the concrete products are also stated to find the optimal manufacturing technique for each specific concrete casting project. Before using this spreadsheet as a basis for manufacturing, it should be formatted for easier use. Additional tests applying epoxy and polyurethane resin for post-processing molds should be conducted, in addition to testing materials for the manufacture of master molds for vacuum forming.
APA, Harvard, Vancouver, ISO, and other styles
10

Hu, Guosheng. "Face analysis using 3D morphable models." Thesis, University of Surrey, 2015. http://epubs.surrey.ac.uk/808011/.

Full text
Abstract:
Face analysis aims to extract valuable information from facial images. One effective approach to face analysis is analysis by synthesis. Accordingly, a new face image is synthesised by inferring semantic knowledge from input images. To perform analysis by synthesis, a generative model, which parameterises the sources of facial variations, is needed. A 3D Morphable Model (3DMM) is commonly used for this purpose. 3DMMs have been widely used for face analysis because the intrinsic properties of 3D faces provide an ideal representation that is immune to intra-personal variations such as pose and illumination. Given a single facial input image, a 3DMM can recover the 3D face (shape and texture) and scene properties (pose and illumination) via a fitting process. However, fitting the model to the input image remains a challenging problem. One contribution of this thesis is a novel fitting method: Efficient Stepwise Optimisation (ESO). ESO optimises sequentially all the parameters (pose, shape, light direction, light strength and texture parameters) in separate steps. A perspective camera and Phong reflectance model are used to model the geometric projection and illumination respectively. Linear methods that are adapted to the camera and illumination models are proposed. This generates closed-form solutions for these parameters, leading to an accurate and efficient fitting. Another contribution is an albedo based 3D morphable model (AB3DMM). One difficulty of 3DMM fitting is to recover the illumination of the 2D image because the proportion of the albedo and shading contributions in a pixel intensity is ambiguous. Unlike traditional methods, the AB3DMM removes the illumination component from the input image using illumination normalisation methods in a preprocessing step. This image can then be used as input to the AB3DMM fitting, which does not need to handle the lighting parameters. Thus, the fitting of the AB3DMM becomes easier and more accurate. Based on the AB3DMM and ESO, this study proposes a fully automatic face recognition (AFR) system. Unlike the existing 3DMM methods which assume the facial landmarks are known, our AFR automatically detects the landmarks that are used to initialise our fitting algorithms. Our AFR supports two types of feature extraction: holistic and local features. Experimental results show our AFR outperforms state-of-the-art face recognition methods.
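The Phong reflectance model named in the abstract can be sketched as follows; the coefficients are illustrative assumptions, whereas the thesis estimates light direction and strength as part of the ESO fitting.

```python
# Phong reflectance of the kind used as the illumination model in 3DMM fitting
# (illustrative constants; the thesis estimates lighting during fitting).
import numpy as np

def phong(normal, light_dir, view_dir, albedo,
          ambient=0.2, diffuse=0.6, specular=0.4, shininess=20.0):
    n = normal / np.linalg.norm(normal)
    l = light_dir / np.linalg.norm(light_dir)
    v = view_dir / np.linalg.norm(view_dir)
    r = 2.0 * np.dot(n, l) * n - l                       # reflection of the light direction
    diff = max(np.dot(n, l), 0.0)
    spec = max(np.dot(r, v), 0.0) ** shininess if diff > 0 else 0.0
    return albedo * (ambient + diffuse * diff) + specular * spec

intensity = phong(normal=np.array([0.0, 0.0, 1.0]),
                  light_dir=np.array([0.3, 0.3, 1.0]),
                  view_dir=np.array([0.0, 0.0, 1.0]),
                  albedo=0.8)
print(f"shaded intensity: {intensity:.3f}")
```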
APA, Harvard, Vancouver, ISO, and other styles
11

Kern, Simon. "Sensitivity Analysis in 3D Turbine CFD." Thesis, KTH, Mekanik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-210821.

Full text
Abstract:
A better understanding of turbine performance and its sensitivity to variations in the inlet boundary conditions is crucial in the quest of further improving the efficiency of aero engines. Within the research efforts to reach this goal, a high-pressure turbine test rig has been designed by Rolls-Royce Deutschland in cooperation with the Deutsches Zentrum für Luft- und Raumfahrt (DLR), the German Aerospace Center. The scope of the test rig is high-precision measurement of aerodynamic efficiency including the effects of film cooling and secondary air flows as well as the improvement of numerical prediction tools, especially 3D Computational Fluid Dynamics (CFD). A sensitivity analysis of the test rig based on detailed 3D CFD computations was carried out with the aim to quantify the influence of inlet boundary condition variations occurring in the test rig on the outlet capacity of the first stage nozzle guide vane (NGV) and the turbine efficiency. The analysis considered variations of the cooling and rimseal leakage mass flow rates as well as fluctuations in the inlet distributions of total temperature and pressure. The influence of an increased rotor tip clearance was also studied. This thesis covers the creation, calibration and validation of the steady state 3D CFD model of the full turbine domain. All relevant geometrical details of the blades, walls and the rimseal cavities are included with the exception of the film cooling holes, which are replaced by a volume source term based cooling strip model to reduce the computational cost of the analysis. The high-fidelity CFD computation is run only on a sample of parameter combinations spread over the entire input parameter space determined using the optimal latin hypercube technique. The subsequent sensitivity analysis is based on a Kriging response surface model fit to the sample data. The results are discussed with regard to the planned experimental campaign on the test rig and general conclusions concerning the impacts of the studied parameters on turbine performance are deduced.
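The sampling-plus-surrogate pattern described here (an optimal Latin hypercube design followed by a Kriging response surface) can be sketched on a toy function; the two-input "cfd_model" below is an assumed stand-in for the expensive 3D CFD runs, not the thesis's turbine model.

```python
# Latin hypercube design over the input space, an expensive model evaluated at the
# samples, and a Kriging (Gaussian process) response surface for sensitivity studies.
import numpy as np
from scipy.stats import qmc
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def cfd_model(x):
    """Placeholder for capacity/efficiency as a function of two normalised inputs."""
    return np.sin(3 * x[:, 0]) + 0.5 * x[:, 1] ** 2

sampler = qmc.LatinHypercube(d=2, seed=0)
X = sampler.random(n=30)                       # 30 design points in [0, 1]^2
y = cfd_model(X)

gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(length_scale=0.3),
                              normalize_y=True).fit(X, y)

# Crude sensitivity check: vary one input at a time across its range.
grid = np.linspace(0, 1, 50)
for i in range(2):
    probe = np.full((50, 2), 0.5)
    probe[:, i] = grid
    pred = gp.predict(probe)
    print(f"input {i}: response range {pred.max() - pred.min():.3f}")
```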
APA, Harvard, Vancouver, ISO, and other styles
12

Trapp, Matthias. "Analysis and exploration of virtual 3D city models using 3D information lenses." Master's thesis, Universität Potsdam, 2007. http://opus.kobv.de/ubp/volltexte/2008/1393/.

Full text
Abstract:
This thesis addresses real-time rendering techniques for 3D information lenses based on the focus & context metaphor. It analyzes, conceives, implements, and reviews their applicability to objects and structures of virtual 3D city models. In contrast to digital terrain models, the application of focus & context visualization to virtual 3D city models is barely researched. However, the purposeful visualization of contextual data is of extreme importance for the interactive exploration and analysis of this field. Programmable hardware enables the implementation of new lens techniques that allow the augmentation of the perceptive and cognitive quality of the visualization compared to classical perspective projections. A set of 3D information lenses is integrated into a 3D scene-graph system: • Occlusion lenses modify the appearance of virtual 3D city model objects to resolve their occlusion and consequently facilitate the navigation. • Best-view lenses display city model objects in a priority-based manner and mediate their meta information. Thus, they support exploration and navigation of virtual 3D city models. • Color and deformation lenses modify the appearance and geometry of 3D city models to facilitate their perception. The presented techniques for 3D information lenses and their application to virtual 3D city models clarify their potential for interactive visualization and form a base for further development.
APA, Harvard, Vancouver, ISO, and other styles
13

Coban, Sophia. "Practical approaches to reconstruction and analysis for 3D and dynamic 3D computed tomography." Thesis, University of Manchester, 2017. https://www.research.manchester.ac.uk/portal/en/theses/practical-approaches-to-reconstruction-and-analysis-for-3d-and-dynamic-3d-computed-tomography(f34a2617-09f9-4c4e-9669-f86f6cf2bce5).html.

Full text
Abstract:
The problem of reconstructing an image from a set of tomographic data is not new, nor is it lacking attention. However, there is still a distinct gap between the mathematicians and the experimental scientists working in the computed tomography (CT) imaging community. One of the aims of this thesis is to bridge this gap with mathematical reconstruction algorithms and analysis approaches applied to practical CT problems. The thesis begins with an extensive analysis for assessing the suitability of reconstruction algorithms for a given problem. The paper presented examines the idea of extracting physical information from a reconstructed sample and comparing it against the known sample characteristics to determine the accuracy of a reconstructed volume. Various test cases are studied, which are relevant to both mathematicians and experimental scientists. These include the variance in quality of the reconstructed volume as the dose is reduced, or the implementation of the level set evolution method, used as part of a simultaneous reconstruction and segmentation technique. The work shows that the assessment of physical attributes results in more accurate conclusions. Furthermore, this approach allows for further analysis into interesting questions in CT. This theme is continued throughout the thesis. Recent results in compressive sensing (CS) gained attention in the CT community as they indicate the possibility of obtaining an accurate reconstruction of a sparse image from a severely limited or reduced amount of measured data. Literature produced so far has not shown that CS directly guarantees a successful recovery in X-ray CT, and it is still unclear under which conditions a successful sparsity regularized reconstruction can be achieved. The work presented in the thesis aims to answer this question in a practical setting, and seeks to establish a direct connection between the success of sparsity regularization methods and the sparsity level of the image, which is similar to CS. Using this connection, one can determine the sufficient amount of measurements to collect from just the sparsity of an image. A link was found in a previous study using simulated data, and the work is repeated here with experimental data, where the sparsity level of the scanned object varies. The preliminary work presented here verifies the results from simulated data, showing an "almost-linear" relationship between the sparsity of the image and the sufficient amount of data for a successful sparsity regularized reconstruction. Several unexplained artefacts are noted in the literature as the 'partial volume', the 'exponential edge gradient' or the 'penumbra' effect, with no clear explanation for their cause, or established techniques to remove them. The work presented in this paper shows that these artefacts are due to a non-linearity in the measured data, which comes from either the setup of the system, the scattering of rays or the dependency of linear attenuation on wavelength in the polychromatic case. However, even in monochromatic CT systems, the non-linearity effect can be detected. The paper shows that in some cases, the non-linearity effect is too large to ignore, and the reconstruction problem should be adapted to solve a non-linear problem. We derive this non-linear problem and solve it using a numerical optimization technique for both simulated and real, gamma-ray data.
When compared to reconstructions obtained using the standard linear model, the non-linear reconstructed images show clear improvements in that the non-linear effect is largely eliminated. The thesis finishes with a highlight article in the special issue of Solid Earth named "Pore-scale tomography & imaging - applications, techniques and recommended practice". The paper presents a major technical advancement in dynamic 3D CT data acquisition, where the latest hardware and an optimal data acquisition plan are applied and, as a result, ultra-fast 3D volume acquisition was made possible. The experiment consisted of fast, free-falling water-saline drops travelling through a pack of rock grains with varying porosities. The imaging work was enhanced by the use of iterative methods and the physical quantification analysis performed. The data acquisition and imaging work is the first in the field to capture a free-falling drop, and the imaging work clearly shows the fluid interaction with speed, gravity and, more importantly, the inter- and intra-grain fluid transfers.
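The sparsity-regularised reconstruction discussed in the abstract can be illustrated on a toy system: an l1-regularised least-squares problem solved with ISTA. The random matrix stands in for a CT system matrix; this is a sketch of the general idea, not the thesis's geometry or solver.

```python
# Toy sparsity-regularised reconstruction: min_x 0.5*||Ax - b||^2 + lam*||x||_1,
# solved with ISTA (gradient step + soft-thresholding).
import numpy as np

rng = np.random.default_rng(3)
n, m, k = 200, 80, 10                    # image size, number of measurements, sparsity
A = rng.normal(size=(m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
b = A @ x_true + 0.01 * rng.normal(size=m)

lam = 0.01
step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant of the data-term gradient
x = np.zeros(n)
for _ in range(500):
    x = x - step * A.T @ (A @ x - b)                        # gradient step on the data term
    x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0)  # soft-thresholding (l1 proximal step)

print(f"relative error: {np.linalg.norm(x - x_true) / np.linalg.norm(x_true):.3f}")
```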
APA, Harvard, Vancouver, ISO, and other styles
14

Petrov, Anton Igorevich. "RNA 3D Motifs: Identification, Clustering, and Analysis." Bowling Green State University / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=bgsu1333929629.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Kucuk, Can. "3d Marker Tracking For Human Gait Analysis." Master's thesis, METU, 2005. http://etd.lib.metu.edu.tr/upload/2/12606941/index.pdf.

Full text
Abstract:
This thesis focuses on 3D marker tracking for human gait analysis. In the KISS Gait Analysis System at METU, a subject's gait is recorded with 6 cameras while 13 reflective markers are attached at appropriate locations on his/her legs and feet. These images are processed to extract the two-dimensional (2D) coordinates of the markers in each camera. The three-dimensional (3D) coordinates of the markers are obtained by processing the 2D coordinates with linearization and calibration algorithms. The 3D trajectories of the markers are then formed from the 3D coordinates. In this study, software is developed which takes the 2D coordinates of the markers in each camera and processes them to form the 3D trajectories of the markers. A Kalman filter is used in the formation of the 3D trajectories. The results are found to be satisfactory.
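A minimal sketch of Kalman filtering one marker's 3D trajectory under an assumed constant-velocity model; the frame rate, noise levels and synthetic measurements are placeholders, not the KISS system's data.

```python
# Constant-velocity Kalman filter for one marker's 3D trajectory (synthetic positions;
# the thesis filters triangulated marker coordinates from the 6-camera system).
import numpy as np

dt = 1.0 / 50.0                                   # assumed frame interval
F = np.eye(6); F[:3, 3:] = dt * np.eye(3)         # state: [x, y, z, vx, vy, vz]
H = np.hstack([np.eye(3), np.zeros((3, 3))])      # we observe position only
Q = 1e-4 * np.eye(6)                              # process noise
R = 1e-3 * np.eye(3)                              # measurement noise

x, P = np.zeros(6), np.eye(6)
rng = np.random.default_rng(4)
t = np.arange(0, 1, dt)
measurements = np.column_stack([t, 0.1 * np.sin(4 * t), 0.05 * t]) \
               + 0.01 * rng.normal(size=(len(t), 3))

track = []
for z in measurements:
    x, P = F @ x, F @ P @ F.T + Q                 # predict
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)                # Kalman gain
    x = x + K @ (z - H @ x)                       # update with the new 3D observation
    P = (np.eye(6) - K @ H) @ P
    track.append(x[:3].copy())

print("last filtered position:", np.round(track[-1], 3))
```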
APA, Harvard, Vancouver, ISO, and other styles
16

(unal), Kutlu Ozge. "Computational 3d Fracture Analysis In Axisymmetric Media." Master's thesis, METU, 2008. http://etd.lib.metu.edu.tr/upload/12609872/index.pdf.

Full text
Abstract:
In this study, finite element modeling of three-dimensional elliptic and semi-elliptic cracks in a hollow cylinder is considered. The three-dimensional crack and cylinder are modeled by using the finite element analysis program ANSYS. The main objectives of this study are as follows. First, Ansys Parametric Design Language (APDL) codes are developed to facilitate the modeling of different types of cracks in cylinders. Second, by using these codes, the effect of some parameters of the problem, such as crack location, the cylinder's radius to thickness ratio (R/t), the crack geometry ratio (a/c) and the crack minor axis to cylinder thickness ratio (a/t), on stress intensity factors for surface and internal cracks is examined. Mechanical and thermal loading cases are considered. The Displacement Correlation Technique (DCT) is used to obtain the stress intensity factors.
APA, Harvard, Vancouver, ISO, and other styles
17

Nielsen, Paul. "3D CFD-analysis of conceptual bow wings." Thesis, KTH, Marina system, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-31072.

Full text
Abstract:
As a small step towards their long-term vision of one day producing emission-free vessels, Wallenius employed Mårten Silvanius in 2009 to carry out his master's thesis for them, in which he studied five different concepts to reduce the overall fuel consumption using wind-powered systems. The vessel on which his study was performed is the 230 m LCTC vessel M/V Fedora. One of the concepts studied was the bow wing, which is thought to generate enough force in the ship direction to profitably reduce the overall wind resistance. His calculations showed that the wing would be the preferred method of the different concepts studied since it was determined cheapest to build, had good payback, had good global drag-reducing effects and had a predicted performance of a reduction in fuel cost between 3-5% on a worldwide route. This thesis is conducted mainly to verify the results of Silvanius' numerical study. The method chosen is to perform a fully viscous 3-D CFD study on the entire flow around the above-water portion of the ship in full scale. A 3-D model is created and the wing is placed using suggestions given by Silvanius. One major limitation in this project was the computational capacity available at the time the thesis was conducted. In order to run some of the viscous grids created, the grids had to be severely coarsened. This had a negative impact on the reliability of some of the results. Since it has been difficult to obtain satisfactory solutions, no work has been done to optimize the shape and position of the wing. Nevertheless, it has been shown that the wing does in fact affect the resistance in a positive way, although nowhere near as much as predicted by Silvanius. This effect needs to be determined through further calculations, both using CFD and also through experimental wind tunnel testing where alternatives to the wing profile should be tested, e.g. replacing the wing with a vortex generator to further delay the point of separation.
APA, Harvard, Vancouver, ISO, and other styles
18

Gooding, Mark. "3D ultrasound image analysis in assisted reproduction." Thesis, University of Oxford, 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.408695.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Wall, Mostyn Leonard Thomas. "3D seismic analysis of the Silverpit structure." Thesis, Cardiff University, 2008. http://orca.cf.ac.uk/55061/.

Full text
Abstract:
This thesis uses industry 3D seismic reflection datasets to investigate the Silverpit structure, a proposed impact crater located in the southern North Sea, UK. The principal aim of this thesis is to investigate the origin of the Silverpit structure. Research has focused on constraining the age of the Silverpit structure, investigation into regional magmatic activity in the southern North Sea study area and structural analysis of the Silverpit structure. The Silverpit structure is a multi-ringed circular structure 20 km in diameter found within Cretaceous and Eocene age marine sediments. The Silverpit structure is composed of a 3 km diameter excavated cavity surrounded by a series of concentric listric faults. The outer rings of the structure are composed of extensional grabens and concentric folds. Within the excavated cavity, a series of localised uplifted reflections, termed the central uplift, can be identified. A boundary marks a common upper limit of deformation. Undeformed reflections above this boundary have parallel onlapping geometry onto the underlying cavity and are an indication of instantaneous creation of accommodation space. This boundary between undeformed and faulted reflections has been interpreted to be the crater floor and has been dated to be Middle Eocene in age. A Tertiary dyke swarm, 54 Ma old, has been identified and mapped 20 km to the north of the Silverpit structure. The dykes are characterised by a linear seismic disturbance and linear coalesced depressions at the upper limit of the seismic disturbance. The depressions above the dyke tips formed during the release of volatiles from the intruding magma. The dykes and coalesced depressions are new Earth analogues for Martian pit chain craters. No magnetic anomaly can be identified over the Silverpit structure, ruling out an igneous origin. The age of the Silverpit structure is older than the onset of regional folding, therefore ruling out a folding/salt withdrawal origin. The circular Silverpit structure is unrelated to the underlying elongate salt geometry. Any fault growth associated with salt movement would trace the underlying salt body; we would therefore expect any faults related to salt movement to be elongate, not circular. The morphology of the Silverpit structure is characteristic of an impact crater. The features mapped, central uplift, excavated crater, multi rings and folds, are all diagnostic features of an impact crater. Bolide impact is the most likely origin for the Silverpit structure as all the alternative origins can be ruled out. Importantly, further diagnostic evidence is still needed to confirm the Silverpit structure as an impact crater. Until such evidence is found, the Silverpit structure is classified as a probable impact crater.
APA, Harvard, Vancouver, ISO, and other styles
20

Sardouk, Khalil. "Analysis of dimensional control in 3D printing." Thesis, Massachusetts Institute of Technology, 1993. http://hdl.handle.net/1721.1/12146.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Barrero, Bilbao Alejandro. "Enhanced nonlinear analysis of 3D concrete structures." Thesis, Imperial College London, 2016. http://hdl.handle.net/10044/1/45353.

Full text
Abstract:
Although numerical simulation of concrete has a significant background in the framework of simplified one- and two-dimensional elements, a full triaxial description of the structural behaviour of this material is still subject to active research. High-fidelity modelling has only been enabled once the required computational capacity reached an appropriate threshold, and several of the drawbacks the material model has to overcome are precisely of such a computational nature. For concrete, an existing model combining plasticity and isotropic damage is chosen in this work, and this choice over multi-surface plasticity is duly justified. Additionally, an extension to anisotropic damage is proposed. Focus is set on a series of algorithmic enhancements that significantly increase robustness in stress evaluation, in particular from stress states that pathologically associate with a singular Jacobian matrix and stress-returns that lead towards sensitive areas of the failure surface in principal stress space, where plastic flow is undefined. Reinforcing steel is modelled as embedded bars inside the corresponding concrete parent elements, with axial stiffness only. An arbitrary orientation inside the concrete elements is allowed, but otherwise the discretised bars share the parent element morphology, order and degrees of freedom, resulting in a perfect bond interaction. An improved and systematic linearising procedure is presented to track the intersections of each bar segment with its embedding parent element, which can be readily applied to any element type and order. This facilitates an accurate calculation of this constituent's contribution to the parent element's stiffness matrix and nodal force vector. The robustness of the enhanced material model is verified by means of numerical tests, highlighting the convergence ratio, and validation ensues via simulations of established benchmark tests. Finally, some case studies are presented to illustrate the performance of the model at a structural level, with insight into various issues of computational nature.
APA, Harvard, Vancouver, ISO, and other styles
22

KATRAGADDA, SRIRAMAPRASAD. "FINITE ELEMENT ANALYSIS OF 3D CONTACT PROBLEMS." University of Cincinnati / OhioLINK, 2005. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1123812018.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Clement, Stephen J. "Sparse shape modelling for 3D face analysis." Thesis, University of York, 2014. http://etheses.whiterose.ac.uk/8248/.

Full text
Abstract:
This thesis describes a new method for localising anthropometric landmark points on 3D face scans. The points are localised by fitting a sparse shape model to a set of candidate landmarks. The candidates are found using a feature detector that is designed using a data-driven methodology; this approach also informs the choice of landmarks for the shape model. The fitting procedure is developed to be robust to missing landmark data and spurious candidates. The feature detector and landmark choice are determined by the performance of different local surface descriptions on the face. A number of criteria are defined for a good landmark point and a good feature detector. These inform a framework for measuring the performance of various surface descriptions and the choice of parameter values in the surface description generation. Two types of surface description are tested: curvature and spin images. These descriptions represent many aspects of the two most common approaches to local surface description. Using the data-driven design process for surface description and landmark choice, a feature detector is developed using spin images. As spin images are a rich surface description, we are able to perform detection and candidate landmark labelling in a single step. A feature detector is developed based on linear discriminant analysis (LDA). This is compared to a simpler detector used in the landmark and surface description selection process. A sparse shape model is constructed using ground truth landmark data. This sparse shape model contains only the landmark point locations and relative positional variation. To localise landmarks, this model is fitted to the candidate landmarks using a RANSAC-style algorithm and a novel model fitting algorithm. The results of landmark localisation show that the shape model approach is beneficial over template alignment approaches. Even with heavily contaminated candidate data, we are able to achieve good localisation for most landmarks.
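A hedged sketch of the RANSAC-style fitting idea: a sparse set of model landmarks is rigidly aligned to labelled candidate detections, and the hypothesis explaining the most landmarks wins. The toy data and the purely rigid (Kabsch) fit are assumptions made here for illustration; the thesis fits a statistical shape model with positional variation.

```python
# RANSAC-style fit of a sparse landmark model to noisy/spurious candidate detections.
import numpy as np

def kabsch(P, Q):
    """Rigid transform (R, t) minimising ||R @ p + t - q|| over paired points."""
    Pc, Qc = P - P.mean(0), Q - Q.mean(0)
    U, _, Vt = np.linalg.svd(Pc.T @ Qc)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, Q.mean(0) - R @ P.mean(0)

rng = np.random.default_rng(5)
model = rng.normal(size=(10, 3))                              # mean landmark positions
R_true = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], float)  # unknown pose of the face
candidates = [np.vstack([m @ R_true.T + [5, 2, 0] + 0.02 * rng.normal(size=3),
                         rng.uniform(-5, 5, size=(4, 3))])    # 1 true + 4 spurious per landmark
              for m in model]

best_inliers, best_Rt = -1, None
for _ in range(300):
    idx = rng.choice(len(model), 3, replace=False)            # minimal landmark subset
    picks = np.array([candidates[i][rng.integers(len(candidates[i]))] for i in idx])
    R, t = kabsch(model[idx], picks)
    proj = model @ R.T + t
    inliers = sum(np.min(np.linalg.norm(c - p, axis=1)) < 0.1  # a candidate close to the projection?
                  for c, p in zip(candidates, proj))
    if inliers > best_inliers:
        best_inliers, best_Rt = inliers, (R, t)

print("landmarks explained by best fit:", best_inliers, "of", len(model))
```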
APA, Harvard, Vancouver, ISO, and other styles
24

Madrigali, Andrea. "Analysis of Local Search Methods for 3D Data." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2016.

Find full text
Abstract:
This thesis analyses several search methods for 3D data. It gives a general overview of the field of Computer Vision, of the state of the art of acquisition sensors, and of some of the formats used to describe 3D data. It then examines 3D Object Recognition in depth: in addition to describing the entire process of matching Local Features, it focuses on the detection phase of salient points. In particular, a Learned Keypoint detector, based on machine learning techniques, is analysed. The latter is illustrated with the implementation of two neighbour search algorithms: an exhaustive one (K-d tree) and an approximate one (Radial Search). Finally, experimental evaluations are reported in terms of the efficiency and speed of the detector implemented with the different search methods, showing an effective performance improvement without a considerable loss of accuracy when using the approximate search.
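The exhaustive K-d tree search and the approximate fixed-radius ("radial") search compared in the thesis can be sketched with SciPy on a random point cloud; the cloud, query set and radius below are arbitrary choices for illustration.

```python
# Exhaustive nearest-neighbour search with a K-d tree vs. a fixed-radius search
# on a random 3D point cloud.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(6)
cloud = rng.random((50_000, 3))
queries = rng.random((1_000, 3))
tree = cKDTree(cloud)

# Exact 1-NN query.
dist_exact, idx_exact = tree.query(queries, k=1)

# Approximate alternative: only look within a fixed radius.
radius = 0.05
hits = tree.query_ball_point(queries, r=radius)
found = sum(1 for h in hits if h)                 # queries with at least one neighbour in range
print(f"exact mean NN distance: {dist_exact.mean():.4f}")
print(f"radius search found a neighbour for {found}/{len(queries)} queries")
```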
APA, Harvard, Vancouver, ISO, and other styles
25

Müller, Ralph. "3D assessment and analysis of trabecular bone architecture /." Zürich, 1994. http://e-collection.ethbib.ethz.ch/show?type=diss&nr=10815.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Man, Ka Ho. "Fabrication and analysis of 3D colloidal photonic crystals /." View abstract or full-text, 2006. http://library.ust.hk/cgi/db/thesis.pl?PHYS%202006%20MAN.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Feng, Huan. "3D-models of railway track for dynamic analysis." Thesis, KTH, Transportvetenskap, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-52619.

Full text
Abstract:
In recent decades, railway transport infrastructures have been regaining their importance due to their efficiency and environmentally friendly technologies. This has led to increasing train speeds, higher axle loads and more frequent train usage. These improved service provisions have, however, brought new challenges to traditional railway track engineering, especially to track geotechnical dynamics. These challenges have demanded a better understanding of track dynamics. Due to the large cost and the limitations of available load conditions, experimental investigation is not always the best choice for studying the dynamic effects of railway track structures. Comparatively speaking, accurate mathematical modeling and numerical solution of the dynamic interaction of the track structural components reveal distinct advantages for understanding the response behavior of the track structure. The purpose of this thesis is to study the influence of design parameters on the dynamic response of the railway track structure by implementing the Finite Element Method (FEM). Railway track systems of different complexity have been simulated, including: a beam on discrete supports model, a discretely supported track including ballast mass model, and a rail on sleeper on continuum model. The rail and sleeper have been modeled with Euler-Bernoulli beam elements. Springs and dashpots have been used to simulate the railpads and the connection between the sleeper and the ballast ground. Track components have been studied separately and comparisons have been made between the different models. The finite element analysis is divided into three categories: eigenvalue analysis, dynamic analysis and general static analysis. The eigenfrequencies and corresponding vibration modes were extracted from all the models. The main part of the finite element modeling involves the steady-state dynamic analysis, in which receptance functions were obtained and used as the criterion for evaluating the dynamic properties of track components. Dynamic explicit analysis has been used for the simulation of a moving load, and the effect of train speed has been studied. In the static analysis, the displacement of the trackbed has been evaluated and compared to measurements taken in Sweden.
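A minimal "beam on discrete supports" sketch in the spirit of the simplest model above: Euler-Bernoulli beam elements for the rail, a vertical spring at each node standing in for railpad/sleeper support, and an eigenvalue analysis for the first natural frequencies. The parameter values are rough placeholders, not the calibrated track data of the thesis.

```python
# Euler-Bernoulli beam on discrete elastic supports: assemble stiffness and consistent
# mass matrices, add a vertical spring at every node, and extract eigenfrequencies.
import numpy as np
from scipy.linalg import eigh

E, I, rhoA = 210e9, 30.55e-6, 60.0       # rail bending stiffness and mass per metre (approx. UIC60)
k_support = 1.0e8                        # assumed support stiffness per node [N/m]
n_el, L = 20, 0.6                        # 20 elements, one sleeper spacing each

def beam_matrices(EI, rhoA, L):
    k = EI / L**3 * np.array([[12, 6*L, -12, 6*L],
                              [6*L, 4*L**2, -6*L, 2*L**2],
                              [-12, -6*L, 12, -6*L],
                              [6*L, 2*L**2, -6*L, 4*L**2]])
    m = rhoA * L / 420 * np.array([[156, 22*L, 54, -13*L],
                                   [22*L, 4*L**2, 13*L, -3*L**2],
                                   [54, 13*L, 156, -22*L],
                                   [-13*L, -3*L**2, -22*L, 4*L**2]])
    return k, m

ndof = 2 * (n_el + 1)                    # 2 DOFs per node: deflection and rotation
K, M = np.zeros((ndof, ndof)), np.zeros((ndof, ndof))
ke, me = beam_matrices(E * I, rhoA, L)
for e in range(n_el):
    dofs = slice(2 * e, 2 * e + 4)
    K[dofs, dofs] += ke
    M[dofs, dofs] += me
for node in range(n_el + 1):             # vertical spring at every node (discrete support)
    K[2 * node, 2 * node] += k_support

eigvals = eigh(K, M, eigvals_only=True)
freqs = np.sqrt(np.abs(eigvals)) / (2 * np.pi)
print("first natural frequencies [Hz]:", np.round(np.sort(freqs)[:5], 1))
```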
APA, Harvard, Vancouver, ISO, and other styles
28

Wang, Jing. "Reconstruction and Analysis of 3D Individualized Facial Expressions." Thesis, Université d'Ottawa / University of Ottawa, 2015. http://hdl.handle.net/10393/32588.

Full text
Abstract:
This thesis proposes a new way to analyze facial expressions through 3D scanned faces of real-life people. The expression analysis is based on learning the facial motion vectors that are the differences between a neutral face and a face with an expression. There are several expression analyses based on real-life face databases, such as the 2D image-based Cohn-Kanade AU-Coded Facial Expression Database and the Binghamton University 3D Facial Expression Database. To handle large pose variations and increase the general understanding of facial behavior, a 2D image-based expression database is not enough. The Binghamton University 3D Facial Expression Database is mainly used for facial expression recognition, and it is difficult to compare, resolve, and extend problems related to detailed 3D facial expression analysis. Our work aims to find a new and intuitive way of visualizing the detailed point-by-point movements of a 3D face model for a facial expression. In our work, we have created our own 3D facial expression database at a detailed level, in which each expression model has been processed to have the same structure, so that differences between different people can be compared for a given expression. The first step is to obtain identically structured but individually shaped face models. All the head models are recreated by deforming a generic model to adapt to a laser-scanned individualized face shape at both a coarse level and a fine level. We repeat this recreation method on different human subjects to establish a database. The second step is expression cloning. The motion vectors are obtained by subtracting two head models with/without expression. The extracted facial motion vectors are applied onto a different human subject's neutral face. Facial expression cloning is proved to be robust and fast as well as easy to use. The last step is analyzing the facial motion vectors obtained from the second step. First, we transferred several human subjects' expressions onto a single human neutral face. Then the analysis is done to compare different expression pairs in two main regions: whole-face surface analysis and facial muscle analysis. Through our work, where smiling has been chosen for the experiment, we find our approach of analysis through face scanning a good way to visualize how differently people move their facial muscles for the same expression. People smile in a similar manner, moving their mouths and cheeks in similar orientations, but each person shows her/his own unique way of moving. The difference between individual smiles is the difference in the movements they make.
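The expression-cloning step reduces to simple per-vertex arithmetic once all meshes share one vertex structure, as sketched below; the vertex arrays are random placeholders rather than the scanned face models of the thesis.

```python
# Core of the expression-cloning step: motion vectors are the per-vertex difference
# between an expression scan and the same subject's neutral scan, and they are added
# to another subject's neutral mesh (all meshes share one vertex structure).
import numpy as np

rng = np.random.default_rng(7)
n_vertices = 5000
neutral_a = rng.random((n_vertices, 3))                          # subject A, neutral face
smile_a = neutral_a + 0.01 * rng.normal(size=(n_vertices, 3))    # subject A, smiling
neutral_b = rng.random((n_vertices, 3))                          # subject B, neutral face

motion = smile_a - neutral_a                                     # facial motion vectors
smile_b = neutral_b + motion                                     # cloned expression on subject B

print("mean vertex displacement:", float(np.linalg.norm(motion, axis=1).mean()))
```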
APA, Harvard, Vancouver, ISO, and other styles
29

Hedberg, Christer. "Analysis of 3D viewing experience via subjective evaluation methods." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-142041.

Full text
Abstract:
3D video technology has in recent years become immensely popular given the success 3D movies have had at the cinema. More and more people have therefore asked to be able to experience this 3D sensation beyond the cinema screen, in the comfort of their own home in front of the TV. Leading television manufacturers have responded by producing modern television sets capable of rendering 3D video, and several TV channels around the world have started to broadcast programs and movies in 3D. Since it is not yet clear what constitutes the overall visual experience of 3DTV, the purpose of this master's thesis project is to explore, through subjective testing methods, the relevance that certain aspects might have for the 3DTV viewing experience. Another aim is to advance the standardization work on subjective evaluation methods for stereoscopic 3D videos. Key 3D terminology and concepts are presented, as well as subjective evaluation methodologies. Stereoscopic 3D video sequences were produced; a rating-capable video player was customized; 3D viewing-and-voting experiments with test subjects were carried out; different attributes were incorporated into the experiments and evaluated, such as video quality, visual discomfort, sense of presence and viewing distance; and experiment data were collected and analyzed. The experiment results indicated that viewers in general showed an inclination to vote similarly for the different attributes that were examined. This in turn showed that video sequences with the characteristics presented in the experiments, mostly coding-related distortions, could be assessed with subjective evaluation methods focusing solely on one rating scale for general video quality, which would give a good understanding of the 3D video quality experience.
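A minimal sketch of the kind of rating analysis such experiments involve is given below; it is an assumption about the workflow, not the thesis scripts, and uses toy ratings to compute mean opinion scores and the correlation between attributes.

```python
# Hedged sketch (assumed analysis, not the thesis scripts): mean opinion scores per
# attribute and the correlation between attributes, to check whether viewers tend
# to vote similarly for video quality, visual discomfort, presence, etc.
import numpy as np

# rows = test subjects, columns = attributes (e.g. quality, comfort, presence); toy ratings on a 1-5 scale
ratings = np.array([[4, 4, 5],
                    [3, 3, 3],
                    [5, 4, 5],
                    [2, 2, 3],
                    [4, 5, 4]], dtype=float)

mos = ratings.mean(axis=0)                     # mean opinion score per attribute
ci95 = 1.96 * ratings.std(axis=0, ddof=1) / np.sqrt(len(ratings))
corr = np.corrcoef(ratings, rowvar=False)      # Pearson correlation between attributes

print("MOS per attribute:", np.round(mos, 2))
print("95% confidence half-widths:", np.round(ci95, 2))
print("attribute correlation matrix:\n", np.round(corr, 2))
```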
APA, Harvard, Vancouver, ISO, and other styles
30

El, Mallahi Ahmed. "Automated 3D object analysis by digital holographic microscopy." Doctoral thesis, Université Libre de Bruxelles, 2013. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/209489.

Full text
Abstract:
The main objective of this thesis is the development of new processing techniques for digital holograms. The present work is part of the HoloFlow project, which intends to integrate DHM technology for the monitoring of water quality. Different tools for automated analysis of digital holograms have been developed to detect, refocus and classify particles in continuous fluid flows. A detailed study of the refocusing criterion makes it possible to determine its dependencies and to quantify its robustness. An automated detection procedure has been developed to determine automatically the 3D positions of organisms flowing in the experimental volume. Two detection techniques are proposed: a usual method based on a global threshold and a new robust and generic method based on propagation matrices, which considerably increases the number of detected organisms (up to 95%) and the reliability of the detection. To handle the case of aggregates of particles, commonly encountered when working with large concentrations, a new separation procedure based on a complete analysis of the evolution of the focus planes has been proposed. This method allows the separation of aggregates up to an overlapping area of around 80%. These processing tools have been used to classify organisms, where the use of the full interferometric information of the species enables high classifier performance to be reached (higher than 93%).
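Numerical refocusing in digital holographic microscopy is commonly done by propagating the recorded wavefield with the angular spectrum method and scanning a focus criterion over depth. The sketch below shows this generic approach with an assumed wavelength, pixel pitch and placeholder hologram; it does not reproduce the thesis' propagation-matrix detection method.

```python
# Hedged sketch (generic method, not the thesis implementation): numerical refocusing
# of a digital hologram with the angular spectrum propagation method, plus a simple
# focus criterion scanned over depth. Wavelength, pixel pitch and depths are assumed.
import numpy as np

def angular_spectrum_propagate(field, wavelength, pixel_pitch, z):
    """Propagate a complex wavefield by a distance z [m]."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pixel_pitch)
    fy = np.fft.fftfreq(ny, d=pixel_pitch)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2          # evanescent components clipped below
    kz = 2j * np.pi * z * np.sqrt(np.maximum(arg, 0.0))
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(kz))

def focus_metric(field):
    """Example criterion: integrated amplitude, which dips at the focus plane of absorbing objects."""
    return np.abs(field).sum()

rng = np.random.default_rng(0)
hologram = rng.standard_normal((256, 256)) + 0j        # placeholder hologram-plane field
depths = np.linspace(10e-6, 500e-6, 50)
scores = [focus_metric(angular_spectrum_propagate(hologram, 532e-9, 3.45e-6, z)) for z in depths]
print("best focus estimate [m]:", depths[int(np.argmin(scores))])
```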
Doctorate in Engineering Sciences
APA, Harvard, Vancouver, ISO, and other styles
31

Zhao, Xi. "3D face analysis : landmarking, expression recognition and beyond." PhD thesis, Ecole Centrale de Lyon, 2010. http://tel.archives-ouvertes.fr/tel-00599660.

Full text
Abstract:
This Ph.D thesis work is dedicated to automatic facial analysis in 3D, including facial landmarking and facial expression recognition. Indeed, facial expression plays an important role both in verbal and non-verbal communication and in expressing emotions. Thus, automatic facial expression recognition has various purposes and applications and, in particular, is at the heart of "intelligent" human-centered human/computer (robot) interfaces. Meanwhile, automatic landmarking provides prior knowledge on the location of face landmarks, which is required by many face analysis methods, such as face segmentation and the feature extraction used, for instance, for expression recognition. The purpose of this thesis is thus to elaborate 3D landmarking and facial expression recognition approaches, finally proposing an automatic facial activity (facial expression and action unit) recognition solution. In this work, we have proposed a Bayesian Belief Network (BBN) for recognizing facial activities, such as facial expressions and facial action units. A Statistical Facial feAture Model (SFAM) has also been designed to first automatically locate face landmarks, so that a fully automatic facial expression recognition system can be formed by combining the SFAM and the BBN. The key contributions are the following. First, we have proposed to build a morphable partial face model, named SFAM, based on Principal Component Analysis. This model allows learning of both the global variations in face landmark configuration and the local ones in terms of texture and local geometry around each landmark. Various partial face instances can be generated from SFAM by varying the model parameters. Secondly, we have developed a landmarking algorithm based on the minimization of an objective function describing the correlation between model instances and query faces. Thirdly, we have designed a Bayesian Belief Network with a structure describing the causal relationships among subjects, expressions and facial features. Facial expressions or action units are modelled as the states of the expression node and are recognized by identifying the maximum of the beliefs of all states. We have also proposed a novel method for BBN parameter inference using a statistical feature model that can be considered an extension of SFAM. Finally, in order to enrich the information used for 3D face analysis, and particularly 3D facial expression recognition, we have also elaborated a 3D face feature, named SGAND, to characterize the geometric property of a point on a 3D face mesh using its surrounding points. The effectiveness of all these methods has been evaluated on the FRGC, BU3DFE and Bosphorus datasets for facial landmarking, as well as the BU3DFE and Bosphorus datasets for facial activity (expression and action unit) recognition.
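The SFAM idea of a PCA-based morphable model of landmark configurations can be sketched as follows; the training data here are random toy vectors and the code is illustrative only, not the author's model.

```python
# Hedged sketch (illustrative, not the SFAM code): a PCA-based statistical model of
# landmark configurations, from which new partial-face instances can be generated by
# varying the model parameters.
import numpy as np

rng = np.random.default_rng(1)
n_faces, n_landmarks = 50, 15
# toy training set: each row is a flattened (x, y, z) landmark configuration
X = rng.standard_normal((n_faces, n_landmarks * 3))

mean = X.mean(axis=0)
U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
eigvals = S**2 / (n_faces - 1)                 # variance explained by each mode
modes = Vt                                     # principal components (rows)

def instance(params, k=5):
    """Generate a landmark configuration from the first k model parameters."""
    return mean + params @ (np.sqrt(eigvals[:k])[:, None] * modes[:k])

new_face = instance(np.array([1.0, -0.5, 0.3, 0.0, 0.2]))
print(new_face.reshape(n_landmarks, 3)[:3])    # first three generated landmarks
```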
APA, Harvard, Vancouver, ISO, and other styles
32

Li, Tak Sing. "Meshing and substructuring of 3D stress analysis models." Thesis, Queen's University Belfast, 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.241380.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Xu, Cheng. "Enhancement and performance analysis for 3D beamforming systems." Thesis, Brunel University, 2018. http://bura.brunel.ac.uk/handle/2438/16630.

Full text
Abstract:
This thesis concerns research on the 5th generation (5G) communication system, focusing on the improvement of 3D beamforming technology in the antenna arrays used in Full Dimension Multiple-Input Multiple-Output (FD-MIMO) and millimeter-wave (mm-wave) systems. When 3D beamforming is used in a 5G communication system, the beam needs a weighting matrix to direct it to cover the UEs, but some compromises have to be considered. If narrow beams are used to transmit signals, more energy is focused in the desired direction, but the coverage area is restricted to a single User Equipment (UE) or a few UEs. If the BS covers multiple UEs, multiple beams need to be steered towards more groups of UEs, but there is more interference between these beams from their side lobes when they are transmitted at the same time. These open challenges concern the interference between beams when 3D beamforming technology is used. Therefore, a method is needed to decrease the interference generated between beams by directing the side lobes and nulls so as to minimize interference in the 3D beamforming system. Simultaneously, energy needs to be directed towards the desired direction. If it has been decided that one beam should cover a cluster of UEs, there will be a range of received Signal to Interference plus Noise Ratio (SINR) values depending on the location of the UEs relative to the direction of the main beam. If the beam is directed towards a group of UEs, there needs to be a clustering method to cluster the UEs. In order to cover multiple UEs, an improved K-means clustering algorithm based on the cosine distance is used to cluster the UEs into different groups. It can decrease the number of beams when multiple UEs need to be covered by multiple beams at the same time. Moreover, a new method has been developed to calculate the weighting matrix for beamforming. It can adjust the values of the weighting matrix according to the UEs' locations and direct the main beam in a desired direction whilst minimizing its side lobes in other, undesired directions. The minimum side lobe beamforming system then only needs to know the UEs' locations, which can be used to estimate the Channel State Information (CSI) of the UEs. Therefore, the scheme also shows lower complexity when compared to beamforming methods with pre-coding. In order to test the improved K-means clustering algorithm and the new weighting method, which can enhance the performance of a 3D beamforming system, two simulation systems are simulated to show the results: a 3D beamforming LTE system and an mm-wave system.
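A spherical (cosine-distance) K-means of UE direction vectors, in the spirit of the clustering step described above, might look like the following sketch; the UE directions and cluster count are made up for illustration and this is not the thesis' improved algorithm.

```python
# Hedged sketch (generic algorithm, not the thesis code): K-means-style clustering of
# user equipments (UEs) using cosine distance on their direction vectors from the base
# station, so that each cluster can be served by one beam.
import numpy as np

def cosine_kmeans(directions, k, n_iter=50, seed=0):
    """Cluster unit direction vectors by cosine similarity (spherical k-means)."""
    rng = np.random.default_rng(seed)
    X = directions / np.linalg.norm(directions, axis=1, keepdims=True)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(n_iter):
        labels = np.argmax(X @ centers.T, axis=1)        # assign each UE to the most similar center
        for j in range(k):
            members = X[labels == j]
            if len(members):                             # keep the old center if a cluster empties
                c = members.sum(axis=0)
                centers[j] = c / np.linalg.norm(c)
    return labels, centers

rng = np.random.default_rng(3)
ue_directions = rng.standard_normal((40, 3))             # toy UE directions around the BS
labels, beam_centers = cosine_kmeans(ue_directions, k=4)
print("UEs per beam:", np.bincount(labels, minlength=4))
```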
APA, Harvard, Vancouver, ISO, and other styles
34

Albataineh, Nermeen. "SLOPE STABILITY ANALYSIS USING 2D AND 3D METHODS." University of Akron / OhioLINK, 2006. http://rave.ohiolink.edu/etdc/view?acc_num=akron1153719372.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Yu, En. "Social Network Analysis Applied to Ontology 3D Visualization." Miami University / OhioLINK, 2008. http://rave.ohiolink.edu/etdc/view?acc_num=miami1206497854.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Bao, Guanqun. "Road Distress Analysis using 2D and 3D Information." University of Toledo / OhioLINK, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=toledo1289874675.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Steiner, Alexis K. "3D Digitization and Wear Analysis of Sauropod Teeth." Bowling Green State University / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=bgsu1525990888624381.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Petricci, Davide. "Analysis of asphalt surface textures using 3D techniques." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2013. http://amslaurea.unibo.it/5409/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Ramírez, Jiménez Guillermo. "Electric sustainability analysis for concrete 3D printing machine." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-258928.

Full text
Abstract:
Nowadays, manufacturing technologies are becoming more and more aware of efficiency and sustainability. One of them is so-called 3D printing. While 3D printing is often linked to plastic, many other materials are being tested which could offer several improvements over plastics. One of these options is stone or concrete, which is more suitable for the architecture and artistic fields. However, due to its nature, this new technology involves the use of new techniques when compared to the more commonly used 3D printers. This implies that it could be interesting to know how energy efficient these techniques are and how they can be improved in future revisions. This thesis is an attempt to investigate and analyze the different devices that make up one of these printers and, with this information, build a model that accurately describes its behavior. For this purpose, the power is measured at many points and is later analyzed and fitted to a predefined function. After the fitting has been done, an error is calculated to show how accurate the model is when compared to the original data. It was found that many of these devices produce power spikes due to their nonlinear behavior. This behavior is usually related to switching and can be avoided with different devices. Finally, some advice focused on future research and revisions is given, which could be helpful for safety, efficiency and quality.
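The fitting step described above can be sketched with SciPy's curve_fit; the exponential-plus-idle model, the parameter values and the synthetic measurement below are assumptions for illustration, not the thesis' actual model or data.

```python
# Hedged sketch (assumed workflow, not the thesis scripts): fit measured power draw of a
# printer subsystem to a predefined function and report the fitting error.
import numpy as np
from scipy.optimize import curve_fit

def power_model(t, p_idle, p_peak, tau):
    """Example predefined function: idle level plus an exponentially decaying start-up spike."""
    return p_idle + p_peak * np.exp(-t / tau)

t = np.linspace(0, 10, 200)                                                 # seconds
rng = np.random.default_rng(4)
measured = power_model(t, 35.0, 120.0, 1.5) + rng.normal(0, 2.0, t.size)    # synthetic measurement

params, _ = curve_fit(power_model, t, measured, p0=[30.0, 100.0, 1.0])
fitted = power_model(t, *params)
rmse = np.sqrt(np.mean((measured - fitted) ** 2))
print("fitted parameters:", np.round(params, 2), "RMSE [W]:", round(rmse, 2))
```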
APA, Harvard, Vancouver, ISO, and other styles
40

FAROKHI, NEJAD ALI. "MultiBody Dynamic Analysis of a 3D Synchronizer Model." Doctoral thesis, Politecnico di Torino, 2019. http://hdl.handle.net/11583/2730618.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Xu, Fenglian. "Analysing 3D images stacks and extracting curvilinear features." Thesis, University of Southampton, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.262753.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Ekberg, Fredrik. "An approach for representing complex 3D objects in GIS applied to 3D properties." Thesis, University of Gävle, Department of Technology and Built Environment, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:hig:diva-139.

Full text
Abstract:

The main problem addressed in this thesis is how to represent complex three-dimensional objects in GIS in order to render a more realistic representation of the real world. The goal is to present an approach for representing complex 3D objects in GIS. This is achieved by using a commercial GIS (ArcGIS) applied to 3D properties. In order to get a clear overview of the state of the art of 3D GIS and the current 3D cadastral situation, a literature study was carried out. Based on this overview it can be concluded that 3D GIS is still in its initial phase. Current 3D GIS developments are mainly in the area of visualisation and animation, with almost nothing in the area of spatial analysis and attribute handling. Furthermore, the literature study reveals that no complete solution has been introduced that solves the problems involved in 3D cadastral registration. In several countries (e.g. Sweden, Denmark, Norway, the Netherlands, Israel, and Australia) 3D properties exist in a juridical framework, but technical issues such as how to represent, store, and visualize 3D properties have not yet been solved. Some countries (Sweden, Norway, and Australia) visualize the footprints of 3D property units in a base map. This approach partly solves some technical issues, but can only represent 3D objects in a 2.5D environment. Therefore, research into how to represent complex objects in GIS as 'true' 3D objects is greatly needed.

This thesis emphasizes MultiPatch as a geographic representation method for complex 3D objects in GIS. A case study demonstrates that complex objects can be visualized and analysed in a commercial GIS, in this case ArcGIS. Most commercial GIS software available on the market applies a 2.5D approach to represent 3D objects. The 2.5D approach has limitations for representing complex objects, and there is therefore a need to find new approaches to represent complex objects within GIS. The result shows that MultiPatch is not an answer to all the problems within 3D GIS, but a solution to some of them. Much research is still required in the field of 3D GIS, especially in the development of spatial analysis capabilities.
APA, Harvard, Vancouver, ISO, and other styles
43

Dahlin, Johan. "3D Modeling of Indoor Environments." Thesis, Linköpings universitet, Reglerteknik, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-93999.

Full text
Abstract:
With the aid of modern sensors it is possible to create models of buildings. These sensors typically generate 3D point clouds, and in order to increase interpretability and usability, these point clouds are often translated into 3D models. In this thesis a way of translating a 3D point cloud into a 3D model is presented. The basic functionality is implemented using Matlab. The geometric model consists of floors, walls and ceilings. In addition, doors and windows are automatically identified and integrated into the model. The resulting model also has an explicit representation of the topology between entities of the model. The topology is represented as a graph, and to do this GraphML is used. The graph is opened in a graph editing program called yEd. The result is a 3D model that can be plotted in Matlab and a graph describing the connectivity between entities. The GraphML file is automatically generated in Matlab. An interface between Matlab and yEd allows the user to choose which rooms should be plotted.
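The topology graph and GraphML export could be reproduced outside Matlab, for example with the networkx package, as in the following illustrative sketch (room names and attributes are made up, not taken from the thesis).

```python
# Hedged sketch (illustrative, not the thesis Matlab code): represent the topology between
# model entities (rooms, doors) as a graph and export it to GraphML, which can then be
# opened in an editor such as yEd.
import networkx as nx

G = nx.Graph()
G.add_nodes_from([("room_1", {"kind": "room"}),
                  ("room_2", {"kind": "room"}),
                  ("corridor", {"kind": "room"})])
# edges encode connectivity, e.g. a shared door between two entities
G.add_edge("room_1", "corridor", via="door_a")
G.add_edge("room_2", "corridor", via="door_b")

nx.write_graphml(G, "indoor_topology.graphml")
print(nx.is_connected(G), list(G.neighbors("corridor")))
```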
APA, Harvard, Vancouver, ISO, and other styles
44

Xu, Zhifei. "Tensorial analysis of multilayer printed circuit boards : computations and basics for multiphysics analysis." Thesis, Normandie, 2019. http://www.theses.fr/2019NORMR003.

Full text
Abstract:
Modern electronic printed circuit boards (PCBs) require challenging signal integrity (SI), power integrity (PI) and electromagnetic compatibility (EMC) analyses. Conventional computational methods for PCB analysis do not allow most problems to be posed and analysed theoretically. However, the tensorial analysis of networks (TAN), based on Kron's method completed by Branin's method, promises the possibility of complex PCB analyses. The TAN formalism applied to the mesh space allows compact modeling and a direct Lagrangian expression of the PCB. This thesis introduces TAN approaches for multilayer PCB SI, PI, EMC and multiphysics analyses. After the state-of-the-art description, the basic TAN modelling methodology, by way of a tensorial metric formulation applied to PCB analysis in the frequency domain, is developed. After the definition of the primitive elements necessary to investigate the PCB structure and the introduction of the KB method, the TAN model is validated from DC to some gigahertz against commercial-tool 3D EM full-wave simulations and experimental measurements, complemented by sensitivity analyses. Then, the multilayer PCB TAN is originally translated into an innovative direct time-domain (TD) model by defining the appropriate TD operators for the primitive elements. The efficiency of the TD TAN model is verified through comparisons with multilayer PCB 3D simulations and measurements, considering multigigabit-per-second high-speed signals. In the next part, original multilayer PCB radiated EMC TAN models are investigated via EM field coupling onto the PCBs. The radiated EMC model is validated with a scenario consisting of a "Z"-shape multilayer PCB aggressed by radiated EM plane waves in different propagation directions, and with radiated coupling between a multilayer PCB and an "I"-shape line microstrip PCB. Then, a completely original multiphysics TAN of a multilayer PCB under thermal cycle aggression is developed, dealing with electrothermomechanical phenomena. After formulating the monophysics subsystem TAN expressions, the multiphysics metric of the multilayer PCB under thermal cycle aggression is elaborated. The feasibility of the TAN multiphysics analysis is verified with a four-layer proof-of-concept. The last part of this thesis is devoted to conducted EMC TAN of a PCB system comprised of multilayer interconnects, passive components and active integrated circuit (IC) elements. It is shown that the TAN approach enables the analytical, numerical, IC-EMC and IBIS standard models to be hybridized in order to perform a relevant multilayer PCB EMC analysis. This system-level model allows the EMC noise induced by IC perturbation currents to be computed with an innovative transfer impedance matrix in both the frequency and time domains.
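At its core, a mesh-space (Kron-style) description reduces a network to a mesh impedance metric Z and a source covector E, with the mesh currents J obtained from Z J = E. The sketch below is a textbook two-mesh example with assumed component values, not one of the thesis' TAN models.

```python
# Hedged sketch (textbook mesh analysis, not the thesis TAN models): in the mesh space a
# network is described by a mesh impedance metric Z and a source covector E, and the mesh
# currents J follow from Z J = E. Values below are illustrative.
import numpy as np

f = 1e6                                   # analysis frequency [Hz]
w = 2 * np.pi * f
R1, R2, L1, C1 = 50.0, 75.0, 1e-6, 1e-9   # assumed branch elements

# two coupled meshes sharing the capacitor branch
Zc = 1 / (1j * w * C1)
Z = np.array([[R1 + 1j * w * L1 + Zc, -Zc],
              [-Zc,                   R2 + Zc]])
E = np.array([1.0, 0.0])                  # 1 V source driving mesh 1

J = np.linalg.solve(Z, E)                 # mesh currents
print("mesh currents [A]:", np.round(J, 6))
print("voltage across shared branch [V]:", np.round((J[0] - J[1]) * Zc, 6))
```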
APA, Harvard, Vancouver, ISO, and other styles
45

Tersi, Luca <1981>. "Methodological improvement of 3D fluoroscopic analysis for the robust quantification of 3D kinematics of human joints." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2011. http://amsdottorato.unibo.it/3607/1/tersi_luca_tesi.pdf.

Full text
Abstract:
3D video-fluoroscopy is an accurate but cumbersome technique to estimate natural or prosthetic human joint kinematics. This dissertation proposes innovative methodologies to improve the reliability and usability of 3D fluoroscopic analysis. Being based on direct radiographic imaging of the joint, and avoiding the soft tissue artefact that limits the accuracy of skin-marker-based techniques, the fluoroscopic analysis has a potential accuracy of the order of mm/deg or better. It can provide fundamental information for clinical and methodological applications, but, notwithstanding the number of methodological protocols proposed in the literature, time-consuming user interaction is required to obtain consistent results. This user-dependency has prevented a reliable quantification of the actual accuracy and precision of the methods and, consequently, slowed down the translation to clinical practice. The objective of the present work was to speed up this process by introducing methodological improvements in the analysis. In the thesis, the fluoroscopic analysis was characterized in depth, in order to evaluate its pros and cons and to provide reliable solutions to overcome its limitations. To this aim, an analytical approach was followed. The major sources of error were isolated with in-silico preliminary studies as: (a) geometric distortion and calibration errors, (b) 2D image and 3D model resolutions, (c) incorrect contour extraction, (d) bone model symmetries, (e) optimization algorithm limitations, (f) user errors. The effect of each criticality was quantified and verified with an in-vivo preliminary study on the elbow joint. The dominant source of error was identified in the limited extent of the convergence domain of the local optimization algorithms, which forced the user to manually specify the starting pose for the estimation process. To solve this problem, two different approaches were followed: to increase the optimal pose convergence basin, the local approach used sequential alignments of the 6 degrees of freedom in order of sensitivity, or a geometrical feature-based estimation of the initial conditions for the optimization; the global approach used an unsupervised memetic algorithm to optimally explore the search domain. The performance of the technique was evaluated with a series of in-silico studies and validated in-vitro with a phantom-based comparison against a radiostereometric gold standard. The accuracy of the method is joint-dependent; for the intact knee joint, the new unsupervised algorithm guaranteed a maximum error lower than 0.5 mm for in-plane translations, 10 mm for out-of-plane translation, and 3 deg for rotations in a mono-planar setup, and lower than 0.5 mm for translations and 1 deg for rotations in a bi-planar setup. The bi-planar setup is best suited when accurate results are needed, such as for methodological research studies. The mono-planar analysis may be enough for clinical applications where analysis time and cost may be an issue. A further reduction of the user interaction was obtained for prosthetic joint kinematics. A mixed region-growing and level-set segmentation method was proposed and halved the analysis time, delegating the computational burden to the machine. In-silico and in-vivo studies demonstrated that the reliability of the new semiautomatic method was comparable to a user-defined manual gold standard. The improved fluoroscopic analysis was finally applied to a first in-vivo methodological study on foot kinematics. Preliminary evaluations showed that the presented methodology represents a feasible gold standard for the validation of skin-marker-based foot kinematics protocols.
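The 2D/3D registration at the heart of fluoroscopic analysis can be sketched as a 6-degree-of-freedom pose optimization that minimizes the reprojection error of model points; the code below is a generic illustration with toy data, not the thesis' sequential or memetic algorithms.

```python
# Hedged sketch (generic 2D/3D registration idea, not the thesis algorithm): estimate the
# 6-DOF pose of a bone model from a fluoroscopic projection by minimizing the distance
# between projected model points and their observed 2D positions.
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.transform import Rotation

focal = 1000.0                                     # assumed source-to-detector scale [px]

def project(points, pose):
    """pose = (tx, ty, tz, rx, ry, rz) with rotations in degrees."""
    R = Rotation.from_euler("xyz", pose[3:], degrees=True).as_matrix()
    p = points @ R.T + pose[:3]
    return focal * p[:, :2] / p[:, 2:3]            # pinhole projection onto the image plane

rng = np.random.default_rng(5)
model = rng.uniform(-0.05, 0.05, (30, 3))          # toy bone surface points [m]
true_pose = np.array([0.01, -0.02, 1.0, 5.0, -3.0, 10.0])
observed = project(model, true_pose) + rng.normal(0, 0.2, (30, 2))

cost = lambda pose: np.sum((project(model, pose) - observed) ** 2)
start = np.array([0.0, 0.0, 0.9, 0.0, 0.0, 0.0])   # starting pose (normally user supplied)
res = minimize(cost, start, method="Nelder-Mead", options={"maxiter": 5000})
print("estimated pose:", np.round(res.x, 3))
```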
APA, Harvard, Vancouver, ISO, and other styles
46

Tersi, Luca <1981>. "Methodological improvement of 3D fluoroscopic analysis for the robust quantification of 3D kinematics of human joints." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2011. http://amsdottorato.unibo.it/3607/.

Full text
Abstract:
3D video-fluoroscopy is an accurate but cumbersome technique to estimate natural or prosthetic human joint kinematics. This dissertation proposes innovative methodologies to improve the reliability and usability of 3D fluoroscopic analysis. Being based on direct radiographic imaging of the joint, and avoiding the soft tissue artefact that limits the accuracy of skin-marker-based techniques, the fluoroscopic analysis has a potential accuracy of the order of mm/deg or better. It can provide fundamental information for clinical and methodological applications, but, notwithstanding the number of methodological protocols proposed in the literature, time-consuming user interaction is required to obtain consistent results. This user-dependency has prevented a reliable quantification of the actual accuracy and precision of the methods and, consequently, slowed down the translation to clinical practice. The objective of the present work was to speed up this process by introducing methodological improvements in the analysis. In the thesis, the fluoroscopic analysis was characterized in depth, in order to evaluate its pros and cons and to provide reliable solutions to overcome its limitations. To this aim, an analytical approach was followed. The major sources of error were isolated with in-silico preliminary studies as: (a) geometric distortion and calibration errors, (b) 2D image and 3D model resolutions, (c) incorrect contour extraction, (d) bone model symmetries, (e) optimization algorithm limitations, (f) user errors. The effect of each criticality was quantified and verified with an in-vivo preliminary study on the elbow joint. The dominant source of error was identified in the limited extent of the convergence domain of the local optimization algorithms, which forced the user to manually specify the starting pose for the estimation process. To solve this problem, two different approaches were followed: to increase the optimal pose convergence basin, the local approach used sequential alignments of the 6 degrees of freedom in order of sensitivity, or a geometrical feature-based estimation of the initial conditions for the optimization; the global approach used an unsupervised memetic algorithm to optimally explore the search domain. The performance of the technique was evaluated with a series of in-silico studies and validated in-vitro with a phantom-based comparison against a radiostereometric gold standard. The accuracy of the method is joint-dependent; for the intact knee joint, the new unsupervised algorithm guaranteed a maximum error lower than 0.5 mm for in-plane translations, 10 mm for out-of-plane translation, and 3 deg for rotations in a mono-planar setup, and lower than 0.5 mm for translations and 1 deg for rotations in a bi-planar setup. The bi-planar setup is best suited when accurate results are needed, such as for methodological research studies. The mono-planar analysis may be enough for clinical applications where analysis time and cost may be an issue. A further reduction of the user interaction was obtained for prosthetic joint kinematics. A mixed region-growing and level-set segmentation method was proposed and halved the analysis time, delegating the computational burden to the machine. In-silico and in-vivo studies demonstrated that the reliability of the new semiautomatic method was comparable to a user-defined manual gold standard. The improved fluoroscopic analysis was finally applied to a first in-vivo methodological study on foot kinematics. Preliminary evaluations showed that the presented methodology represents a feasible gold standard for the validation of skin-marker-based foot kinematics protocols.
APA, Harvard, Vancouver, ISO, and other styles
47

Pinto, Sílvia Cristina Dias. "Análise de formas 3D usando wavelets 1D, 2D e 3D." Universidade de São Paulo, 2005. http://www.teses.usp.br/teses/disponiveis/76/76132/tde-02052007-085441/.

Full text
Abstract:
This work presents new methods for three-dimensional shape analysis in the context of computer vision, with emphasis on the use of 1D, 2D and 3D wavelet transforms, which provide a multiscale analysis of the studied shapes. The analyzed shapes are divided into three different types depending on their representation: f(t)=(x(t),y(t),z(t)), f(x,y)=z and f(x,y,z)=w. Each type of shape is analyzed by the method best suited to it. Firstly, the shapes undergo a pre-processing procedure, followed by characterization using the 1D, 2D or 3D wavelet transform, depending on the representation. This allows the extraction of features that are rotation- and translation-invariant, based on some mathematical concepts of differential geometry. In this work, we emphasize that it is not necessary to use the parameterized version of the 2D and 3D shapes. The experimental results obtained from shapes extracted from medical and biological images, which corroborate the introduced methods, are presented.
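A 1D wavelet characterization of a space curve f(t)=(x(t),y(t),z(t)) can be sketched with the PyWavelets package as follows; the helix data and the energy-per-scale descriptor are illustrative assumptions, not the thesis' invariant features.

```python
# Hedged sketch (illustrative, not the thesis code): multiscale characterization of a 3D
# space curve f(t) = (x(t), y(t), z(t)) using a 1D wavelet transform of each coordinate,
# computed here with the PyWavelets package.
import numpy as np
import pywt

t = np.linspace(0, 4 * np.pi, 512)
curve = np.stack([np.cos(t), np.sin(t), 0.1 * t])          # toy helix, shape (3, n_samples)

features = []
for coord in curve:                                        # multilevel 1D wavelet decomposition per coordinate
    coeffs = pywt.wavedec(coord, "db4", level=4)
    features.append([np.sum(c ** 2) for c in coeffs])      # energy per scale as a simple descriptor

features = np.array(features)                              # shape (3 coordinates, 5 scale bands)
print(np.round(features, 3))
```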
APA, Harvard, Vancouver, ISO, and other styles
48

Halajová, Andrea. "Analýza únikových tras v 3D modelu budovy." Master's thesis, Vysoké učení technické v Brně. Fakulta stavební, 2016. http://www.nusl.cz/ntk/nusl-390171.

Full text
Abstract:
This diploma thesis deals with the analysis of escape routes in a 3D building model. First, a 3D BIM model in IFC format is imported into the GIS software ArcGIS. A topological network representing the rooms and their connections is created from the model. A network analysis of the building's escape routes is then performed on this model. The results are five graphical representations of the networks, a web visualization, and the time required to exit the building from each room.
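The escape-route computation can be illustrated with a shortest-path query on a room-connectivity graph; the sketch below uses networkx with a made-up building rather than the thesis' ArcGIS network dataset.

```python
# Hedged sketch (illustrative, not the thesis ArcGIS workflow): escape-route analysis on a
# room-connectivity network, computing the shortest path from each room to an exit, with
# edge weights acting as travel times.
import networkx as nx

G = nx.Graph()
# edges: (room, room, travel time in seconds) - toy building
G.add_weighted_edges_from([("office_101", "corridor", 5),
                           ("office_102", "corridor", 6),
                           ("corridor", "stairwell", 10),
                           ("stairwell", "exit", 8),
                           ("corridor", "exit", 20)])

times = nx.single_source_dijkstra_path_length(G, "exit")   # travel time from the exit to every room
for room, t in sorted(times.items(), key=lambda kv: kv[1]):
    if room != "exit":
        print(f"{room}: {t} s, route {nx.shortest_path(G, room, 'exit', weight='weight')}")
```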
APA, Harvard, Vancouver, ISO, and other styles
49

Thyagaraj, Suraj. "Dynamic System Analysis of 3D Ultrasonic Neuro-Navigation System." Available to subscribers only, 2009. http://proquest.umi.com/pqdweb?did=1967797551&sid=3&Fmt=2&clientId=1509&RQT=309&VName=PQD.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Dool, Carly Jade. "Immunohistochemical and 3D analysis of the human fetal palate." Thesis, University of British Columbia, 2016. http://hdl.handle.net/2429/58725.

Full text
Abstract:
Objectives: Hard palate development occurs between 7 and 12 weeks post conception with the fusion of the epithelial-lined maxillary prominences, creating a midline epithelial seam. The failure of fusion or of seam removal in the hard palate leads to cleft palate or cyst formation. The mechanism of soft palate formation is less well defined: evidence exists supporting both fusion and the alternative mechanism of merging. The aim of this study is to densely sample the late embryonic-early fetal period between 54 and 84 days post-conception to determine the mechanism and timing of palate closure. Methods: 28 human fetal heads aged 54-74 days were serially sectioned and subjected to immunohistochemistry. Several archival specimens had the coverslips removed and were used for IHC. Seven fetal heads aged 67-84 days underwent MRI and microCT with phosphotungstic acid (PTA) contrast agent. Qualitative analysis of 3-dimensional shape changes during palatal development was completed using the 3D Slicer program. Results: We confirm the presence of an epithelial seam extending throughout the soft palate in 57-day specimens, suggesting fusion. Cytokeratin antibody staining confirmed the epithelial character of the cells in the midline seam and showed no difference in the intensity of staining between the endodermally and ectodermally derived epithelium. There was surprisingly no E-Cadherin antibody staining in the midline seam, although a positive signal was found in the dental lamina and the dorsal surface of the tongue. MF-20 antibody staining identified the facial musculature, including the palatine muscles controlling movement of the soft palate. The midline seam in the soft palate is rapidly degraded prior to 64 days; however, MRI and PTA-microCT imaging revealed that the hard palate midline seam is almost completely intact until at least 84 days. Conclusions: The midline seam in the hard and soft palate shows similar cytokeratin antibody staining of its epithelial cells. There is a difference in E-Cadherin staining, which may be tied to the epithelial-mesenchymal transformation that takes place in the midline epithelial seam. The 3D imaging results were superior with PTA staining and revealed the complex anatomy of the oropharynx. PTA staining can be used in the future to comprehensively document the fusion process as well as muscular development in the soft palate.
Dentistry, Faculty of
Graduate
APA, Harvard, Vancouver, ISO, and other styles