
Dissertations / Theses on the topic 'Standards 3D'

Consult the top 47 dissertations / theses for your research on the topic 'Standards 3D.'

1

Gharechaie, Arman Tommy, and Omid Darab. "Achieving New Standards in Prosthetic Socket Manufacturing." Thesis, Mälardalens högskola, Innovation och produktrealisering, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-45231.

Full text
Abstract:
Preface: The research on the product development of a prosthetic socket was conducted by two students from Mälardalen University, department of Innovation, Design, and Technology. Background: The most recent public survey shows that an estimated 5 million people in China are amputees, of whom a significantly large portion are below-elbow amputees. Sockets sold to below-elbow amputees are equipped with only two surface electromyography sensors, offer low comfort, suffer from perspiration problems, and are heavy. The current standard for socket manufacturing has not changed in decades. Research Questions: The following research questions determined the direction of the research: (1) What measurable factors contribute to a convenient and ergonomic feature design in a prosthetic socket from the end-user's perspective? (2) How can the weight and functionality be improved to achieve a prosthetic socket better suited to the end-user, with respect to the existing prosthetic socket? (3) Which material and manufacturing method are suitable for producing cost-effective and customized prosthetic sockets? Research Method: The research was guided by the 5th edition of Product Design and Development by Ulrich & Eppinger (2012); five of the six phases of the product development process it describes, from planning to testing and refinement, were utilized. The data collection and analysis techniques performed in this research were guided by Research Methods for Students, Academics and Professionals by Williamson & Bow (2002). Interviews were conducted with five different stakeholders to establish the specification of requirements and to concretize subjective notions of what defines quality and ergonomics. Implementation: Currently, below-elbow amputees order sockets from orthopedic clinics. The socket was identified as a product of Ottobock. Investigations were made to find optimal solutions to the specification of requirements. Results: A socket concept was developed and designed for additive manufacturing using a multi-jet fusion printer. Analysis: The concept showed significant improvements: a higher degree of customizability, 30 % reduced weight, 48 % cost reduction, a new production workflow with 93.5 % automation, and a 69 % reduction in manual work hours. Conclusions: The research data strongly indicate potential for enhancing socket design techniques and outcomes through the implementation of additive manufacturing processes. This can prove beneficial for achieving more competitive prosthetics and associated services.
2

Apaydin, Ozan. "Networked humanoid animation driven by human voice using extensible 3D (X3D), H-Anim and JAVA speech open standards." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2002. http://handle.dtic.mil/100.2/ADA401793.

Full text
3

Winterbottom, Marc. "Individual Differences in the Use of Remote Vision Stereoscopic Displays." Wright State University / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=wright1433453135.

Full text
4

Jaillot, Vincent. "3D, temporal and documented cities : formalization, visualization and navigation." Thesis, Lyon, 2020. http://www.theses.fr/2020LYSE2026.

Full text
Abstract:
The study and understanding of the evolution of cities is an important societal issue, particularly for improving the quality of life in an increasingly dense city. Digital technology, and in particular 3D city models, can be part of the answer. Their manipulation is however sometimes complex due to their thematic, geometric and topological dimensions and their hierarchical structure. In this thesis, we focus on the integration of the temporal dimension and on the enrichment of these 3D city models with multimedia documents, with the objective of visualization and navigation on the web. Moreover, we take a particular interest in interoperability (based on standards), reusability (with a shared software architecture and open source components) and reproducibility (to make our experiments durable). Our first contribution is a formalization of the temporal dimension of cities for interactive navigation and visualization on the web. For this, we propose a conceptual model of existing standards for the visualization of cities on the web, which we extend with a formalization of the temporal dimension. We also propose a logical model and a technical specification of these proposals. Our second contribution allows the integration of multimedia documents into city models for spatial, temporal and thematic visualization and navigation on the web. We propose a conceptual model for the integration of heterogeneous and multidimensional geospatial data, and then use it for the integration of multimedia documents and 3D city models. Finally, this thesis took place in a multidisciplinary context via the Fab-Pat project of the LabEx IMU, which focuses on cultural heritage sharing and shaping. In this framework, a contribution combining social sciences and computer science led to the design of DHAL, a methodology for the comparative analysis of devices for sharing heritage via digital technology.
5

Pragarauskaitė, Julija. "3D objektų eksportavimas į M3G formatą, naudojamą 3D grafikai mobiliuosiuose telefonuose." Master's thesis, Lithuanian Academic Libraries Network (LABT), 2009. http://vddb.library.lt/obj/LT-eLABa-0001:E.02~2008~D_20090908_201759-77547.

Full text
Abstract:
In this master's thesis the M3G standard for 3D graphics in mobile phones, its compatibility, and the possibilities of exporting 3D models to the M3G format are investigated. The most popular tools for 3D modelling and for exporting to the M3G format are analyzed. Blender was selected as the main modelling tool because its 3D model data can be accessed using Python scripts. An export scheme from Blender to the M3G format was created and realized using the Python and Java programming languages. The exporter works as a plug-in for the Blender modelling tool and can be accessed from the main Blender menu. Python is used to extract the data of the 3D scene and save it in XML format. Java is used to read the 3D data from the XML file, build the hierarchical tree of 3D elements, construct the data arrays and sections in which the 3D scene data are kept, and export the data to an M3G file. The exporter can also load an exported M3G file into a mobile application and display it in a Java mobile-phone emulator. Using several 3D models, the exporter was compared to other exporters with respect to how well the exported objects' properties and transformations match the original 3D model, reliability, and other criteria.
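The export pipeline described above (Python reads the Blender scene and writes XML, Java then builds the M3G file) can be illustrated with a minimal sketch. The snippet below is not the thesis code; it assumes the modern Blender Python API (bpy), whereas the original work targeted a 2008-era Blender release, and the XML element names are invented for illustration.

```python
# Minimal sketch of the first stage of such an exporter: dump mesh data
# from the current Blender scene to XML (assumes the modern bpy API;
# element names are illustrative, not the thesis' actual schema).
import bpy
import xml.etree.ElementTree as ET

def export_scene_to_xml(filepath):
    root = ET.Element("scene")
    for obj in bpy.context.scene.objects:
        if obj.type != 'MESH':
            continue
        mesh_el = ET.SubElement(root, "mesh", name=obj.name)
        mesh = obj.data
        verts_el = ET.SubElement(mesh_el, "vertices")
        for v in mesh.vertices:
            ET.SubElement(verts_el, "v",
                          x=str(v.co.x), y=str(v.co.y), z=str(v.co.z))
        faces_el = ET.SubElement(mesh_el, "faces")
        for poly in mesh.polygons:
            ET.SubElement(faces_el, "f",
                          indices=" ".join(str(i) for i in poly.vertices))
    ET.ElementTree(root).write(filepath, encoding="utf-8")

# A Java-side reader would then parse this XML, build the scene graph
# and serialize it to M3G, as the abstract describes.
```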
6

Lohse, Dag. "Ein Standard-File für 3D-Gebietsbeschreibungen." Universitätsbibliothek Chemnitz, 2005. http://nbn-resolving.de/urn:nbn:de:swb:ch1-200501080.

Full text
Abstract:
This is the documentation of a file format for describing three-dimensional FEM domains in boundary representation. An internal data base serves as the link between the external file format and the various processing programs.
7

Burns, Jessica L. "Defining the Modeling Standard for 3D Character Artists." Digital Commons @ East Tennessee State University, 2015. https://dc.etsu.edu/honors/296.

Full text
Abstract:
The focus of this thesis is to find the most modern methods of crafting 3D characters for implementation in game engines. The industry is constantly adapting to new software, and my study covers the most efficient way to take a character from an idea to a fully realized character in 3D. The following is my journey in learning new techniques and adapting to the new software. To demonstrate, I work through the process of creating a character from a 2D concept to a 3D model rendered in real time.
8

Nzetchou, Stéphane. "Méthodologie d'enrichissement sémantique de la CAO dans un environnement de continuité numérique." Thesis, Compiègne, 2021. http://www.theses.fr/2021COMP2642.

Full text
Abstract:
The digital transition in the manufacturing industry carries a legacy of three or even four decades. Some CAD models or digital mock-ups accumulated during this period are frozen solids, i.e. 3D models without a construction tree, characterized by missing geometry due to software changes or to versions of 3D formats that have not been updated. Reverse engineering activities on CAD models aim at obtaining semantically rich 3D models, i.e. parametric and modifiable models made up of construction operations, carrying attributes and metadata, with geometric rules and constraints, etc., thanks to the use of engineering tools such as CATIA or to approaches based on point clouds coming from a scan, for example. This is still not satisfactory, because at the end of the reverse engineering activities we often obtain a solid with a weak semantic representation or no construction tree. This leads us to propose, in the framework of this thesis, a methodology for managing the information linked to CAD models in order to integrate into these models expert information that we call semantics. The frozen solids handled are usually in low-level formats such as STL, IGES or STEP AP203. They are used as input data for our methodology and can be associated with product definition data, such as a product drawing or documents. Processing CAD models requires a solution that can manage, on the one hand, the digital mock-ups and the information they may carry and, on the other hand, the incompleteness of some CAD models, which is linked to the 3D format or to the limits of the technology used to obtain the model (e.g. software limits, 3D formats that only represent geometry and cannot carry a construction tree or graphically represent geometric dimensions and tolerances, etc.). Finally, the relevance of the non-geometric information integrated into the CAD model during the semantic recovery phase should make it possible, in certain cases, to produce parameterized CAD models specific to the activity of the application domain. The state of the art on the representation of the information contained in a CAD model and on the management of this information identifies techniques and approaches that help the semantic enrichment of CAD models at various levels of granularity. This thesis proposes a methodology named Vaquero For CAD Semantic Enrichment (VFCSE), which consists of three steps: access, identification and annotation. The aim of this methodology is to integrate missing, standardized information of a non-geometric nature, such as product specifications, tolerances, geometric dimensions, etc., into frozen solids. This information is derived from the needs of the user working on the CAD model and comes from a semantically rich standard, so as to be useful for many operations related to the product life cycle. Enrichment based on this semantically rich standard allows the information to be preserved over time and the CAD model information to be reused efficiently. To do this, a CAD model is retrieved from a PDM (Product Data Management) system through a user request. It is visualized in a 3D viewer supporting the STL, IGES and STEP AP203 formats. A step of identifying the components of the CAD model follows; these components can be parts or assemblies. The identified components are given usage-related annotations based on the STEP AP242 format, which represents the semantically rich standard. These annotations are stored in a standardized ontology, which serves as a minimal basis for carrying all the semantics to be integrated into the CAD model, in order to make it durable and reusable. The scientific contribution of this work lies mainly in the possibility of reverse engineering by using ontologies to annotate 3D models according to the needs of the user who has the CAD model at their disposal.
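As an illustration of the annotation step described above (storing non-geometric annotations of identified components in an ontology), the sketch below uses the rdflib Python library with an invented namespace and invented property names; it is not the VFCSE implementation, only a plausible shape for attaching a tolerance annotation to a part.

```python
# Hedged sketch: attaching a non-geometric annotation (here a flatness
# tolerance) to an identified CAD component as RDF triples.
# The namespace and property names are invented for illustration.
from rdflib import Graph, Namespace, Literal, RDF, URIRef

EX = Namespace("http://example.org/vfcse#")   # hypothetical ontology namespace

g = Graph()
g.bind("ex", EX)

part = URIRef(EX["Part_housing_01"])          # an identified component
annotation = URIRef(EX["Annotation_001"])

g.add((part, RDF.type, EX.Part))
g.add((annotation, RDF.type, EX.GeometricTolerance))
g.add((annotation, EX.toleranceType, Literal("flatness")))
g.add((annotation, EX.toleranceValue, Literal(0.05)))     # mm, illustrative
g.add((part, EX.hasAnnotation, annotation))

print(g.serialize(format="turtle"))
```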
9

Evans, Alexander. "3D-visualisering av detaljplaner : Standarder och riktlinjer." Thesis, Karlstads universitet, Institutionen för miljö- och livsvetenskaper (from 2013), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kau:diva-68522.

Full text
Abstract:
The use of and need for 3D models in spatial planning has increased, both nationally and internationally. At present there are no standards or guidelines for how 3D visualization of detailed development plans should be approached, which creates uncertainty about how they should be visualized. Most Swedish municipalities are positive towards the introduction of a common national standard for 3D visualizations in the planning process, as this would likely simplify and improve the work process in 3D detailed planning. The purpose of the work was to investigate which guidelines and approaches can be used so that 3D models in planning processes increase understanding and engagement in, for example, consultations and citizen dialogue. Proposed guidelines and approaches for 3D visualization of detailed development plans were produced, with focus on the concepts of level of detail, purpose, height and degree of exploitation. It was also examined whether either of the standards SOSI or CityGML is suitable for 3D detailed planning in Sweden, based on the requirements of the Swedish Planning and Building Act. The method consisted mainly of a literature study of both national and international research, supplemented by information gathered through personal communication with officials from the municipalities of Karlstad and Falun. A case study was carried out on the detailed development plans Sundsta torg and Hyttan 16 and 18, discussing how these could be visualized in 3D. Based on the results, it was concluded that there is currently too little research on the subject to produce guidelines for a full-scale 3D model. The results also showed that the current version of SOSI is not suitable for 3D visualization of detailed development plans, but that the material describing the upcoming version looks promising, and that CityGML has potential for use in detailed planning, although it should preferably include more levels of detail.
10

Tita, Ralf. "Variable isozentrische Steuerung für einen Standard-C-Bogen mit echtzeitfähiger 3D-Rekonstruktion /." Düsseldorf : VDI-Verl, 2007. http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&doc_number=015964890&line_number=0001&func_code=DB_RECORDS&service_type=MEDIA.

Full text
11

Alajmi, Abdulrahman N. "On The Lattice Size With Respect To The Standard Simplex in 3D." Kent State University / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=kent1598893373379275.

Full text
12

Trache, Marian Tudor. "The agreement between 3D, standard 2D and triplane 2D speckle tracking: effects of image quality and 3D volume rate." Doctoral thesis, Universitätsbibliothek Leipzig, 2016. http://nbn-resolving.de/urn:nbn:de:bsz:15-qucosa-202530.

Full text
Abstract:
Technological developments in echocardiography over the last decade have enabled new methods for objectively assessing regional left ventricular wall motion. Speckle tracking captures myocardial deformation by following the change in position of individual image points from one frame of the analyzed dataset to the next. This method is superior to tissue Doppler, in particular because it is independent of the insonation angle. Two-dimensional (2D) speckle tracking analyses have been validated for clinical practice. Three-dimensional (3D) echocardiography now permits speckle tracking analyses of 3D datasets, which, however, have not yet been sufficiently validated for clinical use. 3D and 2D speckle tracking analyses were performed in patients with normal regional left ventricular wall motion (N=37) and in patients with ischemia-related wall motion abnormalities (N=18). These measurements were used to assess the agreement of the two methods in quantifying normal and pathological wall motion patterns. In addition, the influence of the frame rate and image quality of the 3D datasets on the agreement between the two methods was analyzed. 2D and 3D speckle tracking showed good agreement in the diagnosis of impaired left ventricular systolic function and in localizing circumscribed wall motion abnormalities. However, 2D and 3D speckle tracking cannot yet be regarded as equivalent methods. Image quality, in both modalities in general but especially in 3D datasets, as well as the volume rate of the 3D datasets, has a significant influence on 3D strain analyses. Proper standardization of the analyzed recordings and optimal image quality are important factors determining the reliability of 2D and 3D speckle tracking.
13

Amiri, Delaram. "Bilateral and adaptive loop filter implementations in 3D-high efficiency video coding standard." Thesis, Purdue University, 2016. http://pqdtopen.proquest.com/#viewpdf?dispub=10109195.

Full text
Abstract:

In this thesis, we describe a different implementation of the in-loop filtering method for 3D-HEVC. First we propose the use of the adaptive loop filtering (ALF) technique for 3D-HEVC standard in-loop filtering. This filter uses a Wiener-based method to minimize the mean squared error between filtered and original pixels. The performance of the adaptive loop filter at the picture level is evaluated. Results show up to 0.2 dB PSNR improvement in the luminance component for the texture and 2.1 dB for the depth. In addition, we obtain up to 0.1 dB improvement in the chrominance component for the texture view after applying this filter at the picture level. Moreover, a design of in-loop filtering with a fast bilateral filter for the 3D-HEVC standard is proposed. The bilateral filter smooths an image while preserving strong edges and can remove artifacts from an image. The performance of the bilateral filter at the picture level for 3D-HEVC is evaluated, using test model HTM-6.2 to demonstrate the results. Results show up to a 20 percent reduction in 3D-HEVC processing time with little effect on the PSNR of the encoded 3D video when using the fast bilateral filter.
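Since the abstract centers on the bilateral filter as an edge-preserving in-loop smoother, a compact reference implementation helps make the idea concrete. The following NumPy sketch is a generic (and deliberately slow) bilateral filter for a grayscale image; it is not the HTM-6.2 integration, and the parameter values are arbitrary.

```python
# Generic bilateral filter sketch (not the 3D-HEVC/HTM-6.2 code):
# each output pixel is a weighted average of its neighbours, with weights
# that fall off both with spatial distance and with intensity difference,
# which smooths flat areas while preserving strong edges.
import numpy as np

def bilateral_filter(img, radius=2, sigma_s=2.0, sigma_r=10.0):
    img = img.astype(np.float64)
    h, w = img.shape
    out = np.zeros_like(img)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2.0 * sigma_s**2))
    padded = np.pad(img, radius, mode='edge')
    for y in range(h):
        for x in range(w):
            patch = padded[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            range_w = np.exp(-((patch - img[y, x])**2) / (2.0 * sigma_r**2))
            weights = spatial * range_w
            out[y, x] = np.sum(weights * patch) / np.sum(weights)
    return out

# Example: smooth a noisy block while keeping its sharp edge intact.
block = np.tile([0, 0, 0, 200, 200, 200], (6, 1)).astype(np.float64)
noisy = block + np.random.normal(0, 5, block.shape)
filtered = bilateral_filter(noisy)
```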

14

Kumlin, Maja. "Redovisning av 3D-fastighetsbildning : En analys av nuvarande kravnivå och alternativ metod." Thesis, Högskolan i Gävle, Samhällsbyggnad, GIS, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:hig:diva-27565.

Full text
Abstract:
Since 1 January 2004 it has been possible to reform and form new property units delimited in three dimensions (height, width and depth), defined in the Swedish Real Property Formation Act (FBL) as three-dimensional property units (3D properties). Despite the fact that 3D properties introduce an additional dimension into property formation, few changes were made to the FBL. One of the changes concerned the staking out and marking of boundaries. Boundaries created through property formation must, as before, be staked out and marked to the extent necessary. In 2004 it was added that if staking out or marking cannot suitably be done in a cadastral procedure, the boundaries shall instead be described on the map that is drawn up or in other cadastral documents. Since 3D properties are rarely tied to the ground, the cadastral documents are where the boundaries are described. The difficulties of boundary marking and documentation were discussed already when 3D property formation was introduced, and today there is extensive discussion and a number of investigations worldwide concerning the documentation of 3D property formation measures. Today, the new property boundaries in Swedish 3D property formation are documented in a map, a verbal description and drawings, all in 2D, either on paper or as PDF. The purpose of this thesis is to examine the current requirements for documenting 3D property formation, to analyze the need for a more standardized process, and to describe current alternative documentation methods. The study focuses on the comprehensibility and unambiguity of the cadastral dossiers produced in 3D procedures. It is based on studies of legislation, cadastral dossiers, previous research and other literature, as well as interviews. The results show that there is a need to improve the comprehensibility of the cadastral dossiers produced in 3D property formation today, primarily from a future perspective. All interview respondents consider the dossiers comprehensible for cadastral surveyors and affected parties today, but problems may arise when, for example, a third party in the future needs to understand the legal boundaries, for instance when a boundary change is required. There is also a lack of unambiguity, above all in the drawings that underlie the boundaries. The study has shown that introducing 3D models into the cadastral documents, together with further development of the existing guidelines for drawings used in 3D property formation, would increase the comprehensibility of 3D procedures and could thereby contribute to a smoother urban development process.
15

Banjak, Hussein. "X-ray computed tomography reconstruction on non-standard trajectories for robotized inspection." Thesis, Lyon, 2016. http://www.theses.fr/2016LYSEI113/document.

Full text
Abstract:
X-ray computed tomography (CT) is a powerful tool for characterizing or localizing inner flaws and for verifying the geometric conformity of an object. In contrast to medical applications, the scanned object in non-destructive testing (NDT) may be very large and composed of high-attenuation materials, and consequently the use of a standard circular trajectory for data acquisition can be impossible due to constraints in space. For this reason, the use of robotic arms is one of the acknowledged new trends in NDT, since it allows more flexibility in acquisition trajectories and can therefore be used for 3D reconstruction of hardly accessible regions, which is a major limitation of classical CT systems. A robotic X-ray inspection platform has been installed at CEA LIST. The system integrates two robots that move the X-ray generator and the detector. Among the new challenges brought by robotic CT, we focus in this thesis more particularly on the limited angular coverage imposed by the setup, where important constraints restrict the mechanical motion of the platform. The second major challenge is the truncation of projections that occurs when only a field of view (FOV) of the object is seen by the detector. Before performing real robotic inspections, we rely heavily on CT simulations to evaluate the capability of a reconstruction algorithm for a given scanning trajectory and data acquisition configuration. For this purpose we use CIVA, an advanced NDT simulation platform developed at CEA that provides a realistic model of radiographic acquisitions and can simulate the projection data corresponding to a specific CT scene defined by the user. The main objective of this thesis is thus to develop analytical and iterative reconstruction algorithms adapted to non-standard trajectories and to integrate these algorithms into the CIVA software as reconstruction plugins.
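The abstract mentions both analytical and iterative reconstruction algorithms for non-standard trajectories. As a hedged illustration of the iterative family (not the algorithms actually implemented in CIVA), the sketch below shows one sweep of the classic ART/Kaczmarz update, which needs only the rows of the system matrix and therefore does not care whether the trajectory is circular or robotic.

```python
# One ART (Kaczmarz) sweep: for each measured ray, project the current
# image estimate onto the hyperplane defined by that ray's line integral.
# A is the system matrix (n_rays x n_voxels), b the measured projections.
# This is a generic textbook step, not the CIVA plugin implementation.
import numpy as np

def art_sweep(A, b, x, relax=0.5):
    for i in range(A.shape[0]):
        a_i = A[i]
        norm2 = a_i @ a_i
        if norm2 == 0.0:
            continue
        residual = b[i] - a_i @ x
        x = x + relax * (residual / norm2) * a_i
    return x

# Tiny illustrative problem: 4 rays through a 3-voxel "image".
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0],
              [1.0, 1.0, 1.0]])
x_true = np.array([1.0, 2.0, 3.0])
b = A @ x_true
x = np.zeros(3)
for _ in range(50):
    x = art_sweep(A, b, x)
print(x)   # converges towards x_true
```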
16

Shaheed, Rawaa. "3D Numerical Modelling of Secondary Current in Shallow River Bends and Confluences." Thesis, Université d'Ottawa / University of Ottawa, 2016. http://hdl.handle.net/10393/34922.

Full text
Abstract:
Secondary currents are one of the important features that characterize flow in river bends and confluences. Fluid particles follow a helical path instead of moving nearly parallel to the axis of the channel. The local imbalance between the vertically varying centrifugal force and the cross-stream pressure gradient generates the secondary flow and gives rise to the typical helical motion. A number of experimental and mathematical studies have been conducted to examine flow characteristics in curved open channels, river meanders and confluences. In this research, the influence of secondary currents on the water surface elevation and on hydraulic structures in channel bends and confluences is studied by employing a 3D OpenFOAM numerical model. The research implements the 3D OpenFOAM numerical model to simulate the horizontal distribution of the flow in curved rivers; progress in unraveling and understanding bend dynamics is also considered. The finite volume method in the OpenFOAM software is used to simulate and examine the behavior of the secondary current in channel bends and confluences, and a comparison between experimental data and the numerical model is then conducted. Two sets of experimental data are used: the data provided by Rozovskii (1961) for a sharply curved channel, and the dataset provided by Shumate (1998) for a confluent channel. Two OpenFOAM solvers were selected to reproduce the experiments: InterFoam and PisoFoam. InterFoam is a transient solver for incompressible flow used here for open-channel flow with a free-surface model; PisoFoam is a transient solver for incompressible flow used here for closed-channel flow with a rigid-lid model. Various turbulence models (standard k-ε, realizable k-ε, LRR and LES) are applied in the numerical model to assess their accuracy in predicting the behaviour of the flow in channel bends and confluences, and the accuracies of these turbulence models are examined and discussed.
17

Abeidia, Ilham. "Méthodes de comparaison et de synthèse des ECG orthogonaux et 3D à partir des ECG standard à 12 dérivations." Lyon, INSA, 1994. http://www.theses.fr/1994ISAL0134.

Full text
Abstract:
In this thesis we propose a methodology for the comparative analysis of the spatio-temporal structure of various surface electrocardiographic images representative of the electrical activity of the heart, obtained either by synthesis from the standard 12-lead ECG (ECG-12D) or directly by vectorcardiography (VCG). Based on the CAVIAR method for serial analysis of orthogonal or pseudo-orthogonal ECGs, our approach transforms the 12-lead ECG signals into 3D ECGs using different synthesis methods, then compares them with one another and with the Frank VCG by optimally superimposing the traces in a new 3D representation space attached to the heart. This yields an optimal quantification of the spatio-temporal differences between electrocardiographic images that is independent of inter- and intra-subject variability of extra-cardiac origin. Nine synthesis methods published in the literature (essentially based on a statistical approach) were implemented and evaluated on 250 simultaneous 15-lead ECG+VCG recordings from the European CSE database ("Common Standards for Quantitative Electrocardiography"). An optimization of the existing synthesis methods, as well as a new transformation, were developed and implemented. The latter uses factor analysis techniques to minimize methodological information loss while taking the specificity of the patient into account. A complementary diagnostic evaluation was carried out using pattern recognition methods based on stepwise discriminant factor analysis, in order to compare the semantic content of the information present in the synthesized 3D ECGs and to verify the reproducibility of the results. We conclude with some recommendations for controlling serial ECG analysis processes, developed on the basis of an in-depth statistical analysis with a cognitive aim.
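The synthesis methods evaluated in this work are essentially linear (statistical) transforms from the 12-lead ECG to three orthogonal leads. The sketch below only shows the generic form of such a transform; the 3×8 coefficient matrix here is a random placeholder, not one of the nine published matrices nor the new transform developed in the thesis.

```python
# Generic form of a 12-lead -> orthogonal (X, Y, Z) synthesis: a linear
# transform applied to the 8 independent leads (I, II, V1..V6).
# The coefficient matrix below is a random placeholder, NOT a published one.
import numpy as np

rng = np.random.default_rng(0)
T = rng.normal(size=(3, 8))            # placeholder synthesis matrix

def synthesize_xyz(ecg_8lead):
    """ecg_8lead: array of shape (8, n_samples) holding leads I, II, V1..V6."""
    return T @ ecg_8lead               # shape (3, n_samples): X, Y, Z

ecg = rng.normal(size=(8, 500))        # fake 8-lead recording, 500 samples
vcg = synthesize_xyz(ecg)
print(vcg.shape)                       # (3, 500)
```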
18

Trache, Marian Tudor [Verfasser], Andreas [Akademischer Betreuer] Hagendorff, Gerhard [Gutachter] Schuler, and Fabian [Gutachter] Knebel. "The agreement between 3D, standard 2D and triplane 2D speckle tracking: effects of image quality and 3D volume rate / Marian Tudor Trache ; Gutachter: Gerhard Schuler, Fabian Knebel ; Betreuer: Andreas Hagendorff." Leipzig : Universitätsbibliothek Leipzig, 2016. http://d-nb.info/1240481381/34.

Full text
19

Vignetti, Matteo Maria. "Development of a 3D Silicon Coincidence Avalanche Detector (3D-SiCAD) for charged particle tracking." Thesis, Lyon, 2017. http://www.theses.fr/2017LYSEI017/document.

Full text
Abstract:
The objective of this work is to develop a novel position-sensitive charged particle detector referred to as a "3D Silicon Coincidence Avalanche Detector" (3D-SiCAD). The working principle of this novel device relies on time-coincidence detection between a pair of vertically aligned Geiger-mode avalanche diodes, with the aim of achieving negligible noise levels with respect to detectors based on conventional avalanche diodes, such as Silicon Photo-Multipliers (SiPM), while at the same time providing single charged-particle detection capability thanks to the high charge multiplication gain inherent in Geiger-mode operation. A 3D-SiCAD could be particularly suitable for nuclear physics applications, in the field of High Energy Physics experiments and in emerging Medical Physics applications such as hadron therapy and Proton Computed Tomography, whose future developments demand unprecedented figures in terms of material budget, noise, spatial resolution, radiation hardness, power consumption and cost-effectiveness. In this work, a 3D-SiCAD demonstrator has been successfully developed and fabricated in the Austria Micro-Systems High-Voltage 0.35 µm CMOS technology by adopting a "flip-chip" approach for the 3D assembly. The characterization results demonstrate the feasibility of this novel device and validate the expected performance in terms of excellent particle detection efficiency and noise rejection capability with respect to background counts.
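The core idea of the 3D-SiCAD, counting only events in which the two stacked Geiger-mode diodes fire within a short coincidence window so that uncorrelated dark counts are rejected, can be illustrated with a small timestamp-matching sketch. The window value and pulse trains below are invented for illustration, not measured device data.

```python
# Hedged sketch of time-coincidence detection between two avalanche diodes:
# keep only hits of diode A that have a hit of diode B within +/- window.
# Timestamps are in nanoseconds; all numbers are illustrative.
import numpy as np

def coincidences(t_a, t_b, window_ns=5.0):
    t_b = np.sort(t_b)
    idx = np.searchsorted(t_b, t_a)
    # distance to the nearest hit in B (check neighbours on both sides)
    left = np.abs(t_a - t_b[np.clip(idx - 1, 0, len(t_b) - 1)])
    right = np.abs(t_a - t_b[np.clip(idx, 0, len(t_b) - 1)])
    return t_a[np.minimum(left, right) <= window_ns]

rng = np.random.default_rng(1)
particles = np.sort(rng.uniform(0, 1e6, 50))     # true particle crossings
dark_a = rng.uniform(0, 1e6, 500)                # uncorrelated dark counts
dark_b = rng.uniform(0, 1e6, 500)
hits_a = np.concatenate([particles, dark_a])
hits_b = np.concatenate([particles + rng.normal(0, 1, 50), dark_b])

# Roughly 50 survivors: the shared particle hits pass, accidental
# coincidences between dark counts are rare.
print(len(coincidences(hits_a, hits_b)))
```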
20

Onyeako, Isidore. "Resolution-aware Slicing of CAD Data for 3D Printing." Thesis, Université d'Ottawa / University of Ottawa, 2016. http://hdl.handle.net/10393/34303.

Full text
Abstract:
3D printing applications have achieved increased success as an additive manufacturing (AM) process. The micro-structure of mechanical and biological materials presents design challenges owing to the resolution of 3D printers and to material properties and composition. Biological materials are complex in structure and composition, and 3D printer manufacturers have made efforts to provide materials with varying physical, mechanical and chemical properties to handle simple to complex applications. As 3D printing finds more medical applications, we expect future uses in areas such as hip replacement, where the smoothness of the femoral head is important to reduce friction that can cause considerable pain to the patient. Print resolution plays a vital role due to the staircase effect. In practical applications where 3D printing is intended to produce replacement parts with movable joints, low-resolution printing results in fused joints when the joint clearance is intended to be very small. Various 3D printers are capable of print resolutions of up to 600 dpi (dots per inch), as quoted in their datasheets. Although this level of detail can satisfy the micro-structure needs of a large set of biological and mechanical models under investigation, it is important that a 3D slicing application be able to check whether the printer can properly produce the smallest feature in a model. One way to perform this check would be physical measurement of printed parts and comparison with the expected results. Our work includes a method that uses ray casting to detect features in 3D CAD models whose sizes are below the minimum allowed by the printer resolution. The resolution validation method is tested using a few simple and complex 3D models. Our proposed method serves two purposes: (a) to assist CAD model designers in developing models whose printability is assured, by warning or preventing the designer when they are about to perform shape operations that would lead to regions or features smaller than the printer resolution; and (b) to validate slicing outputs before G-code generation by identifying regions or features smaller than the printer resolution.
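A hedged sketch of the kind of check described above: cast rays (here simple scanlines across a 2D slice polygon) and flag any solid span narrower than the printer resolution. It uses the shapely library, and the minimum feature size and geometry are invented; the thesis' own ray casting works on the 3D CAD model rather than on a single 2D slice.

```python
# Simplified resolution check on one 2D slice: intersect horizontal rays
# with the slice polygon and report solid spans thinner than the printer's
# minimum feature size. Illustrative only; values and geometry are invented.
from shapely.geometry import Polygon, LineString

MIN_FEATURE_MM = 0.3            # assumed printable feature size

# An L-shaped slice with a thin 0.2 mm wall on its right side.
slice_poly = Polygon([(0, 0), (10, 0), (10, 5), (9.8, 5), (9.8, 1), (0, 1)])

def thin_spans(poly, min_feature, step=0.1):
    minx, miny, maxx, maxy = poly.bounds
    violations = []
    y = miny + step / 2
    while y < maxy:
        ray = LineString([(minx - 1, y), (maxx + 1, y)])
        hit = ray.intersection(poly)
        segments = getattr(hit, "geoms", [hit]) if not hit.is_empty else []
        for seg in segments:
            if seg.length < min_feature:
                violations.append((y, seg.length))
        y += step
    return violations

for y, width in thin_spans(slice_poly, MIN_FEATURE_MM):
    print(f"span of {width:.2f} mm at y={y:.2f} is below the printer resolution")
```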
21

Strand, Robin. "Distance Functions and Image Processing on Point-Lattices : with focus on the 3D face- and body-centered cubic grids." Doctoral thesis, Uppsala universitet, Centrum för bildanalys, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-9312.

Full text
Abstract:
There are many imaging techniques that generate three-dimensional volume images today. With higher precision in image acquisition equipment, storing and processing these images requires an increasing amount of data processing capacity. Traditionally, three-dimensional images are represented by cubic (or cuboid) picture elements on a cubic grid. The two-dimensional hexagonal grid has some advantages over the traditionally used square grid: for example, fewer samples are needed to obtain the same reconstruction quality, it is less rotationally dependent, and each picture element has only one type of neighbor, which simplifies many algorithms. The corresponding three-dimensional grids are the face-centered cubic (fcc) grid and the body-centered cubic (bcc) grid. In this thesis, image representations using non-standard grids are examined. The focus is on the fcc and bcc grids and on tools for processing images on these grids, but distance functions and related algorithms (distance transforms and various representations of objects) are defined in a general framework allowing any point lattice in any dimension. Formulas for point-to-point distance and conditions for metricity are given in the general case, and parameter optimization is presented for the fcc and bcc grids. Some image acquisition and visualization techniques for the fcc and bcc grids are also presented, and more theoretical results define distance functions for grids of arbitrary dimension. Fewer samples are needed to represent images on non-standard grids; thus, the huge amount of data generated by, for example, computed tomography can be reduced by representing the images on non-standard grids such as the fcc or bcc grids. The thesis provides a toolbox that can be used to acquire, process, and visualize images on high-dimensional, non-standard grids.
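To make the grid notion concrete, the sketch below lists the nearest-neighbour offsets of the fcc and bcc lattices in their usual integer-coordinate embeddings and runs a simple weighted-distance propagation over the bcc grid. The step weights (plain Euclidean step lengths) are illustrative placeholders, not the optimized parameters derived in the thesis.

```python
# Neighbourhood offsets for the fcc and bcc lattices in their usual integer
# embeddings, plus a small Dijkstra-style weighted-distance propagation on
# the bcc grid. The weights are illustrative; the thesis derives optimized
# weights and metricity conditions.
import heapq
from itertools import product

# fcc: integer points with even coordinate sum; 12 neighbours (+-1, +-1, 0).
FCC_NEIGHBOURS = [off for off in product((-1, 0, 1), repeat=3) if off.count(0) == 1]
# bcc: integer points whose coordinates are all even or all odd;
# 8 diagonal neighbours (+-1, +-1, +-1) and 6 axis neighbours (+-2, 0, 0).
BCC_NEIGHBOURS = ([off for off in product((-1, 1), repeat=3)]
                  + [(2, 0, 0), (-2, 0, 0), (0, 2, 0), (0, -2, 0), (0, 0, 2), (0, 0, -2)])
BCC_WEIGHTS = [3 ** 0.5] * 8 + [2.0] * 6   # illustrative step costs

def bcc_points(radius):
    pts = set()
    for p in product(range(-radius, radius + 1), repeat=3):
        if len({c % 2 for c in p}) == 1:   # all even or all odd
            pts.add(p)
    return pts

def bcc_distance_map(seed=(0, 0, 0), radius=4):
    pts = bcc_points(radius)
    dist = {seed: 0.0}
    heap = [(0.0, seed)]
    while heap:
        d, p = heapq.heappop(heap)
        if d > dist.get(p, float("inf")):
            continue
        for off, w in zip(BCC_NEIGHBOURS, BCC_WEIGHTS):
            q = (p[0] + off[0], p[1] + off[1], p[2] + off[2])
            if q in pts and d + w < dist.get(q, float("inf")):
                dist[q] = d + w
                heapq.heappush(heap, (d + w, q))
    return dist

print(bcc_distance_map()[(2, 2, 0)])   # 2*sqrt(3) ~ 3.46 via two diagonal steps
```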
22

Kyzlink, Lukáš. "Most na křižovatce v Brně." Master's thesis, Vysoké učení technické v Brně. Fakulta stavební, 2014. http://www.nusl.cz/ntk/nusl-226752.

Full text
Abstract:
This diploma thesis describes the design and assessment of the supporting structure of a bridge on a ramp in Brno. The road bridge crosses the I/42 road and a railway and is part of the VMO Brno. The structure is designed in prestressed concrete, and the assessment is carried out in accordance with the applicable European standards. The design of the supporting structure takes into account the staged construction of the bridge. The thesis includes the design calculation, drawing documentation and a 3D visualization.
23

Jelínek, Vít. "Kalibrace skleněných měřítek." Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2015. http://www.nusl.cz/ntk/nusl-232162.

Full text
Abstract:
This thesis deals with a more labour- and time-efficient method for calibrating standard glass scales, for practical use at the Czech Metrology Institute Regional Inspectorate in Brno. The desired streamlining of the calibration was achieved by using the 3D coordinate measuring machine Micro-Vu Excel 4520. In the InSpec service software, six measuring programs were designed for use with a SIP standard glass scale. The measurement uncertainties of this calibration were calculated and presented. The thesis also drafts a proposal for the calibration procedure and a formalized calibration document.
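Where the abstract mentions calculating the measurement uncertainties of the calibration, the usual GUM-style approach is to combine the individual standard uncertainty contributions in quadrature and expand with a coverage factor. The sketch below is generic, and the contribution values are invented, not those of the Micro-Vu Excel 4520 calibration budget.

```python
# Generic GUM-style combination of uncertainty contributions (illustrative
# values, not the actual budget of the glass-scale calibration).
import math

contributions_um = {
    "repeatability": 0.4,       # standard uncertainty of repeated readings
    "CMM scale error": 0.6,
    "temperature": 0.3,
    "reference scale": 0.5,
}

u_combined = math.sqrt(sum(u**2 for u in contributions_um.values()))
U_expanded = 2 * u_combined      # coverage factor k = 2 (~95 % coverage)
print(f"u_c = {u_combined:.2f} um, U (k=2) = {U_expanded:.2f} um")
```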
24

Košťák, Ondřej. "Kalibrace optických souřadnicových měřicích strojů GOM ATOS." Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2017. http://www.nusl.cz/ntk/nusl-318719.

Full text
Abstract:
This master's thesis presents a basic classification of the 3D methods used for reconstructing measured objects, as well as the types of structured-light patterns used for 3D triangulation. It also describes the construction and functions of the GOM ATOS and TRITOP machines. Primarily, the thesis deals with the calibration of these machines, the design of a suitable material standard of size, and the internal calibration methodology used by ŠKODA AUTO a.s. The text also includes verification of the proposed calibration procedure in practice, a proposal for determining the calibration uncertainties, and recommendations for practice.
25

He, Shuang. "Production et visualisation de niveaux de détail pour les modèles 3D urbains." Ecole centrale de Nantes, 2012. http://portaildocumentaire.citechaillot.fr/search.aspx?SC=theses&QUERY=+marie+Durand#/Detail/%28query:%28Id:%270_OFFSET_0%27,Index:1,NBResults:1,PageRange:3,SearchQuery:%28CloudTerms:!%28%29,ForceSearch:!t,Page:0,PageRange:3,QueryString:%27Shuang%20he%27,ResultSize:10,ScenarioCode:theses,ScenarioDisplayMode:display-standard,SearchLabel:%27%27,SearchTerms:%27Shuang%20he%27,SortField:!n,SortOrder:0,TemplateParams:%28Scenario:%27%27,Scope:%27%27,Size:!n,Source:%27%27,Support:%27%27%29%29%29%29.

Full text
Abstract:
3D city models have been increasingly used in a variety of urban applications as platforms to integrate and visualize diverse information. This work addresses level-of-detail (LoD) production and visualization of 3D city models, towards their use in GIS applications. It proposes a hybrid solution for LoD production of 3D city models, using a combination of techniques: extrusion, integration, generalization and procedural modeling. The prerequisite for using our solution is to have data (such as 2D cadastral building footprints) for generating city-wide volumetric building models, and data (such as the road network, administrative divisions, etc.) for dividing the city into meaningful units. It can also take advantage of as many other accessible 2D and 3D models of city objects as possible. Because such requirements can be fulfilled at low cost, the solution may be easily adopted. The main focus of this work is on generalization techniques and algorithms. A generalization framework is proposed based on the divide-and-conquer concept, realized through land-cover subdivision and cell-based generalization. The framework enables city-scale generalization towards more abstract city models, and facilitates local-scale generalization by grouping city objects according to city cells. An implementation of city-scale generalization is realized based on the framework, and a footprint-based approach is developed for generalizing groups of 3D buildings at medium LoD, which can be used for local-scale generalization. Moreover, using the LoD 3D city models produced by the proposed solution, three visualization examples are given to demonstrate some of the potential uses: multi-scale, focus + context, and view-dependent visualization.
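The footprint-based generalization of building groups mentioned above can be sketched with a standard buffer-union-unbuffer (morphological closing) of 2D footprints, here using the shapely library; the distances are arbitrary, and this is only a plausible illustration of the idea, not the algorithm developed in the thesis.

```python
# Hedged sketch of footprint-based group generalization: merge footprints
# that are closer than a threshold by buffering, unioning and shrinking back,
# then simplify the outline. Distances are arbitrary illustrative values.
from shapely.geometry import Polygon
from shapely.ops import unary_union

footprints = [
    Polygon([(0, 0), (4, 0), (4, 3), (0, 3)]),
    Polygon([(4.5, 0), (8, 0), (8, 3), (4.5, 3)]),   # 0.5 m gap to the first
    Polygon([(20, 0), (24, 0), (24, 3), (20, 3)]),   # far away, stays separate
]

merge_dist = 1.0      # buildings closer than this are merged
tolerance = 0.5       # outline simplification tolerance

merged = unary_union([fp.buffer(merge_dist / 2) for fp in footprints])
generalized = merged.buffer(-merge_dist / 2).simplify(tolerance)

# 'generalized' is a (Multi)Polygon: the first two footprints become one
# block, the distant one remains its own block; extruding these outlines
# with a representative height would give the coarser-LoD block models.
print(generalized.geom_type)
```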
APA, Harvard, Vancouver, ISO, and other styles
26

Huot, Stéphane. "Une nouvelle approche pour la conception créative : de l'interprétation du dessin à main levée au prototypage d'interactions non-standard." Phd thesis, Université de Nantes, 2005. http://tel.archives-ouvertes.fr/tel-00010210.

Full text
Abstract:
Many so-called CAD environments exist today, designed to assist and support designers' work throughout a project. Yet these tools are poorly suited to the first phase of a project: creative design. Their geometric and overly prescriptive approaches ignore the fuzzy, iterative nature of the designer's process, which relies on an essential figurative tool: the sketch. This thesis belongs to a line of work focused on using drawing to produce 3D digital models, with the aim of providing computer tools that support creative design. Beyond sketch interpretation, however, we propose an original approach applied to the field of architectural design, based on an instrumented environment: SVALABARD, a virtual drawing table built on a paradigm of interaction sheets that composes input devices and non-standard interaction techniques.
However, the technical feasibility of such an approach was quickly compromised by the rigidity and lack of extensibility of current tools for user interface development. This thesis therefore also proposes a new, more flexible and dynamic software architecture model which, once implemented in the MAGGLITE toolkit, allowed us to facilitate and encourage the prototyping and realization of advanced interactions.
APA, Harvard, Vancouver, ISO, and other styles
27

Weiße, Carsten, and Rene Stöckel. "Quake II meets Java." Universitätsbibliothek Chemnitz, 2005. http://nbn-resolving.de/urn:nbn:de:swb:ch1-200500863.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Comendador, Maramara Marlou, and Jacob Sandström. "Kvalitetskontroll av en fasmätande terrester laserskanner FARO Focus3D." Thesis, Högskolan i Gävle, Avdelningen för Industriell utveckling, IT och Samhällsbyggnad, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:hig:diva-13569.

Full text
Abstract:
There is an ISO standard (ISO 17123) that specifies how most geodetic surveying instruments should be checked. However, this standard does not cover terrestrial laser scanners (TLS), even though such instruments have been on the market for some time. The National Institute of Standards and Technology (NIST) in the USA has developed an American standard for this purpose, which can be used until an ISO standard for TLS is established. The aim of this thesis is to examine a phase-based laser scanner, the FARO Focus3D, belonging to the surveying and mapping department at Tyréns AB in Stockholm. More specifically, we focused on the uncertainty in distance measurement and how it is affected by distance and incidence angle. A further aim is to investigate whether a method developed by the Fédération Internationale des Géomètres (FIG) for determining the zero-point error of total stations can be applied to TLS. The investigation took place in the spring of 2012, inside and outside the roughly 50 m long measurement hall in building 45 at the University of Gävle. The distances studied indoors were 10-40 m at 10 m intervals, using both spherical and flat black-and-white targets, the latter at incidence angles of 0°, 30° and 45°. The determination of the zero-point error was also carried out indoors at a distance of 30 m with a 10 m interval. The distances examined outdoors were 20-120 m at 20 m intervals with the same targets. These distances were chosen because the specifications stated that the FARO Focus3D should be able to register returns from them, 120 m being the maximum distance specified. When checking the deviations from a reference distance at 10-40 m indoors, only the measurements against two targets met the laser scanner's specifications: the spheres at 20 m and 40 m from the instrument. When checking the range noise, no measurements against any target at any distance met the specifications, although the range noise was almost always fairly low, except at 40 m and incidence angles of 30° and 45°, where the increase in noise was very pronounced. Outdoors, no targets could be registered; the point density was too sparse and the number of laser returns far too few. Common to all measurements against black-and-white targets, regardless of incidence angle, is that they show the smallest deviation from the reference distance at the longest tested distance, i.e. 40 m from the instrument. The measurements against the spheres show a lower deviation from the reference distance at all distances than the black-and-white targets do; however, the scans against the spheres have a higher standard uncertainty than those against the black-and-white targets, which was only higher at the longest distance, 40 m. We consider the method for determining the zero-point error easy and relatively quick to work with, and useful for TLS. Our conclusion is that longer distances from the instrument do not necessarily give larger deviations from a reference distance; they can instead result in smaller deviations.
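As a minimal sketch, not part of the thesis, of how such a check is expressed numerically, the deviation from a reference distance and the standard uncertainty of repeated scans against one target reduce to two statistics; the variable names are illustrative.

import numpy as np

def deviation_and_uncertainty(scanned_distances, reference_distance):
    # scanned_distances: repeated scanner distances to one target (m)
    # reference_distance: known reference distance to the same target (m)
    d = np.asarray(scanned_distances, dtype=float)
    deviation = d.mean() - reference_distance      # systematic offset against the reference
    std_uncertainty = d.std(ddof=1)                # standard uncertainty of a single scan
    return deviation, std_uncertainty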
APA, Harvard, Vancouver, ISO, and other styles
29

Bayón, Barrachina Arnau. "Numerical analysis of air-water flows in hydraulic structures using computational fluid dynamics (CFD)." Doctoral thesis, Universitat Politècnica de València, 2018. http://hdl.handle.net/10251/90440.

Full text
Abstract:
The new legal regulations derived from climate change dictate that hydraulic structures must be designed to handle flood events associated with return periods of up to 10,000 years. This obviously involves adapting the existing infrastructure to meet such requirements. In order to avoid risks in the restitution of the flow discharged to rivers, such as bank overflows or streambed erosion and scour processes, hydraulic design must be supported by reliable tools capable of reproducing the behavior of hydraulic structures. In the work presented herein, a fully three-dimensional CFD model to reproduce the behavior of different types of air-water flow in hydraulic structures is presented. The flow is assumed to be turbulent, isotropic and incompressible. Several RANS turbulence models are tested and structured rectangular meshes are employed to discretize the analyzed domain. The presence of two fluids is modeled using different VOF approaches and simulations are run using the PIMPLE algorithm. The model is implemented using the open-source platform OpenFOAM and its performance is compared to the commercial code FLOW-3D. The analysis is conducted separately on two different parts of hydraulic structures, namely the spillway and the stilling basin. Additionally, a case of practical application, where the model reproduces the flow of a real-life case, is also presented in order to prove the suitability of the model for actual design cases. Mesh independence and model validation against experimental data are checked in the results of all the case studies. The sensitivity of the presented model to certain parameters is extensively discussed using different indicator variables. Among these parameters are turbulence closure, discretization scheme, surface-tracking approach, CFD code and boundary conditions; the pros and cons of each of them are addressed. The turbulence models analyzed are the Standard k-ε, the Realizable k-ε, the RNG k-ε, and the SST k-ω. The discretization schemes under study are a first-order upwind method, the second-order limited Van Leer method, and a second-order limited central difference method. The VOF approaches analyzed are the Partial VOF, as implemented in OpenFOAM, and the TruVOF, as implemented in FLOW-3D. In most cases, the Standard k-ε model provides the most accurate estimations of the water free-surface profiles, although the remaining variables, with few exceptions, are better predicted by the RNG k-ε, which generally requires slightly longer computation times. The SST k-ω correctly reproduces the phenomena under study, although it generally turned out to be less accurate than its k-ε counterparts. As regards the comparison among VOF approaches and codes, it is impossible to determine which one performs best. For example, OpenFOAM, using the Partial VOF, managed to reproduce the internal hydraulic jump structure better than FLOW-3D, using the TruVOF, although the latter seems to capture the momentum transfer, and thus the derived variables, better. In the case of flow in stepped spillways, OpenFOAM captures the velocity profiles better, although FLOW-3D is more accurate when estimating the water free-surface profile. It is worth remarking that not even their response to certain model parameters is comparable; for example, FLOW-3D is significantly less sensitive to mesh refinement than OpenFOAM.
Given the result accuracy achieved in all cases, the proposed model is fully applicable to more complex design cases, where stilling basins, stepped spillways and hydraulic structures in general must be investigated.
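For reference, the Standard k-ε closure named above solves two transport equations for the turbulent kinetic energy k and its dissipation rate ε; this is the textbook form with the usual constants, not an equation taken from the thesis:

$$\frac{\partial (\rho k)}{\partial t} + \frac{\partial (\rho k u_j)}{\partial x_j} = \frac{\partial}{\partial x_j}\left[\left(\mu + \frac{\mu_t}{\sigma_k}\right)\frac{\partial k}{\partial x_j}\right] + P_k - \rho\varepsilon$$

$$\frac{\partial (\rho \varepsilon)}{\partial t} + \frac{\partial (\rho \varepsilon u_j)}{\partial x_j} = \frac{\partial}{\partial x_j}\left[\left(\mu + \frac{\mu_t}{\sigma_\varepsilon}\right)\frac{\partial \varepsilon}{\partial x_j}\right] + C_{1\varepsilon}\frac{\varepsilon}{k}P_k - C_{2\varepsilon}\rho\frac{\varepsilon^2}{k}$$

with the eddy viscosity $\mu_t = \rho C_\mu k^2/\varepsilon$ and $C_\mu = 0.09$, $C_{1\varepsilon} = 1.44$, $C_{2\varepsilon} = 1.92$, $\sigma_k = 1.0$, $\sigma_\varepsilon = 1.3$.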
Bayón Barrachina, A. (2017). Numerical analysis of air-water flows in hydraulic structures using computational fluid dynamics (CFD) [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/90440
APA, Harvard, Vancouver, ISO, and other styles
30

Kbayer, Nabil. "Advanced Signal Processing Methods for GNSS Positioning with NLOS/Multipath Signals." Thesis, Toulouse, ISAE, 2018. http://www.theses.fr/2018ESAE0017/document.

Full text
Abstract:
Recent trends in Global Navigation Satellite System (GNSS) applications in urban environments have led to a proliferation of studies that seek to mitigate the adverse effect of non-line-of-sight (NLOS) and multipath (MP) reception. For such harsh urban settings, this dissertation proposes an original methodology for the constructive use of degraded MP/NLOS signals, instead of their elimination, by applying advanced signal processing techniques or by using additional information from a 3D GNSS simulator. First, we studied different signal processing frameworks, namely robust estimation and regularized estimation, to tackle this GNSS problem without using external information. Then, we established the maximum achievable level (lower bounds) of GNSS stand-alone positioning accuracy in the presence of MP/NLOS conditions. To further enhance this accuracy level, we proposed to compensate for the MP/NLOS errors using a 3D GNSS signal propagation simulator to predict the biases and integrate them as observations in the estimation method, either by correcting the degraded measurements or by scoring an array of candidate positions. Besides, new metrics on the maximum acceptable errors in the MP/NLOS error predictions obtained from GNSS simulations have been established. Experimental results using real GNSS data in a deep urban environment show that using this additional information provides a good positioning performance enhancement, despite the intensive computational load of the 3D GNSS simulation.
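The candidate-scoring idea mentioned above can be sketched minimally as follows; the predicted MP/NLOS biases would come from the 3D GNSS propagation simulator, which is represented here only by a placeholder callable, and all names are illustrative rather than the thesis implementation.

import numpy as np

def score_candidates(candidates, sat_pos, pseudoranges, clock_bias, predicted_bias):
    # candidates:     (M, 3) candidate receiver positions
    # sat_pos:        (N, 3) satellite positions
    # pseudoranges:   (N,) measured pseudoranges (m)
    # clock_bias:     receiver clock bias (m), assumed estimated elsewhere
    # predicted_bias: callable (candidate, sat_index) -> simulated MP/NLOS bias (m)
    best, best_cost = None, np.inf
    for c in candidates:
        geometric = np.linalg.norm(sat_pos - c, axis=1)
        predicted = geometric + clock_bias + np.array(
            [predicted_bias(c, i) for i in range(len(sat_pos))])
        cost = np.sum((pseudoranges - predicted) ** 2)   # residual energy for this candidate
        if cost < best_cost:
            best, best_cost = c, cost
    return best, best_cost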
APA, Harvard, Vancouver, ISO, and other styles
31

Champ-Rigot, Laure. "Nouvelles perspectives diagnostiques et thérapeutiques dans la prise en charge rythmologique des patients en situation d'insuffisance cardiaque Rationale and Design for a Monocentric Prospective Study: Sleep Apnea Diagnosis Using a Novel Pacemaker Algorithm and Link With Aldosterone Plasma Level in Patients Presenting With Diastolic Dysfunction (SAPAAD Study) Usefulness of sleep apnea monitoring by pacemaker sensor in elderly patients with diastolic dysfunction : the SAPAAD Study Clinical outcomes after primary prevention defibrillator implantation are better predicted when the left ventricular ejection fraction is assessed by magnetic resonance imaging Predictors of clinical outcomes after cardiac resynchronization therapy in patients ≥75 years of age: a retrospective cohort study Comparison between novel and standard high-density 3D electro-anatomical mapping systems for ablation of atrial tachycardia Safety and acute results of ultra-high density mapping to guide catheter ablation of atrial arrhythmias in heart failure patients Long-term clinical outcomes after catheter ablation of atrial arrhythmias guided by ultra-high density mapping system in heart failure patients." Thesis, Normandie, 2019. http://www.theses.fr/2019NORMC430.

Full text
Abstract:
Heart failure is a major public health issue in developed countries, with a prevalence of 1-2% in the general population, rising to 10% after 70 years of age. Therapeutic progress has improved patients' prognosis, particularly in the case of reduced left ventricular ejection fraction. Rhythm abnormalities are frequent and need special consideration in the context of heart failure. Meanwhile, there are still gaps in the evidence: heart failure with preserved systolic function is complex and difficult to treat; primary prevention of sudden cardiac death is effective, but candidates need to be better selected; it is unclear whether elderly patients should be treated like younger individuals; and it remains to be determined how to improve the outcomes of atrial fibrillation catheter ablation. Firstly, we conducted a prospective study to evaluate the sleep apnea monitoring algorithm provided in a novel pacemaker in patients with diastolic dysfunction. Besides, we analyzed whether magnetic resonance imaging could predict cardiac outcomes in patients with an implantable cardioverter-defibrillator better than echocardiography, and we reported the outcomes of cardiac resynchronization therapy in patients ≥75 years old compared to younger patients. Finally, we studied the results of a novel ultra-high density mapping system to guide ablation procedures of complex atrial arrhythmias in heart failure patients compared to controls.
APA, Harvard, Vancouver, ISO, and other styles
32

Simões, Luís Gonçalo Lucas dos Santos da Cruz. "Projeto de automatização da produção de artigo pirotécnico de classe F1." Master's thesis, 2018. http://hdl.handle.net/10316/86434.

Full text
Abstract:
Dissertation for the Integrated Master's degree in Mechanical Engineering presented to the Faculdade de Ciências e Tecnologia
This dissertation aims at the design of a mechanism that allows the production of F1-class pyrotechnic articles, commonly called throw downs or bang snaps, through a completely automated method, using a purely manual production process as a basis. The classification of pyrotechnic articles is based on the European directive that regulates the free trade of explosive devices within the EU, the final product being intended to carry a valid CE marking that allows its export. The legal quality requirements demanded of an F1-class pyrotechnic article are addressed, specifically the physical characteristics of the product, the documentation necessary to obtain a CE certificate and the applicable method for conformity assessment. The legislation covering the control of operational hazards, as well as the safety systems relevant to the mechanism that is the object of this thesis, is also analyzed. The mechanism is designed through 3D modelling in SOLIDWORKS; its kinematics, the stresses on critical components and the control system logic are studied in order to estimate the feasibility, durability and productivity of the mechanism. A semi-automatic, simplified version of the mechanism that requires the direct intervention of an operator is also considered. The estimated performance values for both mechanisms are then used in a comparative economic analysis against existing production solutions on the market, with the goal of determining whether the physical construction of both mechanisms is profitable.
APA, Harvard, Vancouver, ISO, and other styles
33

Hu, Min-You, and 何明祐. "Depth generator design for 3D video compression standard." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/67443811097950639333.

Full text
Abstract:
Master's thesis
National Yunlin University of Science and Technology
Department of Electronic Engineering
102
3D High Efficiency Video Coding (3D-HEVC) is a compression technology formulated by the Joint Collaborative Team on Video Coding (JCT-VC), formed by the International Organization for Standardization (ISO) and the International Telecommunication Union (ITU), as a compression standard for multi-view video. With 3D-HEVC, multi-view video and the corresponding depth maps can be compressed effectively, and depth information is used to improve compression efficiency. A 3D TV system uses the encoder to compress inter-frame redundancy into a bit stream; the decoder and the subsequent View Synthesis Reference Software (VSRS) then let users watch 3D images on a multi-view display. 3D-HEVC requires video sequences filmed from different viewpoints together with their corresponding depth maps, which are obtained either with a depth estimation algorithm or with cameras that capture depth directly. To avoid the additional time needed to produce the corresponding depth maps, or the extra cost of purchasing depth cameras, this study proposes a depth generator design for the 3D video compression standard. The proposed algorithm is applied at the encoder and uses coding information of the video sequence, such as inter-view correlation, to perform conditional filtering. The filtered data are converted into depth information, which reduces the additional computation during encoding and produces the corresponding depth map. Finally, to address the blocking effect in the estimated depth map, this study proposes a depth map inpainting algorithm that uses the edge lines of the original texture map, making the depth map more continuous at object edges. In the experimental analysis, to compare the quality of the estimated depth maps, the View Synthesis Reference Software provided by MPEG is used to synthesize virtual views from the original texture and depth maps, and the virtual images are compared with the original images. In terms of peak signal-to-noise ratio (PSNR), the difference between the estimated depth map and the original depth map is about -0.98 dB on average, and the overall coding time decreases by about 2%.
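The PSNR figures quoted above are the usual measure for comparing a synthesized view against the original image; a minimal sketch for 8-bit images (not the thesis code) is:

import numpy as np

def psnr(original, synthesized, peak=255.0):
    # Peak signal-to-noise ratio in dB between two equally sized 8-bit images.
    diff = original.astype(np.float64) - synthesized.astype(np.float64)
    mse = np.mean(diff ** 2)
    if mse == 0:
        return float("inf")      # identical images
    return 10.0 * np.log10(peak ** 2 / mse)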
APA, Harvard, Vancouver, ISO, and other styles
34

Trache, Marian Tudor. "The agreement between 3D, standard 2D and triplane 2D speckle tracking: effects of image quality and 3D volume rate." Doctoral thesis, 2015. https://ul.qucosa.de/id/qucosa%3A14692.

Full text
Abstract:
Technological developments in echocardiography over the last decade have enabled new methods for objectively assessing regional left ventricular wall motion. Speckle tracking captures myocardial deformation through the change in position of individual image points from one frame of the analyzed data set to the next. This method is superior to tissue Doppler, in particular because of its independence from the insonation angle. Two-dimensional (2D) speckle tracking analyses have been validated for clinical practice. Three-dimensional (3D) echocardiography now allows speckle tracking analyses of 3D data sets, which, however, have not yet been sufficiently validated for clinical practice. 3D and 2D speckle tracking analyses were performed in patients with normal regional left ventricular wall motion (N=37) and in patients with ischemia-related wall motion abnormalities (N=18). The agreement of the two methods in quantifying normal and pathological wall motion patterns was examined on the basis of these measurements. Furthermore, the influence of the volume rate and image quality of the three-dimensional data sets on the agreement between the two methods was analyzed. 2D and 3D speckle tracking showed good agreement in the diagnosis of impaired left ventricular systolic function and in localizing circumscribed wall motion abnormalities. However, 2D and 3D speckle tracking cannot yet be regarded as equivalent methods. Image quality, for both modalities in general but especially for 3D data sets, as well as the volume rate of the 3D data sets, has a significant influence on 3D strain analyses. Correct standardization of the analyzed recordings and optimal image quality are important factors that determine the reliability of 2D and 3D speckle tracking.
APA, Harvard, Vancouver, ISO, and other styles
35

Su, Xiang-Han, and 蘇湘涵. "Design for Testability of 3D Graphics SoC with IEEE 1500 Standard." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/29677903452997076785.

Full text
Abstract:
Master's thesis
National University of Kaohsiung
Master's Program, Department of Computer Science and Information Engineering
98
As the quality of life improves, various electronic information technologies have been developed and electronic products have made life more convenient: PDAs that receive the latest messages anywhere, ticket purchasing systems, or the ibon kiosks of 7-11 that offer a range of services. Wherever we go, there are service system interfaces to meet our needs. So that electronic products can be handled easily by people of any age, these systems are designed with a Graphical User Interface (GUI): the user can understand the system functions from simple icons or instructions and operate the system by following them. In recent years, SoC (System-on-Chip) techniques such as hardware/software co-design, SoC verification, low power consumption, IP reuse and embedded software development have become popular, allowing smaller area, faster processing and lower power consumption. A traditional IEEE-1500-based circuit needs a long test time to scan in the test patterns, which raises the testing cost considerably. In this thesis, we combine the IEEE 1500 standard with the Built-In Self-Test (BIST) technique to test the system. Under such a test environment, the circuit under test benefits both from the functions of the IEEE 1500 standard and from the fast-testing advantage of BIST. We apply the approach to many benchmark circuits as well as a 3D graphics SoC.
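The BIST principle combined with the IEEE 1500 wrapper, pseudo-random patterns generated on chip and responses compacted into a signature that is compared with a golden value, can be modelled conceptually in a few lines; this is a software caricature with example seed and tap values, not the thesis hardware.

def lfsr_patterns(seed, taps, count):
    # Galois LFSR producing 'count' pseudo-random test patterns (seed must be non-zero).
    state = seed
    for _ in range(count):
        yield state
        lsb = state & 1
        state >>= 1
        if lsb:
            state ^= taps

def bist_signature(circuit, patterns, width=8):
    # Compact the circuit responses into a rotate-and-XOR signature (MISR-like).
    mask = (1 << width) - 1
    sig = 0
    for p in patterns:
        sig = ((sig << 1) | (sig >> (width - 1))) & mask   # rotate left by one bit
        sig ^= circuit(p) & mask
    return sig

# Usage with a toy 8-bit combinational "circuit under test".
cut = lambda x: (x ^ (x >> 3)) & 0xFF
golden = bist_signature(cut, lfsr_patterns(0xE1, 0xB8, 64))
print(hex(golden))   # on-chip signatures would be compared against this golden value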
APA, Harvard, Vancouver, ISO, and other styles
36

Amiri, Delaram. "Bilateral and adaptive loop filter implementations in 3D-high efficiency video coding standard." Thesis, 2015. http://hdl.handle.net/1805/7983.

Full text
Abstract:
Indiana University-Purdue University Indianapolis (IUPUI)
In this thesis, we describe a different implementation of the in-loop filtering method for 3D-HEVC. First, we propose the use of the adaptive loop filtering (ALF) technique for 3D-HEVC standard in-loop filtering. This filter uses a Wiener-based method to minimize the mean squared error between filtered pixels and original pixels. The performance of the adaptive loop filter at the picture level is evaluated. Results show up to 0.2 dB PSNR improvement in the luminance component for the texture and 2.1 dB for the depth. In addition, we obtain up to 0.1 dB improvement in the chrominance component for the texture view after applying this filter in picture-based filtering. Moreover, a design of in-loop filtering with a fast bilateral filter for the 3D-HEVC standard is proposed. The bilateral filter smooths an image while preserving strong edges and can remove artifacts in an image. The performance of the bilateral filter at the picture level for 3D-HEVC is evaluated, using test model HTM-6.2 to demonstrate the results. Results show up to a 20 percent reduction in the processing time of 3D-HEVC, with negligible effect on the PSNR of the encoded 3D video, when using the fast bilateral filter.
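For context on the bilateral filter mentioned above, a minimal, unoptimized grayscale implementation looks as follows; the fast variant used in the thesis differs, so this is only the reference formulation.

import numpy as np

def bilateral_filter(img, radius=2, sigma_s=2.0, sigma_r=25.0):
    # Gaussian weight in space multiplied by a Gaussian weight in intensity,
    # so noise is smoothed while strong edges are preserved.
    img = img.astype(np.float64)
    h, w = img.shape
    out = np.zeros_like(img)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs ** 2 + ys ** 2) / (2 * sigma_s ** 2))
    padded = np.pad(img, radius, mode="edge")
    for i in range(h):
        for j in range(w):
            patch = padded[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            rng = np.exp(-((patch - img[i, j]) ** 2) / (2 * sigma_r ** 2))
            weights = spatial * rng
            out[i, j] = np.sum(weights * patch) / np.sum(weights)
    return out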
APA, Harvard, Vancouver, ISO, and other styles
37

Wu, Chia-Chou, and 吳嘉洲. "Generation of Standard Drosophila Brain for Archiving Neural circuitry in 3D Image Database." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/91417825033270075115.

Full text
Abstract:
Master's thesis
National Tsing Hua University
Institute of Biotechnology
96
The study of behavior in terms of genes has been appreciated, but the underlying mechanisms have not been fully explained. Currently there are two major targets for research in the field: the first is the hardware, i.e. the structure of the neural circuitry; the second is the information flow and its modification. Here we focus on the hardware part and propose a platform for building a complete circuitry. Since Drosophila melanogaster is recognized as a model system for studying the mechanisms underlying behavior, we chose this model organism as the framework on which to build the platform. Two methods were adopted to reconstruct an average model: in the first, the average shape is determined from the intensity map of the samples; in the second, the average shape is determined from a 3D distance map of the brain surfaces. The efficacy of these two methods is discussed. In addition to the average shape of the whole brain, the average mushroom body is constructed and located within the average brain, and a global coordinate system in the average brain is assigned. Single neurons, with their associated mushroom body as the landmark, are transformed into the standard brain accordingly. We demonstrate that the platform can be used to archive spatial patterns of gene expression, single neurons and other types of protein images in the Drosophila brain, and that the searching and analysis tools we have developed can give scientists insight into how the brain works.
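The second averaging method, deriving the average shape from 3D distance maps of the brain surfaces, can be sketched roughly as follows, assuming the individual binary brain masks have already been registered to a common space; this illustrates the idea rather than the thesis pipeline.

import numpy as np
from scipy.ndimage import distance_transform_edt

def average_shape(masks):
    # Average registered binary 3D masks through signed distance maps:
    # negative inside, positive outside; the mean map's zero level set is the average surface.
    sdfs = []
    for m in masks:
        m = m.astype(bool)
        outside = distance_transform_edt(~m)   # distance to the object, evaluated outside it
        inside = distance_transform_edt(m)     # distance to the background, evaluated inside it
        sdfs.append(outside - inside)
    mean_sdf = np.mean(sdfs, axis=0)
    return mean_sdf <= 0                       # binary average shape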
APA, Harvard, Vancouver, ISO, and other styles
38

Tita, Ralf [Verfasser]. "Variable isozentrische Steuerung für einen Standard-C-Bogen mit echtzeitfähiger 3D-Rekonstruktion / Ralf Tita." 2007. http://d-nb.info/985460482/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Spickermann, Andreas [Verfasser]. "Photodetektoren und Auslesekonzepte für 3D-Time-of-Flight-Bildsensoren in 0,35 μm-Standard-CMOS-Technologie / von Andreas Spickermann." 2010. http://d-nb.info/1008672610/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Liang-Che Li and 李良哲. "On-Chip 3D-IC Test System Design for Pre-bond, Post-bond, TSV Test and TSV Diagnosis Based on IEEE 1838 Standard." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/fz97xh.

Full text
Abstract:
Master's thesis
National Cheng Kung University
Department of Electrical Engineering
102
3D-ICs use through-silicon via (TSV) technology to shorten the connections between circuits and enhance I/O bandwidth, and they are also suitable for heterogeneous integration of memory, logic and analog circuits. However, because of the stacked structure with many different dies, the 3D-IC test flow is more complex than for 2D-ICs. Current research divides the 3D-IC test flow into two main steps, pre-bond and post-bond test, where the post-bond test contains partial-stack, TSV and complete-stack tests. A low-cost and high-quality test mechanism is proposed in this thesis. We integrate a 3D-IC test platform with a 3D-IC wrapped with the IEEE Std 1838 test interface, and the overall circuit becomes a 3D-IC test system. External equipment or a computer only needs to send the required test vectors and test data to the platform through the IEEE 1149.1 signals; the platform then generates all control signals and completes the 3D-IC test flow, achieving pre-bond and post-bond test and diagnosis of the 3D-IC. This 3D-IC test system significantly reduces the demand for external test equipment and lowers the test cost of 3D-IC chips. To improve TSV yield with an N-detection method, we further propose an efficient TSV test framework that does not increase the overall test time. In addition, we design a graphical user interface (GUI) that helps testers quickly integrate circuits with the 3D-IC test system and controls the test flow of the platform. In the experimental results, using only the 1149.1 signals to send test data, the platform effectively executes the test functions, including the bottom-die logic circuit test in the pre-bond test; the logic, memory and analog circuit tests in the post-bond test; and TSV test and diagnosis. Moreover, the platform can test a single TSV from tens to tens of thousands of times without increasing the test time.
APA, Harvard, Vancouver, ISO, and other styles
41

Moura, Daniel Cardoso de. "Three-Dimensional Biplanar Reconstruction of the Scoliotic Spine for Standard Clinical Setup." Doctoral thesis, 2010. http://hdl.handle.net/10216/61703.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Moura, Daniel Cardoso de. "Three-Dimensional Biplanar Reconstruction of the Scoliotic Spine for Standard Clinical Setup." Doctoral thesis, 2010. https://repositorio-aberto.up.pt/handle/10216/68791.

Full text
Abstract:
Scoliosis is characterized by a three-dimensional (3D) deformation of the spine that requires a 3D evaluation. However, conventional 3D imaging techniques have shown to be inadequate and, therefore, 3D reconstructions are typically performed from planar radiographs, one in the frontal plane and another in the lateral plane. Yet, this approach presents several challenges. The first concerns extracting 3D data from 2D radiographs. Currently, this is solved by calibration methods that make use of radiopaque objects and rotatory platforms that minimise patient positioning errors. However, these methods require considerable changes in radiological setups and protocols, besides introducing artefacts on the images that sometimes overlap anatomical structures of interest. After calibrating the acquired radiographs another problem arises: recovering the shape of the spine. The gold-standard methods require an extensive set of landmarks that must be manually identified on each radiograph, making them resource-consuming and error-prone. For this and the aforementioned reasons, the methods currently used for 3D reconstruction of the spine are not viable for routine clinical evaluations in radiological services with standard setups. In this thesis it was hypothesised that it is possible to build personalised geometrical models of scoliotic spines from two radiographs acquired in standard clinical environments by means of computational methods that require limited user interaction. To accomplish this goal, two methods are proposed: a calibration method that makes use of a distance, easily measured on site, that aims at minimising the need for calibration objects; and a 3D reconstruction method based on the deformation of an articulated model of the spine that aims at providing fast and accurate 3D reconstructions. It is shown for the first time that it is possible to recover the scale of the spine without calibration objects, although introducing a small calibration object improves both the accuracy and the robustness of the 3D reconstructions. In addition, it is shown that the proposed model improves the results of calibrations of biplanar radiographs based on the minimisation of the retro-projection error. The proposed calibration method only requires minimal changes to the radiograph acquisition protocols, resulting in minimal artefacts on the radiographs; we therefore conclude that it satisfies the requirements for use in a standard clinical environment. Concerning the recovery of the shape of the spine, the proposed method achieved the fastest reconstruction times reported, including user-interaction time and computation time. Additionally, it achieved, among semi-supervised methods, the highest accuracy in determining the location of the vertebrae and the centres of their endplates for both mild and severe scoliotic patients. Furthermore, it was shown that the method enables clinical indices to be calculated reliably, even by non-expert users. We therefore conclude that the 3D reconstruction method can be used for routine clinical evaluations.
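The retro-projection error that drives the biplanar calibration is, in its simplest form, the squared distance between the landmarks identified in the two radiographs and the re-projection of their 3D estimates; a minimal sketch with illustrative names follows.

import numpy as np

def retro_projection_error(P_frontal, P_lateral, points_3d, obs_frontal, obs_lateral):
    # P_frontal, P_lateral: 3x4 projection matrices of the two views
    # points_3d: (N, 3) current 3D landmark estimates
    # obs_frontal, obs_lateral: (N, 2) landmarks identified on each radiograph
    def project(P, X):
        Xh = np.hstack([X, np.ones((len(X), 1))])   # homogeneous coordinates
        x = (P @ Xh.T).T
        return x[:, :2] / x[:, 2:3]
    r1 = project(P_frontal, points_3d) - obs_frontal
    r2 = project(P_lateral, points_3d) - obs_lateral
    return np.sum(r1 ** 2) + np.sum(r2 ** 2)        # quantity minimised during calibration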
APA, Harvard, Vancouver, ISO, and other styles
43

Li-Ting, Lo, and 羅莉婷. "A Study on the Standard Operation Procedure of 3D Laser Scanner System— A Case Study of the Song-Shan Tobacco Factory No.5 Warehouse Historic Building." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/89294939398968561317.

Full text
Abstract:
Master's thesis
Chinese Culture University
Department of Architecture and Urban Design
100
Most exterior and interior surveys of traditional buildings rely on manual or optical measurement methods, which require long measuring times for fieldwork and post-processing and yield limited precision. In recent years, 3D Light Detection And Ranging (LIDAR) surveying systems have cut down surveying and mapping time; once the scanned data are imported into the computer for analysis, a 3D system is far more efficient. However, the new 3D instruments have many functions and complex operating procedures, which calls for standard operating procedures (SOP) for 3D surveying systems. In this research, we use the FARO Photon 120 system (from the U.S.A.) as the SOP object. The SOP for the FARO Photon 120 covers the instrument unit, the instrument setup location, the scanning method and procedure, the processing and preservation of the scanned point cloud data, the software installation, the point cloud mapping procedure, and the colorization of the images. All of these are compiled into standard operating procedures for the FARO Photon 120 system and applied to producing the 3D image of the Song-Shan Tobacco Factory No.5 Warehouse historic building. The results show that the standard operating procedures, covering instrument setup, scanning operations and subsequent software processing, should take into account the required scanning accuracy, the erected height of and distance to the instrument, the amplitude, the number of measurement points, and whether anything obstructs the scan between the instrument and the measured object. By evaluating point cloud density differences in the post-processing software, scanning errors and the number of scans can be reduced and scanning accuracy improved. Following these standard operating procedures for the 3D survey, we obtained the four elevations and eight digital image diagrams of the Song-Shan Tobacco Factory No.5 Warehouse building. The results of this study provide a reference document on the FARO Photon 120 measuring technology for other users.
APA, Harvard, Vancouver, ISO, and other styles
44

Moura, Daniel Cardoso de. "Three-Dimensional Biplanar Reconstruction of the Scoliotic Spine for Standard Clinical Setup." Tese, 2010. http://hdl.handle.net/10216/61703.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Moore, Nina Zobenica. "Meningiomas Assessed with In Vivo 3D 1H-Magnetic Resonance Spectroscopy Integrated Into a Standard Neurosurgical Image Guidance System: Determining Biochemical Markers of Clinically Aggressive Behavior and Providing a Resection Advantage." Thesis, 2011. http://hdl.handle.net/10150/170535.

Full text
Abstract:
A Thesis submitted to The University of Arizona College of Medicine - Phoenix in partial fulfillment of the requirements for the Degree of Doctor of Medicine.
Fifteen patients with usable data, with recurrent or newly diagnosed meningiomas, were scanned using a 3T GE Signa scanner. Quantified spectral metabolite peaks were used to select voxels with high or low alanine for tissue sampling. 3D 1H-MRSI was integrated into a standard image-guided surgery (IGS) system; a mask of the voxel was loaded onto the IGS system, allowing surgeons to precisely extract tissue intraoperatively according to the biochemical mapping. Ex vivo NMR and conventional histological grading were performed on the extracted tissue. Results: Tumor spectra showed biochemically heterogeneous regions, especially for choline, lactate and alanine. Mean alanine concentrations were lower in the histologically and immunohistochemically more aggressive regions of the meningiomas in the study. In addition, lower-grade meningiomas showed high alanine at the tumor periphery with decreased central alanine. Ex vivo NMR was well correlated with in vivo 3D 1H-MRSI. Conclusions: Non-invasive detection of various intratumoral biochemical markers using 3D 1H-MRSI can distinguish areas within meningiomas that express more aggressive features. There is regional heterogeneity in the concentrations of these markers within individual tumors. Furthermore, 3D 1H-MRSI may be able to exploit these regional differences to separate more aggressive from less aggressive areas within a given meningioma. Such knowledge may be useful to the neurosurgeon faced with the task of meningioma resection and in planning adjuvant therapy for residual meningioma.
APA, Harvard, Vancouver, ISO, and other styles
46

Silva, João Pedro Mendonça de Assunção da. "Automatic and intelligent integration of manufacture standardized specifications to support product life cycle - an ontology based methodology." Doctoral thesis, 2009. http://hdl.handle.net/1822/10154.

Full text
Abstract:
Doctoral thesis in Production Technologies
In the last decades, globalization introduced significant changes in the product life cycle. A worldwide market offered a vast range of products, both in terms of variety and quality. As a consequence, markets progressively demand highly customized products with short life cycles. Computational resources provided an important contribution to maintaining manufacturing competitiveness and to a rapid adaptation to the paradigm change from mass production to mass customization. In this environment, enterprise and product modeling were the best response to new requirements such as flexibility, agility and intensely dynamic behavior. Enterprise modeling enabled production to converge to an integrated virtual process; several enterprises clearly assumed new formats such as Extended Enterprises or Virtual/Agile Enterprises to guarantee product and resource coordination and management within the organization and with volatile external partners. On the other hand, product modeling underwent an evolution, with traditional human-based resources (like technical drawings) migrating to more capable computational product models (like CAD or CAE models). Product modeling, together with an advanced information structure, has been recognized by the academic and industrial communities as the best way to integrate and coordinate the various aspects of the product life cycle in the early design stages. An early and accurate settlement of product specifications is the direct consequence of enriching product models with additional features. Therefore, manufacturing specifications, long included in technical drawings or text-based notes, need to adapt to this reality, namely because integration automation and computational support have been missing. Recent enhancements in standard product models (like the ISO 10303 STEP product data models) made a significant contribution towards product knowledge capture and information integration. Nevertheless, computational integration issues arise because multiple terminologies are in use along the product life cycle, namely due to different team backgrounds. Besides, the advent of the internet demanded semantic capabilities in standard product models for better integration with enterprise agents. Ontologies facilitate computational understanding, communication and seamless interoperability between people and organizations. They allow key concepts and terms relevant in a given domain to be identified and defined in an open and unambiguous computational way. Therefore, ontologies facilitate the use and exchange of data, information, and knowledge among interdisciplinary teams and heterogeneous systems, towards intelligent systems integration. This work proposes a methodology to support the development of a harmonized reference ontology for a group of enterprises sharing a business domain. The methodology is based on the concept of a Mediator Ontology (MO), which assists the semantic transformations between each enterprise's ontology and the reference one. It makes it possible for each organization to keep its own terminology, glossary and ontological structures, while providing seamless communication and interaction with the others. The methodology fosters the reuse of data and knowledge incorporated in the standard product models as an effective support for collaborative engineering teams in the process of product manufacturability evaluation, anticipating the validity of manufacturing specifications.
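The Mediator Ontology idea, translating each enterprise's own terminology into the shared reference terminology and back, can be caricatured with a simple bidirectional mapping; the terms below are invented placeholders, not drawn from the thesis.

from dataclasses import dataclass

@dataclass
class MediatorOntology:
    # Maps enterprise-specific terms to the harmonized reference ontology and back.
    to_reference: dict

    def translate(self, term):
        return self.to_reference.get(term, term)           # unknown terms pass through

    def translate_back(self, ref_term):
        inverse = {v: k for k, v in self.to_reference.items()}
        return inverse.get(ref_term, ref_term)

# Usage: two enterprises exchange a manufacturing specification via the reference term.
supplier = MediatorOntology({"hole_dia": "HoleDiameter"})
oem = MediatorOntology({"bore_diameter": "HoleDiameter"})
print(oem.translate_back(supplier.translate("hole_dia")))  # -> bore_diameter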
APA, Harvard, Vancouver, ISO, and other styles
47

Landeman, Philip. "Samband mellan geologiska och bergmekaniska egenskaper i bergmaterial som bärlager till riksväg 51 : Riksväg 51 sträckan Svennevad - Kvarntorpskorset." Thesis, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-80189.

Full text
Abstract:
This thesis was carried out to verify the quality of the rock that would be crushed into base-layer construction material in a road cut on Swedish highway 51, and to find a possible link between the rock's abrasion resistance and its mineralogy. Rock samples were collected and, among other things, several ball mill tests were carried out. The design of the road project was carried out by Loxia Group AB with NCC Group as contractor. A total of 18 rock samples and 2 base-layer samples were taken in the area and all were tested in a ball mill. The results showed that, of the 18 rock samples, 2 samples had a ball mill value of less than 16 on the scale, 10 samples had values from 16 to 20, 3 samples had values from 20 to 21, and 3 samples had values exceeding 21 on the Swedish ball mill scale. All 3 samples with a ball mill value higher than 21 contained a larger amount of biotite, whereas biotite did not appear to the same extent among the samples further down the ball mill scale; this link was clear enough that a conclusion was subsequently drawn from it. The two samples taken from the prefabricated base layer both had ball mill values between 16 and 20. Overall, both the base layer and the rock material met the Swedish Transport Administration's requirements according to "TRVKB 10, Obundna lager". The rock type in the southern part of the rock cut, adjacent to a deformation zone, was of far too poor quality to undergo a ball mill test, and therefore no values were taken from that area. The conclusion of the work is that the rock material overall meets the Swedish Transport Administration's requirements for base-layer construction materials according to "TRVKB 10, Obundna lager", that a clear link exists between the proportion of biotite in a rock material and its abrasion resistance, and that the broken rock in the south should not be used as construction material since it has insufficient mechanical capacity.
APA, Harvard, Vancouver, ISO, and other styles
