
Dissertations / Theses on the topic 'Post-processing techniques'


Consult the top 26 dissertations / theses for your research on the topic 'Post-processing techniques.'


1

Lönroth, Per, and Mattias Unger. "Advanced Real-time Post-Processing using GPGPU techniques." Thesis, Linköping University, Department of Science and Technology, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-14962.

Full text
Abstract:

 

Post-processing techniques are used to change a rendered image as a last step before presentation and include, but are not limited to, operations such as changes in saturation or contrast, as well as more advanced effects like depth-of-field and tone mapping.

Depth-of-field effects are created by changing the focus in an image; the parts close to the focus point are perfectly sharp while the rest of the image has a variable amount of blurriness. The effect is widely used in photography and film as a depth cue and has in recent years also been introduced into computer games.

Today’s graphics hardware offers new levels of computational capacity. Shaders and GPGPU languages can be used to perform massively parallel operations on graphics hardware and are well suited to game development.

This thesis presents the theoretical background of some of the most recent and valuable depth-of-field algorithms and describes the implementation of various solutions in the shader domain as well as with GPGPU techniques. The main objective is to analyze various depth-of-field approaches, examining their visual quality and how the methods scale performance-wise when different techniques are used.
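For readers who want a concrete feel for the effect discussed above, the following is a minimal CPU-side sketch of a depth-of-field post-process in Python (NumPy/SciPy). The focal depth, blur radii and layer count are illustrative assumptions, not values taken from the thesis, and a real-time implementation would run on the GPU as the authors describe.

```python
# Minimal depth-of-field sketch: blend progressively blurred copies of a frame
# according to each pixel's distance from the focal plane (circle-of-confusion proxy).
import numpy as np
from scipy.ndimage import gaussian_filter

def depth_of_field(image, depth, focal_depth=0.5, max_sigma=6.0, layers=8):
    dist = np.abs(depth - focal_depth)
    coc = dist / (dist.max() + 1e-12)                 # 0 at the focal plane, 1 at max defocus
    sigmas = np.linspace(0.0, max_sigma, layers)
    blurred = [gaussian_filter(image, sigma=(s, s, 0)) if s > 0 else image for s in sigmas]
    idx = np.round(coc * (layers - 1)).astype(int)    # pick a blur level per pixel
    out = np.empty_like(image)
    for i, b in enumerate(blurred):
        mask = idx == i
        out[mask] = b[mask]
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.random((64, 64, 3))                     # stand-in for a rendered frame
    depth = np.tile(np.linspace(0, 1, 64), (64, 1))   # synthetic depth ramp
    print(depth_of_field(img, depth, focal_depth=0.3).shape)
```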

 

2

Farsi, Hassan. "Advanced pre-and-post processing techniques for speech coding." Thesis, University of Surrey, 2003. http://epubs.surrey.ac.uk/844491/.

Full text
Abstract:
Advances in digital technology in the last decade have motivated the development of very efficient and high quality speech compression algorithms. While in early low bit rate coding systems the main target was the production of intelligible speech at low bit rates, the expansion of new applications such as mobile satellite systems increased the demand for reducing the transmission bandwidth and achieving higher speech quality. This resulted in the development of efficient parametric models for the speech production system. These models were the basis of powerful speech compression algorithms such as CELP, MBE, MELP and WI. The performance of a speech coder depends not only on the speech production model employed but also on the accurate estimation of the speech parameters. Periodicity, also known as pitch, is one of the speech parameters that greatly affect the synthesised speech quality. Thus, the subject of pitch determination has attracted much research in the area of low bit rate coding. In these studies it is assumed that for a short segment of speech, called a frame, the pitch is fixed or smoothly evolving. Pitch estimation algorithms generally fail to determine irregular variations, which can occur in onset and offset speech segments. In order to overcome this problem, a novel pre-processing method is proposed which detects irregular pitch variations and modifies the speech signal so as to improve the accuracy of the pitch estimation. This method results in more regular speech while maintaining perceptual speech quality. The perceptual quality of the synthesised speech may also be improved using postfiltering techniques. Conventional postfiltering methods generally consider the enhancement of the whole speech spectrum. This may result in the broadening of the first formant, which leads to increased quantisation noise for this formant. A new postfiltering technique, based on factorising the linear prediction synthesis filter, is proposed. This provides more control over the formant bandwidth and the attenuation of spectral speech valleys. Key words: Pitch smoothing, speech pre-processor, postfiltering.
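As background to the pitch-determination problem discussed above, the following is a minimal sketch of a baseline frame-based pitch estimator using autocorrelation. The frame length, lag range, sampling rate and voicing threshold are illustrative assumptions, not the thesis's algorithm.

```python
# Autocorrelation pitch estimate for one speech frame.
import numpy as np

def estimate_pitch(frame, fs=8000, f_min=60.0, f_max=400.0):
    """Return a pitch estimate in Hz, or None if the frame looks unvoiced."""
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lag_min, lag_max = int(fs / f_max), min(int(fs / f_min), len(ac) - 1)
    lag = lag_min + np.argmax(ac[lag_min:lag_max + 1])
    if ac[lag] < 0.3 * ac[0]:        # crude voicing decision
        return None
    return fs / lag

if __name__ == "__main__":
    fs = 8000
    t = np.arange(0, 0.032, 1.0 / fs)               # one 32 ms frame
    frame = np.sin(2 * np.pi * 120 * t)             # synthetic 120 Hz voiced frame
    print(estimate_pitch(frame, fs))                # ~120 Hz
```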
3

Goldfarb, Daniel Scott. "An Evaluation of Assignment Algorithms and Post-Processing Techniques for Travel Demand Forecast Models." Thesis, Virginia Tech, 2003. http://hdl.handle.net/10919/31631.

Full text
Abstract:
The purpose of this research project was to evaluate the techniques outlined in the National Cooperative Highway Research Program Technical Report 255, Highway Traffic Data for Urbanized Area Project Planning and Design (NCHRP-255), published in 1982 by the Transportation Research Board. This evaluation was accomplished by using a regional travel demand forecast model calibrated and validated for the year 1990 and developing a highway forecast for the year 2000. The forecasted volumes along the portion of the Capital Beltway (I-495/I-95) located in the State of Maryland were compared to observed count data for that same year. A series of statistical measures was used to quantitatively evaluate the benefits of the techniques documented in NCHRP-255. The primary research objectives were: (1) to critically evaluate the ability of a regional travel demand forecast model to accurately forecast freeway corridor volumes by comparing link forecast volumes to the actual count data; and (2) to evaluate and determine the significance of post-processing techniques as outlined in NCHRP-255. The most important lesson learned from this research is that, although it was originally written in 1982, NCHRP-255 is still a very valuable resource for supplementing travel demand forecast model output. The 'raw' model output is not reliable enough to be used directly for highway design, operational analysis, or alternative and economic evaluations. The travel demand forecast model is a tool that is just part of the forecasting process. It is not a turn-key operation, and travel demand forecasts cannot be done without the application of engineering judgment.
Master of Science
4

Ni, Karl S. "Pattern recognition techniques for image and video post-processing specific application to image interpolation /." Diss., Connect to a 24 p. preview or request complete full text in PDF format. Access restricted to UC campuses, 2008. http://wwwlib.umi.com/cr/ucsd/fullcit?p3307557.

Full text
Abstract:
Thesis (Ph. D.)--University of California, San Diego, 2008.
Title from first page of PDF file (viewed July 15, 2008). Available via ProQuest Digital Dissertations. Vita. Includes bibliographical references (p. 142-151).
5

McLeish, Kate. "Combining data acquisition and post-processing techniques for magnetic resonance imaging of moving objects." Thesis, King's College London (University of London), 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.406105.

Full text
6

Weis, Christian [Verfasser], and Ben [Gutachter] Fabry. "Monitoring of cell dynamics - Imaging techniques and post-processing / Christian Weis ; Gutachter: Ben Fabry." Erlangen : Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), 2016. http://d-nb.info/1123284385/34.

Full text
7

Hendriks, Lukas Anton. "Image processing techniques for sector scan sonar." Thesis, Stellenbosch : University of Stellenbosch, 2009. http://hdl.handle.net/10019.1/2487.

Full text
Abstract:
Thesis (MScEng (Electrical and Electronic Engineering))--University of Stellenbosch, 2009.
Sonars are used extensively for underwater sensing, and recent advances in forward-looking imaging sonar have made this type of sonar an appropriate choice for use on Autonomous Underwater Vehicles. The images received from these sonars do, however, tend to be noisy and, when the sonar is used in shallow water, contain strong bottom reflections that obscure returns from actual targets. The focus of this work was the investigation and development of post-processing techniques to enable the successful use of the sonar images for automated navigation. The use of standard image processing techniques for noise reduction and background estimation was evaluated on sonar images with varying amounts of noise, as well as on a set of images taken from an AUV in a harbour. The use of multiple background removal and noise reduction techniques on a single image was also investigated. To this end a performance measure was developed, based on the dynamic range found in the image and the uniformity of returned targets. This provided a means to quantitatively compare sets of post-processing techniques and identify the “optimal” processing. The resultant images showed great improvement in the visibility of target areas, and the proposed techniques can significantly improve the chances of correct target extraction.
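To make the described pipeline concrete, the following is a small sketch of noise reduction, background estimation by heavy smoothing, and a simple dynamic-range score for comparing parameter choices. The filter sizes and the scoring formula are illustrative assumptions, not the thesis's exact performance measure.

```python
# Median filtering for speckle-like noise, smoothed background subtraction,
# and a target-to-clutter score for comparing processing settings.
import numpy as np
from scipy.ndimage import median_filter, uniform_filter

def clean_sonar_image(img, noise_size=3, background_size=31):
    denoised = median_filter(img, size=noise_size)                 # suppress speckle
    background = uniform_filter(denoised, size=background_size)    # slowly varying bottom return
    return np.clip(denoised - background, 0.0, None)               # keep positive target returns

def dynamic_range_score(img, target_mask):
    """Higher is better: strong target returns relative to the remaining background."""
    return img[target_mask].mean() / (img[~target_mask].mean() + 1e-9)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    img = rng.rayleigh(scale=0.2, size=(128, 128))                 # synthetic noisy scan
    img[60:68, 60:68] += 1.0                                       # synthetic target
    mask = np.zeros_like(img, dtype=bool)
    mask[60:68, 60:68] = True
    cleaned = clean_sonar_image(img)
    print(dynamic_range_score(img, mask), dynamic_range_score(cleaned, mask))
```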
8

Paolani, Giulia. "Brain perfusion imaging techniques." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2019.

Find full text
Abstract:
In this work, two different perfusion imaging techniques, implemented in Magnetic Resonance Imaging and Computed Tomography (CT), were analysed. The first analysis concerns the Arterial Spin Labeling (ASL) technique, which provides perfusion information without the administration of a contrast agent. A complete pipeline was developed and tested, covering both an acquisition protocol and a post-processing protocol. In particular, standard acquisition parameters were defined that yield good data quality; the data are then processed through a post-processing protocol which, starting from an ASL acquisition, computes a quantitative cerebral blood flow (CBF) map. During the work, an asymmetry in the perfusion estimates was noticed that is not justified by the data and is probably due to a non-optimal hardware configuration. Once this technical difficulty is resolved, the developed pipeline will be used as the standard for the acquisition and post-processing of ASL data. The second analysis concerns data acquired through CT perfusion experiments, considering their application to cases of cerebral infarction in which thrombectomy techniques proved ineffective. The objective of this work was the definition of a pipeline that allows the autonomous computation of perfusion maps and the standardisation of data handling. In particular, the pipeline allows perfusion data to be analysed using only open-source software, in contrast to the operating methodology commonly used in clinical practice, and makes the analyses reproducible. The proposed work is part of a broader project that includes future longitudinal analyses with larger patient cohorts to define and validate parameters predictive of patient outcomes.
9

Georgantzoglou, Antonios. "Development of near real-time image processing techniques for cell detection, microbeam targeting and tracking post-irradiation." Thesis, University of Cambridge, 2016. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.709522.

Full text
10

Rao, Anita. "High resolution magnetic resonance angiography (mra) of the renal vasculature : development of improved acquisition and post- processing techniques /." The Ohio State University, 1996. http://rave.ohiolink.edu/etdc/view?acc_num=osu1487940308432037.

Full text
11

VERDOYA, JACOPO. "EXPERIMENTAL ANALYSIS OF TRANSITIONAL FLOWS UNDER TURBINE-LIKE CONDITIONS VIA APPLICATION AND DEVELOPMENT OF ADVANCED POST-PROCESSING TECHNIQUES." Doctoral thesis, Università degli studi di Genova, 2022. http://hdl.handle.net/11567/1082838.

Full text
Abstract:
The present thesis is primarily devoted to developing and applying advanced post-processing techniques to inspect complex transitional boundary layer (BL) flows evolving under variable inflow conditions. A large amount of data has been experimentally acquired utilizing particle image velocimetry (PIV) and laser Doppler velocimetry (LDV) in a test section consisting of a flat plate installed between two adjustable endwalls. Depending on the Reynolds number (Re), the free-stream (FS) turbulence intensity (Tu) and the adverse pressure gradient (APG) imposed to the flow, attached or separated boundary layer transition was obtained. The effects of the inflow parameters variation have been studied in detail, focusing on the flow statistical and dynamical behavior. Due to the complexity and variety of the transitional phenomena, data-driven modal decomposition techniques have been employed to reduce the large amount of experimental data collected here. Moreover, new variants of well-established post-processing techniques have been developed to identify the main features embedded in the extensive databases. In the case of separated flow transition, the modal decomposition procedures allowed a deep insight into the instability mechanism developing in the shear layer. Dynamic Mode Decomposition (DMD) was used to analyze the most unstable wavelengths related to the Kelvin-Helmholtz (K-H) vortices driving transition. Proper Orthogonal Decomposition (POD) was applied to PIV data, inspecting the main flow structures developing in the different regions of the LSB. Subsequently, an Extended Proper Orthogonal Decomposition (E-POD) procedure was applied, highlighting the correlation between the main dynamics observed in the forward part of the bubble and the breakup events occurring in the reattachment region. Regarding the data reduction, the extensive database was used to develop new empirical correlations predicting the transition process regarding the geometry of a LSB and the related shedding process. The transition process was systematically analyzed using decomposition techniques in the context of the free-stream turbulence induced transition. In order to inspect BL receptivity to free-stream disturbances, a variant of the E-POD was proposed, based on the correlating events between the FS and the BL. Low-order reconstructions of the original data were used to highlight the most correlating events directly linked to the formation and the breakup of streaky structures. Moreover, a turbulent spot recognition algorithm was implemented to identify the BL statistical response to the inflow parameters through the probability density function (PDF) of spot nucleation. Thus, a model for the PDF of spot nucleation is proposed as a function of the main flow parameters involved in the transition process. Based on the results of the previous analyses, engineering correlations for predicting the free-stream turbulence induced transition are also introduced. Independently on the transition type, results obtained employing the aforementioned procedures allowed a fruitful characterization of the different instability mechanisms developing in the first stage of transition, the description and evolution of coherent structures, and the correlation between their dynamics.
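For reference, the snapshot POD used throughout such studies reduces to a singular value decomposition of the fluctuating field. A minimal sketch follows, with synthetic snapshots and the number of retained modes as illustrative assumptions.

```python
# Snapshot POD via the thin SVD of the mean-subtracted data matrix.
import numpy as np

def pod(snapshots, n_modes=3):
    """snapshots: (n_points, n_snapshots) matrix of velocity fields.
    Returns spatial modes, temporal coefficients and modal energy fractions."""
    fluct = snapshots - snapshots.mean(axis=1, keepdims=True)
    U, S, Vt = np.linalg.svd(fluct, full_matrices=False)   # columns of U are POD modes
    energy = S**2 / np.sum(S**2)                            # energy fraction per mode
    coeffs = (S[:, None] * Vt)[:n_modes]                    # temporal coefficients
    return U[:, :n_modes], coeffs, energy[:n_modes]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = np.linspace(0, 2 * np.pi, 200)
    t = np.linspace(0, 10, 150)
    # Two synthetic coherent structures plus noise as stand-in snapshots.
    field = (np.outer(np.sin(x), np.cos(2 * t))
             + 0.5 * np.outer(np.sin(2 * x), np.sin(5 * t))
             + 0.05 * rng.standard_normal((200, 150)))
    modes, coeffs, energy = pod(field, n_modes=2)
    print(energy)        # most of the fluctuation energy sits in the first two modes
```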
12

Griesbach, Christopher James. "Improving LiDAR Data Post-Processing Techniques for Archaeological Site Management and Analysis: A Case Study from Canaveral National Seashore Park." Scholar Commons, 2015. https://scholarcommons.usf.edu/etd/5491.

Full text
Abstract:
Methods used to process raw Light Detection and Ranging (LiDAR) data can sometimes obscure the digital signatures indicative of an archaeological site. This thesis explains the negative effects that certain LiDAR data processing procedures can have on the preservation of an archaeological site. This thesis also presents methods for effectively integrating LiDAR with other forms of mapping data in a Geographic Information Systems (GIS) environment in order to improve LiDAR archaeological signatures by examining several pre-Columbian Native American shell middens located in Canaveral National Seashore Park (CANA).
13

Yalla, Veeraganesh. "OPTIMAL PHASE MEASURING PROFILOMETRY TECHNIQUES FOR STATIC AND DYNAMIC 3D DATA ACQUISITION." UKnowledge, 2006. http://uknowledge.uky.edu/gradschool_diss/348.

Full text
Abstract:
Phase measuring Profilometry (PMP) is an important technique used in 3D data acquisition. Many variations of the PMP technique exist in the research world. The technique involves projecting phase shifted versions of sinusoidal patterns with known frequency. The 3D information is obtained from the amount of phase deviation that the target object introduces in the captured patterns. Using patterns based on single frequency result in projecting a large number of patterns necessary to achieve minimal reconstruction errors. By using more than one frequency, that is multi-frequency, the error is reduced with the same number of total patterns projected as in the single frequency case. The first major goal of our research work is to minimize the error in 3D reconstruction for a given scan time using multiple frequency sine wave patterns. A mathematical model to estimate the optimal frequency values and the number of phase shift patterns based on stochastic analysis is given. Experiments are conducted by implementing the mathematical model to estimate the optimal frequencies and the number of patterns projected for each frequency level used. The reduction in 3D reconstruction errors and the quality of the 3D data obtained shows the validity of the proposed mathematical model. The second major goal of our research work is the implementation of a post-processing algorithm based on stereo correspondence matching adapted to structured light illumination. Composite pattern is created by combining multiple phase shift patterns and using principles from communication theory. Composite pattern is a novel technique for obtaining real time 3D depth information. The depth obtained by the demodulation of captured composite patterns is generally noisy compared to the multi-pattern approach. In order to obtain realistic 3D depth information, we propose a post-processing algorithm based on dynamic programming. Two different communication theory principles namely, Amplitude Modulation (AM) and Double Side Band Suppressed Carrier (DSBSC) are used to create the composite patterns. As a result of this research work, we developed a series of low-cost structured light scanners based on the multi-frequency PMP technique and tested them for their accuracy in different 3D applications. Three such scanners with different camera systems have been delivered to Toyota for vehicle assembly line inspection. All the scanners use off the shelf components. Two more scanners namely, the single fingerprint and the palmprint scanner developed as part of the Department of Homeland Security grant are in prototype and testing stages.
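The core phase-retrieval step of PMP can be written in a few lines: with N sinusoidal patterns shifted by 2*pi*n/N, the wrapped phase follows from an arctangent of weighted sums. The sketch below uses a synthetic object phase and illustrative pattern parameters, not the thesis's scanner settings.

```python
# N-step phase-shifting profilometry: recover the wrapped phase from
# patterns I_n = A + B*cos(phi + 2*pi*n/N).
import numpy as np

def wrapped_phase(images):
    n = len(images)
    shifts = 2 * np.pi * np.arange(n) / n
    num = sum(img * np.sin(s) for img, s in zip(images, shifts))
    den = sum(img * np.cos(s) for img, s in zip(images, shifts))
    return -np.arctan2(num, den)    # wrapped to (-pi, pi]; unwrapping is the next PMP stage

if __name__ == "__main__":
    x = np.linspace(0, 1, 512)
    phi_true = 1.2 * np.sin(2 * np.pi * x)            # synthetic object-induced phase
    n_steps = 4
    imgs = [0.5 + 0.4 * np.cos(phi_true + 2 * np.pi * k / n_steps) for k in range(n_steps)]
    phi = wrapped_phase(imgs)
    print(np.max(np.abs(phi - phi_true)))             # small numerical error
```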
14

DELLACASAGRANDE, MATTEO. "Experimental study of the boundary layer separation and transition processes under turbine-like conditions by means of advanced post-processing techniques." Doctoral thesis, Università degli studi di Genova, 2019. http://hdl.handle.net/11567/944948.

Full text
Abstract:
In this work the transition process of the boundary layer (BL) evolving under turbine-like conditions has been experimentally investigated in detail. The effects of the Reynolds number (Re), the free-stream turbulence intensity (Tu) and the adverse pressure gradient (APG) imposed on the flow have been studied over a large variation of these parameters, since they are known to strongly influence the separation and transition processes of the boundary layer. Emphasis has been put on both the statistical and the dynamic behaviour of the flows at hand, which have been experimentally characterized by means of advanced, ad-hoc developed post-processing techniques. The study of the effects of the Reynolds number and the Tu level on the development of laminar separation bubbles (LSB) under fixed APG is presented in the first part of this work. The mechanisms by which the variations of Re and Tu act on the bubble size were found to be substantially different, and the coexistence of different amplification mechanisms has been observed in the LSBs for high Tu levels. In the case of bypass transition, the effects of the APG have been investigated with respect to the zero pressure gradient condition. The transition process has been found to be more rapid under the APG imposed on the flow than in the zero pressure gradient case. The profiles of the mean streamwise velocity and velocity fluctuation rms obtained by means of hot-wire instrumentation showed a self-similar behaviour in the laminar part of the boundary layer. Regarding the effects of the Tu level on the velocity and velocity fluctuation rms profiles, high free-stream turbulence has been found to reduce the effects of the pressure gradient on the curvature of the mean velocity profiles and to shift the maximum of the turbulence peak towards the wall. In order to shed light on the effects of the APG variation on the statistical and dynamic behaviour of LSBs, as well as to provide a complete experimental database containing information about the effects of Re, Tu and APG for both attached and separated flows, a new test section has been designed in the second part of this work, allowing continuous variation of the pressure gradient imposed on the flow. In the case of separated flows, the separation position was found to move downstream when the APG is reduced, and the bubble becomes longer; however, the bubble thickness is reduced with respect to the higher-APG conditions. Proper Orthogonal Decomposition (POD) has been adopted to reduce the large amount of experimental data collected, obtaining a statistical treatment of the main dynamics at hand in terms of their energy content. Moreover, with the aim of characterizing the coexistence of structures with different energy within the flow (e.g., boundary layer streaks, Kelvin-Helmholtz and free-stream vortices), a variant of the classical POD procedure has been proposed. The application of this technique to both attached and separated flows highlighted the presence of free-stream structures near the edge of the boundary layer, where the transition process has been found to occur, suggesting that free-stream structures can play a crucial role in the evolution and breakdown of structures growing in the boundary layer, thus leading to transition. Finally, the analysis of the statistical quantities of the flows at hand (i.e., BL integral parameters) has been carried out for all the acquired conditions with the aim of developing new empirical correlations for the prediction of the transition onset and length in the case of separated flows. Data collected during both measuring campaigns allowed the tuning of the proper coefficients in order to account for the variation of all the parameters considered in this work. The proposed correlations have been found to fit both the collected data and other experimental data available in the literature.
15

Atié, Michèle. "Perception des ambiances lumineuses d'architectures remarquables : analyse des impressions en situation réelle et à travers des photographies omnidirectionnelles dans un casque immersif." Electronic Thesis or Diss., Ecole centrale de Nantes, 2024. http://www.theses.fr/2024ECDN0047.

Full text
Abstract:
This thesis is at the crossroads of the fields of luminous atmospheres, architectural pedagogy, perception and immersion. It focuses on the design and implementation of a new experimental methodology for evaluating the ability of HDR stereoscopic omnidirectional static photographs, projected in an immersive Head-Mounted Display (HMD), to faithfully reproduce subjective impressions of luminous atmospheres experienced in reference architectural places. Specific consideration is given to the impact of tone mapping operators (TMOs). Our methodology involves several steps: designing a grid for analyzing the luminous atmospheres of iconic places based on expert judgement; implementing in situ data collection to assess luminous atmospheres (questionnaire, light measurements, HDR omnidirectional photographic recordings); and implementing a method for assessing luminous atmospheres in an HMD. The results provide knowledge about the characteristics of the in situ luminous atmospheres of seven iconic buildings and the perceptual fidelity of each luminous atmosphere’s impression in the HMD, depending on the TMOs. The findings also highlight the relationship between the impressions selected by the experts and those assessed in situ and in the HMD. This knowledge is useful for future pedagogical applications in architecture.
16

Johansson, Ingrid. "Post-processing for roughness reduction of additive manufactured polyamide 12 using a fully automated chemical vapor technique - The effect on micro and macrolevel." Thesis, KTH, Skolan för kemi, bioteknologi och hälsa (CBH), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-279316.

Full text
Abstract:
Additive manufacturing has increased in popularity in recent years, partly due to the possibility of producing complex geometries rapidly. Selective laser sintering (SLS) is a type of additive manufacturing technique that utilizes polymer powder and a layer-by-layer technique to build up the desired geometry. The main drawbacks of this technique are related to reproducibility, mechanical performance and the poor surface finish of printed parts. Surface roughness increases the risk of bacterial adhesion and biofilm formation, which is undesirable for parts to be used in the healthcare industry. This thesis investigated the possibility of reducing the surface roughness of SLS-printed polyamide 12 with the fully automated post-processing technology PostPro3D, which relies on chemical post-processing to smooth the parts’ surface. PostPro3D utilizes vaporized solvent which condenses on the printed parts, causing the surface to reflow. By this, roughness in the form of unmolten particles is dissolved and surface pores are sealed. The influence of varying the post-processing parameters (pressure, temperature, time and solvent volume) was evaluated with a Design of Experiments (DoE). The roughness reduction was quantified by monitoring the arithmetic mean roughness (Ra), the ten-point height roughness (Rz) and the average waviness (Wa) using a stylus profilometer and confocal laser scanning microscope (CLSM). The effect of post-processing on mechanical properties was evaluated with tensile testing, and the effect on microstructure by scanning electron microscopy (SEM). A comparison was made between post-processed samples and a non-post-processed reference, as well as between samples post-processed with different degrees of aggressiveness, with regard to the roughness values, mechanical properties and the microstructure. Results indicated that solvent volume and time had the largest effect in reducing the roughness parameters Ra and Rz, while time had the largest influence in increasing the elongation at break, tensile strength at break and toughness. The effect of post-processing on waviness and Young’s modulus was less evident. SEM established that complete dissolution of powder particles was not achieved for the tested parameter ranges, but a clear improvement of the surface was observed for all post-processing conditions compared to a non-post-processed specimen. The reduction in roughness with increased solvent volume and time was thought to be due to increased condensation of solvent droplets on the SLS parts. The increase in mechanical properties was likely related to the elimination of crack initiation points at the surface. In general, the mechanical properties showed a wide spread in the results; this was concluded to be related to differences in the intrinsic properties of the printed parts, and it highlighted the reproducibility problems associated with SLS. An optimal roughness of Ra less than 1 µm was not obtained for the tested post-processing conditions, and further parameter optimization is required.
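For orientation, the roughness parameters quoted above can be computed from a measured profile as in the sketch below; the profile is synthetic, and the exact Ra/Rz definitions implemented by profilometer software may differ in detail.

```python
# Ra as the arithmetic mean deviation from the mean line; Rz here as a simple
# ten-point estimate (mean of the 5 highest minus mean of the 5 lowest heights).
import numpy as np

def roughness(profile_um):
    z = profile_um - profile_um.mean()
    ra = np.mean(np.abs(z))
    rz = np.sort(z)[-5:].mean() - np.sort(z)[:5].mean()
    return ra, rz

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    x = np.linspace(0, 5, 2000)                                        # 5 mm traverse
    profile = 2.0 * np.sin(2 * np.pi * x) + 0.5 * rng.standard_normal(x.size)  # heights in micrometres
    ra, rz = roughness(profile)
    print(f"Ra = {ra:.2f} um, Rz = {rz:.2f} um")
```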
17

Cantarello, Luca. "Use of a Kalman filtering technique for near-surface temperature analysis." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2017. http://amslaurea.unibo.it/13455/.

Full text
Abstract:
A statistical post-processing scheme for the hourly 2-meter temperature fields from the Nordic convective-scale operational Numerical Weather Prediction model Arome MetCoOp 2.5 Km has been developed and tested at the Norwegian Meteorological Institute (MET Norway). The objective of the work is to improve the representation of the temperature close to the surface by combining model data and in-situ observations for climatological and hydrological applications. In particular, a statistical scheme based on a bias-aware Local Ensemble Transform Kalman Filter has been adapted to the spatial interpolation of surface temperature. This scheme starts from an ensemble of 2-meter temperature fields derived from Arome MetCoOp 2.5 Km and, taking into account the observations provided by the MET Norway network, produces an ensemble of analysis fields characterised by a grid spacing of 1 km. The model best estimate employed in the interpolation procedure is given by the latest available forecast, subsequently corrected for the model bias. The scheme has been applied off-line and the final analysis is performed independently at each grid point. The final analysis ensemble has been evaluated and its mean value has been shown to significantly improve on the best estimate of Arome MetCoOp 2.5 Km in representing the 2-meter temperature fields, in terms of both accuracy and precision, with a reduction in the root mean squared error as well as in the bias and an improvement in reproducing the cold extremes during wintertime. More generally, the analysis ensemble displays better forecast verification scores, with an overall reduction in the Brier Score and its reliability component and an increase in the resolution term for the zero-degree threshold. However, the final ensemble spread remains too narrow, though not as narrow as the model output.
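As a toy illustration of the ensemble Kalman idea underlying the scheme, the sketch below updates a background ensemble of 2-meter temperatures at a single grid point with one observation, using a simple stochastic EnKF variant rather than the bias-aware LETKF actually employed; the ensemble size and error variances are illustrative assumptions.

```python
# Single-point stochastic ensemble Kalman analysis step.
import numpy as np

def enkf_point_update(background_ens, obs, obs_var, seed=0):
    """background_ens: 1-D array of ensemble 2 m temperatures at one grid point."""
    pb = background_ens.var(ddof=1)                  # background error variance from the ensemble
    k = pb / (pb + obs_var)                          # Kalman gain for a direct observation
    rng = np.random.default_rng(seed)
    perturbed_obs = obs + rng.normal(0.0, np.sqrt(obs_var), size=background_ens.size)
    return background_ens + k * (perturbed_obs - background_ens)

if __name__ == "__main__":
    rng = np.random.default_rng(42)
    background = -3.0 + 1.5 * rng.standard_normal(20)    # ensemble forecast, degC
    analysis = enkf_point_update(background, obs=-5.2, obs_var=0.5)
    print(background.mean(), analysis.mean(), analysis.std(ddof=1))
```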
18

Verguet, Amandine. "Développements méthodologiques et informatiques pour la microscopie électronique en transmission appliqués à des échantillons biologiques Alignment of Tilt Series (Chapter 7 of the Book: Cellular Imaging: Electron Tomography and Related Techniques, Hanssen Eric) An ImageJ tool for simplified post-treatment of TEM phase contrast images (SPCI) Comparison of methods based on feature tracking for fiducial-less image alignment in electron tomography." Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLS487.

Full text
Abstract:
Transmission Electron Microscopy is a major tool for performing structural studies in biology. Some methods used for image sampling and analysis need to be improved in order to observe electron-dose-sensitive samples with good contrast and a good signal-to-noise ratio. During this thesis, various methodological and computational approaches have been studied which aim to improve image quality. First, I evaluated the relevance of combining energy-filtered imaging with the STEM mode. I show that this improves the signal-to-noise ratio of images. Then, I devised an algorithm that generates an image from phase data. This approach improves the image contrast over direct imaging. A phase plate and focal series are both efficient tools to achieve this goal. While working on the software approach for processing focal series, we found that a qualitative result can be obtained from a single image. I developed the SPCI plugin for the ImageJ software. It allows processing of one to three focal images. My work involves optimization of the tomographic reconstruction process, including both alignment algorithms and reconstruction algorithms. I present my studies on image alignment methods used on tilt series. These methods rely on the use of key points and associated local descriptors. They have proved efficient for processing images lacking fiducial markers. Finally, I propose a new unified algorithmic approach for 3D reconstruction of tomographic tilt series acquired with sparse sampling. I then derived another novel method that integrates the image alignment step in the process. Studies and developments on both methods will continue in future work.
19

Scholz, Volker [Verfasser]. "New editing techniques for video post-processing / vorgelegt von Volker Scholz." 2007. http://d-nb.info/985574437/34.

Full text
20

Chandrasekhar, J. "Performance Analysis Of Post Detection Integration Techniques In The Presence Of Model Uncertainties." Thesis, 2011. https://etd.iisc.ac.in/handle/2005/2106.

Full text
Abstract:
In this thesis, we analyze the performance of the Post Detection Integration (PDI) techniques used for detection of weak DS/CDMA signals in the presence of uncertainty in the frequency, noise variance and data bits. Such weak signal detection problems arise, for example, in the first step of code acquisition for applications such as the Global Navigation Satellite Systems (GNSS) based position localization. Typically, in such applications, a combination of coherent and post-coherent integration stages are used to improve the reliability of signal detection. We show that the feasibility of using fully coherent processing is limited due to the presence of unknown data-bits and/or frequency uncertainty. We analyze the performance of the two conventional PDI techniques, namely, the Non-coherent PDI (NC-PDI) and the Differential-PDI (D-PDI), in the presence of noise and data bit uncertainty, to establish their robustness for weak signal detection. We show that the NC-PDI technique is robust to uncertainty in the data bits, but a fundamental detection limit exists due to uncertainty in the noise variance. The D-PDI technique, on the other hand, is robust to uncertainty in the noise variance, but its performance degrades in the presence of unknown data bits. We also analyze the following different variants of the NC-PDI and D-PDI techniques: Quadratic NC-PDI technique, Non-quadratic NC-PDI, D-PDI with real component (D-PDI (Real)) and D-PDI with absolute component (D-PDI (Abs)). We show that the likelihood ratio based test statistic derived in the presence of data bits is non-robust in the presence of noise uncertainty. We propose two novel PDI techniques as a solution to the above mentioned shortcomings in the conventional PDI methods. The first is a cyclostationarity based sub-optimal PDI technique, that exploits the periodicity introduced due to the data bits. We establish the exact mathematical relationship between the D-PDI and cyclostationarity-based signal detection methods. The second method we propose is a modified PDI technique, which is robust against both noise and data bit uncertainties. We derive two variants of the modified technique, which are tailored for data and pilot channels, respectively. We characterize the performance of the conventional and proposed PDI techniques in terms of their false alarm and detection probabilities and compare them through the receiver operating characteristic (ROC) curves. We derive the sample complexity of the test-statistic in order to achieve a given performance in terms of detection and false alarm probabilities in the presence of model uncertainties. We validate the theoretical results and illustrate the improved performance that can be obtained using our proposed PDI protocols through Monte-Carlo simulations.
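The two conventional statistics compared above can be stated compactly: non-coherent PDI sums squared magnitudes of the coherent correlator outputs, while differential PDI sums products of consecutive outputs. The sketch below illustrates both on synthetic data; the signal and noise parameters are illustrative assumptions.

```python
# Non-coherent PDI vs. differential PDI (real variant) on coherent correlator outputs.
import numpy as np

def nc_pdi(c):
    """Non-coherent PDI: sum of |c_k|^2 over the block."""
    return float(np.sum(np.abs(c) ** 2))

def d_pdi(c):
    """Differential PDI (real variant): Re{ sum of c_k * conj(c_{k-1}) }."""
    return float(np.sum(c[1:] * np.conj(c[:-1])).real)

if __name__ == "__main__":
    rng = np.random.default_rng(7)
    n_blocks, amp, freq_err = 200, 0.4, 0.01          # weak signal, small residual frequency
    k = np.arange(n_blocks)
    noise = (rng.standard_normal(n_blocks) + 1j * rng.standard_normal(n_blocks)) / np.sqrt(2)
    signal_present = amp * np.exp(2j * np.pi * freq_err * k) + noise
    signal_absent = (rng.standard_normal(n_blocks) + 1j * rng.standard_normal(n_blocks)) / np.sqrt(2)
    print("NC-PDI:", nc_pdi(signal_present), "vs", nc_pdi(signal_absent))
    print("D-PDI :", d_pdi(signal_present), "vs", d_pdi(signal_absent))
```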
21

Chandrasekhar, J. "Performance Analysis Of Post Detection Integration Techniques In The Presence Of Model Uncertainties." Thesis, 2011. http://etd.iisc.ernet.in/handle/2005/2106.

Full text
Abstract:
In this thesis, we analyze the performance of the Post Detection Integration (PDI) techniques used for detection of weak DS/CDMA signals in the presence of uncertainty in the frequency, noise variance and data bits. Such weak signal detection problems arise, for example, in the first step of code acquisition for applications such as the Global Navigation Satellite Systems (GNSS) based position localization. Typically, in such applications, a combination of coherent and post-coherent integration stages are used to improve the reliability of signal detection. We show that the feasibility of using fully coherent processing is limited due to the presence of unknown data-bits and/or frequency uncertainty. We analyze the performance of the two conventional PDI techniques, namely, the Non-coherent PDI (NC-PDI) and the Differential-PDI (D-PDI), in the presence of noise and data bit uncertainty, to establish their robustness for weak signal detection. We show that the NC-PDI technique is robust to uncertainty in the data bits, but a fundamental detection limit exists due to uncertainty in the noise variance. The D-PDI technique, on the other hand, is robust to uncertainty in the noise variance, but its performance degrades in the presence of unknown data bits. We also analyze the following different variants of the NC-PDI and D-PDI techniques: Quadratic NC-PDI technique, Non-quadratic NC-PDI, D-PDI with real component (D-PDI (Real)) and D-PDI with absolute component (D-PDI (Abs)). We show that the likelihood ratio based test statistic derived in the presence of data bits is non-robust in the presence of noise uncertainty. We propose two novel PDI techniques as a solution to the above mentioned shortcomings in the conventional PDI methods. The first is a cyclostationarity based sub-optimal PDI technique, that exploits the periodicity introduced due to the data bits. We establish the exact mathematical relationship between the D-PDI and cyclostationarity-based signal detection methods. The second method we propose is a modified PDI technique, which is robust against both noise and data bit uncertainties. We derive two variants of the modified technique, which are tailored for data and pilot channels, respectively. We characterize the performance of the conventional and proposed PDI techniques in terms of their false alarm and detection probabilities and compare them through the receiver operating characteristic (ROC) curves. We derive the sample complexity of the test-statistic in order to achieve a given performance in terms of detection and false alarm probabilities in the presence of model uncertainties. We validate the theoretical results and illustrate the improved performance that can be obtained using our proposed PDI protocols through Monte-Carlo simulations.
22

Salpeter, Nathaniel. "Development of Spatio-Temporal Wavelet Post Processing Techniques for Application to Thermal Hydraulic Experiments and Numerical Simulations." Thesis, 2012. http://hdl.handle.net/1969.1/ETD-TAMU-2012-05-10998.

Full text
Abstract:
This work focuses on both high fidelity experimental and numerical thermal hydraulic studies and advanced frequency decomposition methods. The major contribution of this work is a proposed method for spatio-temporal decomposition of frequencies present in the flow. This method provides an instantaneous visualization of coherent frequency "structures" in the flow. The significance of this technique from an engineering standpoint is its ease of implementation and the importance of such a tool for design engineers. To validate this method, synthetic verification data, experimental data sets, and numerical results are used. The first experimental work involves flow through the side entry orifice (SEO) of a boiling water reactor (BWR) using non-intrusive particle tracking velocimetry (PTV) techniques. The second experiment is of a simulated double-ended guillotine break in a prismatic block gas-cooled reactor. Numerical simulations of jet flow mixing in the lower plenum of a prismatic block high temperature gas-cooled reactor are used as a final data set for verification purposes, as well as to demonstrate the applicability of the method for an actual computational fluid dynamics validation case.
23

Hsueh, Ko-Min, and 薛格閔. "Application of XML Technique to Finite Element Post-Processing." Thesis, 2001. http://ndltd.ncl.edu.tw/handle/63901920829313538903.

Full text
Abstract:
Master's degree
National Taiwan University
Graduate Institute of Civil Engineering
Academic year 89
The objective of this research is to design a universal post-processor for FEA programs using the XML (eXtensible Markup Language) technique. A prototype of the designed system has also been implemented to demonstrate and verify the proposed design. In order to reach this goal, this research divides the post-processor into three individual modules. The first is the data translation module, which is responsible for translating ASCII files into XML format. The second is the post-processing module, which is in charge of processing the XML data and performing numerical smoothing. The last is the visualization module, which renders the geometry model and analysis results using graphics and visualization technologies. Data transfer and communication between these modules are also based on the XML technique, so these modules can easily cooperate with each other to complete post-processing tasks.
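To illustrate the data-translation idea, the sketch below wraps nodal results from a plain-text FEA output in XML so that separate post-processing and visualization modules can exchange them; the tag names and input format are illustrative assumptions, not the thesis's schema.

```python
# Convert ASCII nodal results into a small XML document and parse it back.
import xml.etree.ElementTree as ET

def results_to_xml(lines):
    """lines: iterable of 'node_id value' strings from an ASCII result file."""
    root = ET.Element("fea_results", {"quantity": "displacement"})
    for line in lines:
        node_id, value = line.split()
        ET.SubElement(root, "node", {"id": node_id, "value": value})
    return ET.tostring(root, encoding="unicode")

if __name__ == "__main__":
    ascii_output = ["1 0.0012", "2 0.0034", "3 0.0021"]
    xml_text = results_to_xml(ascii_output)
    print(xml_text)
    # A downstream post-processing module can parse the same document independently.
    for node in ET.fromstring(xml_text).iter("node"):
        print(node.get("id"), float(node.get("value")))
```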
24

Kao, Yi-Tzu, and 高怡慈. "Automatic Image Post-Processing Technique on CT Cerebral Perfusion Images." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/37105952613387710259.

Full text
Abstract:
Master's degree
National Yang-Ming University
Department of Biomedical Imaging and Radiological Sciences
Academic year 100
Purpose: Cerebral blood volume (CBV) and cerebral blood flow (CBF) are important hemodynamic parameters in CT brain perfusion for identifying tissue with delayed perfusion in stenosis patients. An arterial input function (AIF) and a venous output function (VOF) are necessary for quantification of the hemodynamic parameters. Current automatic techniques for selecting the AIF and VOF on CT brain perfusion images are prone to motion artifacts and random noise. In this work, we developed a new automatic technique for selecting the AIF and VOF to overcome these problems. Materials and methods: We collected CT brain perfusion images from 15 stenosis patients. First, a principal axis transformation was applied to the CT brain perfusion images to correct for translational and rotational motion artifacts. Second, we removed bone voxels and neighboring voxels from the corrected CT brain perfusion images, so that only brain voxels were used in the AIF and VOF selection procedures. Third, anisotropic filtering was applied to the perfusion images to improve the SNR. For identifying the AIF and VOF, the criteria were: 1) a large area under the curve; 2) early arrival of the contrast agent; and 3) a narrow effective width. After determining the AIF and VOF, the characteristic values of the concentration-time curve, such as the area under the curve, effective width, arrival time, and maximum relative concentration, were calculated. The characteristic values of the curves calculated from automatically selected AIFs and VOFs were compared with those from manually selected AIFs and VOFs in the 15 stenosis patients. Results: The AIF and VOF were successfully selected using the proposed automatic technique in all 15 patients. The characteristic curve values calculated from automatically selected AIFs and VOFs were comparable to those calculated manually. The areas under the concentration-time curves of automatically measured VOFs were larger than those of manually measured VOFs. Conclusion: We developed an automatic technique for selecting the AIF and VOF that can overcome the problems caused by motion artifacts and random noise. With the automatically selected AIF and VOF, the post-processing of CT brain perfusion images can be fully automated and hemodynamic images can be generated promptly for clinical diagnosis.
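The curve characteristics used as selection criteria can be computed directly from a voxel's concentration-time curve, as in the sketch below; the synthetic bolus curve, the arrival-time threshold and the effective-width proxy (area under the curve divided by the peak) are illustrative assumptions, not the thesis's exact definitions.

```python
# Area under the curve, bolus arrival time, effective width and peak
# concentration from a single concentration-time curve.
import numpy as np

def curve_features(t, c, arrival_frac=0.1):
    auc = float(np.sum(0.5 * (c[1:] + c[:-1]) * np.diff(t)))   # trapezoidal area under the curve
    peak = float(c.max())
    arrival = float(t[np.argmax(c >= arrival_frac * peak)])    # first sample above a fraction of the peak
    width = auc / peak if peak > 0 else float("inf")           # effective-width proxy
    return {"auc": auc, "arrival": arrival, "width": width, "peak": peak}

if __name__ == "__main__":
    t = np.arange(0, 60, 1.0)                                  # seconds
    c = np.where(t > 8, (t - 8) ** 2 * np.exp(-(t - 8) / 3.0), 0.0)  # synthetic bolus curve
    print(curve_features(t, c))
    # An arterial voxel would show early arrival and a narrow effective width;
    # a venous voxel a later, larger and broader curve.
```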
25

Lin, Hsueh-Chun, and 林學群. "Circuit Design of Low Error-Floor LDPC Decoders Using R-LMSA with Post-Processing Technique for Wireless Systems." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/93283980129428279031.

Full text
Abstract:
Master's degree
National Chung Hsing University
Department of Electrical Engineering
Academic year 100
In this thesis, low error-floor LDPC decoders using R-LMSA with a post-processing technique for wireless systems are presented, with four major contributions. Firstly, a partition-and-shift LDPC (PS-LDPC) code (480, 2400) is constructed with a 4/5 coding rate and a girth of 8. Secondly, an improved algorithm, named the Reset Layer Min-Sum Algorithm (R-LMSA), is proposed to lower the error floor of the LDPC decoder caused by quantization errors. Thirdly, the proposed dual-path pipelined partially parallel architecture increases the operating frequency and doubles the throughput without idle circuit blocks. Lastly, the architecture was designed using TSMC 90 nm CMOS technology. The maximum frequency reaches 188 MHz with a core area of 2.97 mm2 at a supply voltage of 0.9 V. The throughput is 10.74 Gbps for 7 iterations per decoding process, with a power consumption of 287 mW. There are two types of errors that produce the error floor in an LDPC decoder: one is due to quantization errors, the other to absorbing errors. To address the quantization errors, the proposed R-LMSA lowers the error floor below BER = 10^-7 without any hardware cost. Although this complies with the requirements of most wireless systems, some other applications may need BER < 10^-7. Therefore, we propose a new post-processing technique, named Check Node Tracing with Boundary Search (CNTBS), to further reduce the error floor due to absorbing errors. Simulation analysis shows that the BER is effectively lowered below 10^-7 when R-LMSA is combined with the CNTBS technique.
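At the heart of layered min-sum decoding schedules such as R-LMSA is the check-node update, in which each outgoing message takes the sign product and minimum magnitude of the other incoming messages. The sketch below shows a normalized min-sum check-node update on toy LLRs; the scaling factor and message values are illustrative assumptions, not the thesis's fixed-point design.

```python
# Normalized min-sum check-node update for one check node.
import numpy as np

def min_sum_check_update(msgs, scale=0.75):
    """msgs: LLR messages entering one check node; returns the outgoing messages."""
    msgs = np.asarray(msgs, dtype=float)
    signs = np.sign(msgs)
    sign_prod = np.prod(signs)
    mags = np.abs(msgs)
    order = np.argsort(mags)
    min1, min2 = mags[order[0]], mags[order[1]]          # two smallest magnitudes
    out = np.empty_like(msgs)
    for i in range(len(msgs)):
        other_min = min2 if i == order[0] else min1       # exclude the edge's own message
        out[i] = scale * sign_prod * signs[i] * other_min
    return out

if __name__ == "__main__":
    incoming = [2.1, -0.4, 3.5, -1.2]
    print(min_sum_check_update(incoming))
```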
26

(6417158), Gaurav Vilas Inamke. "THE INVESTIGATION OF WARM LASER SHOCK PEENING AS A POST PROCESSING TECHNIQUE TO IMPROVE JOINT STRENGTH OF LASER WELDED MATERIALS." Thesis, 2019.

Find full text
Abstract:

This study is concerned with investigating the effects of warm laser shock peening (wLSP) on the enhancement of mechanical performance of laser welded joints. A 3-D finite element model is presented which predicts the surface indentation geometry and in-depth compressive residual stresses generated by wLSP. To define the LSP pressure on the surface of the material, a 1-D confined plasma model is implemented to predict plasma pressure generated by laser-coating interaction in an oil confinement regime. Residual stresses predicted by the finite element model for wLSP reveal higher magnitude and depth of compressive residual stresses than room temperature laser shock peening. A novel dual laser wLSP experimental setup is developed for simultaneous heating of the sample, to a prescribed temperature, and to perform wLSP. The heating laser power is tuned to achieve a predefined temperature in the material through predictive analysis with a 3-D transient laser heating model.

Laser welded joints of AA6061-T6 and TZM alloy in bead-on-plate (BOP) and overlap configurations, created by laser welding with a high power fiber laser, were post processed with wLSP. To evaluate the strength of the welded joints pre- and post-processing, tensile testing and tensile-shear testing were carried out. To understand the failure modes in tensile-shear testing of the samples, a 3-D finite element model of the welded joint was developed with weld regions’ material strength properties defined through microhardness testing. The stress concentration regions predicted by the finite element model clearly explain the failure regions in the experimental tensile testing analysis. The tensile tests and tensile-shear tests carried out on wLSP processed AA6061-T6 samples demonstrate an enhancement in the joint strength by about 20% and ductility improvement of about 33% over as-welded samples. The BOP welds of TZM alloy processed with wLSP demonstrated an enhancement in strength by about 30% and lap welds demonstrated an increase in joint strength by 22%.
