To see the other types of publications on this topic, follow the link: Data depth.

Dissertations / Theses on the topic 'Data depth'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the top 50 dissertations / theses for your research on the topic 'Data depth.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Press on it, and we will generate automatically the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as pdf and read online its abstract whenever available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

McGaughey, Karen J. "Variance testing with data depth /." Search for this dissertation online, 2003. http://wwwlib.umi.com/cr/ksu/main.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Suau, Cuadros Xavier. "Human body analysis using depth data." Doctoral thesis, Universitat Politècnica de Catalunya, 2013. http://hdl.handle.net/10803/134801.

Full text
Abstract:
Human body analysis is one of the broadest areas within the computer vision field. Researchers have put a strong effort in the human body analysis area, specially over the last decade, due to the technological improvements in both video cameras and processing power. Human body analysis covers topics such as person detection and segmentation, human motion tracking or action and behavior recognition. Even if human beings perform all these tasks naturally, they build-up a challenging problem from a computer vision point of view. Adverse situations such as viewing perspective, clutter and occlusions, lighting conditions or variability of behavior amongst persons may turn human body analysis into an arduous task. In the computer vision field, the evolution of research works is usually tightly related to the technological progress of camera sensors and computer processing power. Traditional human body analysis methods are based on color cameras. Thus, the information is extracted from the raw color data, strongly limiting the proposals. An interesting quality leap was achieved by introducing the multiview concept. That is to say, having multiple color cameras recording a single scene at the same time. With multiview approaches, 3D information is available by means of stereo matching algorithms. The fact of having 3D information is a key aspect in human motion analysis, since the human body moves in a three-dimensional space. Thus, problems such as occlusion and clutter may be overcome with 3D information. The appearance of commercial depth cameras has supposed a second leap in the human body analysis field. While traditional multiview approaches required a cumbersome and expensive setup, as well as a fine camera calibration; novel depth cameras directly provide 3D information with a single camera sensor. Furthermore, depth cameras may be rapidly installed in a wide range of situations, enlarging the range of applications with respect to multiview approaches. 
Moreover, since depth cameras are based on infra-red light, they do not suffer from illumination variations. In this thesis, we focus on the study of depth data applied to the human body analysis problem. We propose novel ways of describing depth data through specific descriptors, so that they emphasize helpful characteristics of the scene for further body analysis. These descriptors exploit the special 3D structure of depth data to outperform generalist 3D descriptors or color based ones. We also study the problem of person detection, proposing a highly robust and fast method to detect heads. Such method is extended to a hand tracker, which is used throughout the thesis as a helpful tool to enable further research. In the remainder of this dissertation, we focus on the hand analysis problem as a subarea of human body analysis. Given the recent appearance of depth cameras, there is a lack of public datasets. We contribute with a dataset for hand gesture recognition and fingertip localization using depth data. This dataset acts as a starting point of two proposals for hand gesture recognition and fingertip localization based on classification techniques. In these methods, we also exploit the above mentioned descriptor proposals to finely adapt to the nature of depth data.%, and enhance the results in front of traditional color-based methods.
L’anàlisi del cos humà és una de les àrees més àmplies del camp de la visió per computador. Els investigadors han posat un gran esforç en el camp de l’anàlisi del cos humà, sobretot durant la darrera dècada, degut als grans avenços tecnològics, tant pel que fa a les càmeres com a la potencia de càlcul. L’anàlisi del cos humà engloba varis temes com la detecció i segmentació de persones, el seguiment del moviment del cos, o el reconeixement d'accions. Tot i que els essers humans duen a terme aquestes tasques d'una manera natural, es converteixen en un difícil problema quan s'ataca des de l’òptica de la visió per computador. Situacions adverses, com poden ser la perspectiva del punt de vista, les oclusions, les condicions d’il•luminació o la variabilitat de comportament entre persones, converteixen l’anàlisi del cos humà en una tasca complicada. En el camp de la visió per computador, l’evolució de la recerca va sovint lligada al progrés tecnològic, tant dels sensors com de la potencia de càlcul dels ordinadors. Els mètodes tradicionals d’anàlisi del cos humà estan basats en càmeres de color. Això limita molt els enfocaments, ja que la informació disponible prové únicament de les dades de color. El concepte multivista va suposar salt de qualitat important. En els enfocaments multivista es tenen múltiples càmeres gravant una mateixa escena simultàniament, permetent utilitzar informació 3D gràcies a algorismes de combinació estèreo. El fet de disposar d’informació 3D es un punt clau, ja que el cos humà es mou en un espai tri-dimensional. Això doncs, problemes com les oclusions es poden apaivagar si es disposa de informació 3D. L’aparició de les càmeres de profunditat comercials ha suposat un segon salt en el camp de l’anàlisi del cos humà. Mentre els mètodes multivista tradicionals requereixen un muntatge pesat i car, i una celebració precisa de totes les càmeres; les noves càmeres de profunditat ofereixen informació 3D de forma directa amb un sol sensor. 
Aquestes càmeres es poden instal•lar ràpidament en una gran varietat d'entorns, ampliant enormement l'espectre d'aplicacions, que era molt reduït amb enfocaments multivista. A més a més, com que les càmeres de profunditat estan basades en llum infraroja, no pateixen problemes relacionats amb canvis d’il•luminació. En aquesta tesi, ens centrem en l'estudi de la informació que ofereixen les càmeres de profunditat, i la seva aplicació al problema d’anàlisi del cos humà. Proposem noves vies per descriure les dades de profunditat mitjançant descriptors específics, capaços d'emfatitzar característiques de l'escena que seran útils de cara a una posterior anàlisi del cos humà. Aquests descriptors exploten l'estructura 3D de les dades de profunditat per superar descriptors 3D generalistes o basats en color. També estudiem el problema de detecció de persones, proposant un mètode per detectar caps robust i ràpid. Ampliem aquest mètode per obtenir un algorisme de seguiment de mans que ha estat utilitzat al llarg de la tesi. En la part final del document, ens centrem en l’anàlisi de les mans com a subàrea de l’anàlisi del cos humà. Degut a la recent aparició de les càmeres de profunditat, hi ha una manca de bases de dades públiques. Contribuïm amb una base de dades pensada per la localització de dits i el reconeixement de gestos utilitzant dades de profunditat. Aquesta base de dades és el punt de partida de dues contribucions sobre localització de dits i reconeixement de gestos basades en tècniques de classificació. En aquests mètodes, també explotem les ja mencionades propostes de descriptors per millor adaptar-nos a la naturalesa de les dades de profunditat.
APA, Harvard, Vancouver, ISO, and other styles
3

Aldrovandi, Lorenzo. "Depth estimation algorithm for light field data." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2018.

APA, Harvard, Vancouver, ISO, and other styles
4

Fu, Jiajun. "Human activities and status recognition using depth data." Thesis, Purdue University, 2015. http://pqdtopen.proquest.com/#viewpdf?dispub=1602904.

Full text
Abstract:

Human activities and status recognition is taking a more and more significant role in the healthcare area. Recognition systems can be based on many methods, such as video, motion sensor, accelerometer, etc. Depth data of Kinect sensor is a novel data type with 25 joints of information, which is popular in motion sensing games. The 25 joints include the head, neck, shoulders, elbows, wrists, hands, spine, hips, knees, ankles, and feet. This paper presents a recognition system based on these depth data. Depending on the characteristic from 5 kinds of statuses, a local coordinate frame was established and partial Euler angles were extracted as features. These Euler Angles can accurately describe the joints distribution in 3 dimensional space. With this kind of feature, a neural network model was built using 50000 sets of data to classify the daily activities into these 5 statuses. Experimental results of 10 volunteers’ action sequences in the same environment showed the accuracy was up to 97.96 percent. The recognition in dark environment had more than 90 percent accuracy.

APA, Harvard, Vancouver, ISO, and other styles
5

Figué, Valentin. "Depth prediction by deep learning." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-240593.

Full text
Abstract:
Knowing the depth information is of critical importance in scene understanding for several industrial projects such as self-driving cars for instance. Where depth inference from a single still image has taken a prominent place in recent studies with the outcome of deep learning methods, practical cases often offer useful additional information that should be considered early in the architecture of the design to benefit from them in order to improve quality and robustness of the estimates. Hence, this thesis proposes a deep fully convolutional network which allows to exploit the informations of either stereo or monocular temporal sequences, along with a novel training procedure which takes multi-scale optimization into account. Indeed, this thesis found that using multi-scale information all along the network is of prime importance for accurate depth estimation and greatly improves performances, allowing to obtain new state-of-theart results on both synthetic data using Virtual KITTI and also on realimages with the challenging KITTI dataset.
Att känna till djupet i en bild är av avgörande betydelse för scenförståelse i flera industriella tillämpningar, exempelvis för självkörande bilar. Bestämning av djup utifrån enstaka bilder har fått en alltmer framträdande roll i studier på senare år, tack vare utvecklingen inom deep learning. I många praktiska fall tillhandahålls ytterligare information som är högst användbar, vilket man bör ta hänsyn till då man designar en arkitektur för att förbättra djupuppskattningarnas kvalitet och robusthet. I detta examensarbete presenteras därför ett så kallat djupt fullständigt faltningsnätverk, som tillåter att man utnyttjar information från tidssekvenser både monokulärt och i stereo samt nya sätt att optimalt träna nätverken i multipla skalor. I examensarbetet konstateras att information från multipla skalor är av synnerlig vikt för noggrann uppskattning av djup och för avsevärt förbättrad prestanda, vilket resulterat i nya state-of-the-art-resultat på syntetiska data från Virtual KITTI såväl som på riktiga bilder fråndet utmanande KITTI-datasetet.
APA, Harvard, Vancouver, ISO, and other styles
6

Hofsetz, Christian. "Image-based rendering of range data with depth uncertainty /." For electronic version search Digital dissertations database. Restricted to UC campuses. Access is free to UC campus dissertations, 2003. http://uclibs.org/PID/11984.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Villarreal, Vance A. "Relationship between the sonic layer depth and mixed layer depth identified from U.S. Navy sea glider data." Thesis, Monterey, California: Naval Postgraduate School, 2014. http://hdl.handle.net/10945/44025.

Full text
Abstract:
Approved for public release; distribution is unlimited
The mixed layer depth (MLD) represents the upper ocean mixing, and the sonic layer depth (SLD) reveals the capacity of the upper ocean to trap acoustic energy and create a surface duct. A set of sea glider date from the Naval Oceanographic Office is used to identify the MLD and SLD at five locations. The maximum angle method is found to be the best among 17 existing MLD determination schemes of the four major methods (difference, gradient, curvature, and maximum angle). The maximum angle method is also found better than the currently used maximum value method in determining SLD. The optimally determined MLD and SLD by the maximum angle method from theNavy's glider data shows that one can swiftly, accurately, and objectively determine the MLD and SLD for operations in seas around the world.
APA, Harvard, Vancouver, ISO, and other styles
8

Genel, Kerim, and Jörgen Andersson. "3D-visualization of fairway margins, vessel hull versus depth data." Thesis, University of Gävle, Department of Technology and Built Environment, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:hig:diva-232.

Full text
Abstract:

Fledermaus is software where different kind of analysis with spatial data can be done. The main area where to use Fledermaus is related to hydrographical surveys. This study is aimed to test and analyse the way Swedish Maritime Administration (Sjöfartsverket) uses Fledermaus. Through step by step explaining how to do when measuring sea bed conditions from a vessel, this text is possible to use as a manual for the applications that are mentioned in this report.

Another thing that is treated is the squat effect that belongs to vessel dynamic motions. Test of visualization that concerning squat in Fledermaus is done, but with a negative result when squat in a perspective to show motions in height that can be up to about a metre is very hard in a terrain model of thousands of metres. By further tests by arranging the input data, several interesting diagrams have been created through Microsoft Excel where graphs show that the depths are affecting the squat effect. This is showed in same diagram but with two different scales to show the relationship between how a point at the vessel moves in height compared to the depth under the vessel when the vessel is navigating in the sea.


Fledermaus är en programvara där olika analyser med rumsliga data kan genomföras. Största användningsområdet är att använda Fledermaus till mätningar som är relaterade till sjömätning. Den här studien är inriktad till att testa och analysera applikationer som Sjöfartsverket använder sig av i Fledermaus. Genom att steg för steg förklara hur Fledermaus ska användas när bottenförhållanden ska mätas sett från ett fartyg, så blir texten även möjlig att använda som en manual till de applikationer i Fledermaus som är nämnda i denna rapport.

Det andra som behandlas är squateffekten som tillhör ett fartygs dynamiska rörelser. Test av visualisering som behandlar squat i Fledermaus är genomförd, dock med negativt resultat då squat i ett perspektiv med att visa rörelser i höjd som kan uppgå till runt en meter är väldigt svårt i en terrängmodell som sträcker sig tusentals meter. Dock genom vidare tester genom behandling av indata, har flertalet intressanta diagram skapats genom Microsoft Excel där kurvor visar att djupet inverkar på squateffekten. Detta visas genom att i samma diagram fast med två olika skalor visa förhållandet mellan hur en punkt på båten rör sig i höjd jämfört med att djupet under fartyget ändras då fartyget gör fart genom vattnet.

APA, Harvard, Vancouver, ISO, and other styles
9

Wellmann, Robin. "On data depth with application to regression models and tests." Berlin Logos-Verl, 2007. http://d-nb.info/988446421/04.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Wellmann, Robin. "On data depth with application to regression models and tests /." Berlin : Logos-Verl, 2008. http://d-nb.info/988446421/04.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Li, Xiang. "Depth data improves non-melanoma skin lesion segmentation and diagnosis." Thesis, University of Edinburgh, 2012. http://hdl.handle.net/1842/5867.

Full text
Abstract:
Examining surface shape appearance by touching and observing a lesion from different points of view is a part of the clinical process for skin lesion diagnosis. Motivated by this, we hypothesise that surface shape embodies important information that serves to represent lesion identity and status. A new sensor, Dense Stereo Imaging System (DSIS) allows us to capture 1:1 aligned 3D surface data and 2D colour images simultaneously. This thesis investigates whether the extra surface shape appearance information, represented by features derived from the captured 3D data benefits skin lesion analysis, particularly on the tasks of segmentation and classification. In order to validate the contribution of 3D data to lesion identification, we compare the segmentations resulting from various combinations of images cues (e.g., colour, depth and texture) embedded in a region-based level set segmentation method. The experiments indicate that depth is complementary to colour. Adding the 3D information reduces the error rate from 7:8% to 6:6%. For the purpose of evaluating the segmentation results, we propose a novel ground truth estimation approach that incorporates a prior pattern analysis of a set of manual segmentations. The experiments on both synthetic and real data show that this method performs favourably compared to the state of the art approach STAPLE [1] on ground truth estimation. Finally, we explore the usefulness of 3D information to non-melanoma lesion diagnosis by tests on both human and computer based classifications of five lesion types. The results provide evidence for the benefit of the additional 3D information, i.e., adding the 3D-based features gives a significantly improved classification rate of 80:7% compared to only using colour features (75:3%). The three main contributions of the thesis are improved methods for lesion segmentation, non-melanoma lesion classification and lesion boundary ground-truth estimation.
APA, Harvard, Vancouver, ISO, and other styles
12

Grankvist, Ola. "Recognition and Registration of 3D Models in Depth Sensor Data." Thesis, Linköpings universitet, Datorseende, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-131452.

Full text
Abstract:
Object Recognition is the art of localizing predefined objects in image sensor data. In this thesis a depth sensor was used which has the benefit that the 3D pose of the object can be estimated. This has applications in e.g. automatic manufacturing, where a robot picks up parts or tools with a robot arm. This master thesis presents an implementation and an evaluation of a system for object recognition of 3D models in depth sensor data. The system uses several depth images rendered from a 3D model and describes their characteristics using so-called feature descriptors. These are then matched with the descriptors of a scene depth image to find the 3D pose of the model in the scene. The pose estimate is then refined iteratively using a registration method. Different descriptors and registration methods are investigated. One of the main contributions of this thesis is that it compares two different types of descriptors, local and global, which has seen little attention in research. This is done for two different scene scenarios, and for different types of objects and depth sensors. The evaluation shows that global descriptors are fast and robust for objects with a smooth visible surface whereas the local descriptors perform better for larger objects in clutter and occlusion. This thesis also presents a novel global descriptor, the CESF, which is observed to be more robust than other global descriptors. As for the registration methods, the ICP is shown to perform most accurately and ICP point-to-plane more robust.
APA, Harvard, Vancouver, ISO, and other styles
13

Villagrá, Guilarte David. "Collaborative Localization and Mapping with Heterogeneous Depth Sensors." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-274341.

Full text
Abstract:
Simultaneous Localization and Mapping (SLAM) is the process in which a robot and other devices navigate in environments by simultaneously building a map of the surroundings and localizing itself within it. SLAM for single agents and specific sensors has matured during these last two decades. Nevertheless, the increasing demand for applications that require a high number of devices working together, with different types of sensors, have initiated and accelerated the interest for collaborative SLAM, and SLAM with heterogeneous sensors. This thesis proposes a collaborative SLAM framework that works with heterogeneous depth-based sensors, in particular, 3D LiDARs and stereo cameras. The framework is based on the SegMap framework, which makes use of a structural 3D segment representation of the map, and has a centralized structure that enables online multi-robot applications. Stereo-LiDAR support is enabled in the framework by a Stereo Estimation sub-module, which obtains a 3D point cloud from a stereo camera. Filtering of the stereo 3D point cloud and parameter optimization is performed in order to enhance the matching of segments from the stereo camera and 3D LiDAR. The system was evaluated on the KITTI dataset, in an offline fashion through its possible configurations. The results show that a vehicle containing a 3D LiDAR can be localized on a map created by a stereo camera, and vice-versa, enabling the generation of loop closures successfully when in an heterogeneous SLAM scenario. Furthermore, the influence of the system configuration and parameters of the framework on the heterogeneous localization performance is presented.
Simultaneous Localization and Mapping (SLAM) är processen där en robot och andra enheter navigerar i miljöer genom att samtidigt bygga en karta över omgivningen och lokalisera sig i den. SLAM för enskilda agenter och specifika sensorer har mognat under de senaste två decennierna. Den ökande efterfrågan på applikationer som kräver ett stort antal enheter som arbetar tillsammans och olika typer av sensorer har påskyndat intresset för samverkande SLAM och SLAM med heterogena sensorer. Det här examensarbetet föreslår ett samverkande SLAM-ramverk som fungerar med heterogena djupbaserade sensorer, särskilt 3D-LiDAR och stereokameror. Ramverket är baserat på SegMap-ramverket, som använder en strukturell 3D-segmentrepresentation av kartan, och har en centraliserad struktur som möjliggör online-multi-robotapplikationer. Stereo-LiDAR-stöd är aktiverat inom ramen för en stereomodul, som erhåller ett 3D-punktmoln från en stereokamera. Filtrering av stereo-3D-punktmoln och optimering av parametrar utförs för att förbättra matchningen av segment från stereokameran och 3D-LiDAR. Systemet utvärderades på KITTI-dataset på ett offline sätt genom dess möjliga konfigurationer. Resultaten visar att ett fordon som innehåller en 3D-LiDAR kan lokaliseras på en karta som skapats av en stereokamera, och vice versa, vilket möjliggör framställning av slingstängningar framgångsrikt i ett heterogent SLAM-scenario. Dessutom presenteras påverkan av systemkonfigurationen och parametrarna för ramverket på den heterogena lokaliseringsprestandan.
APA, Harvard, Vancouver, ISO, and other styles
14

Han, Xuejun. "On Nonparametric Bayesian Inference for Tukey Depth." Thesis, Université d'Ottawa / University of Ottawa, 2017. http://hdl.handle.net/10393/36533.

Full text
Abstract:
The Dirichlet process is perhaps the most popular prior used in the nonparametric Bayesian inference. This prior which is placed on the space of probability distributions has conjugacy property and asymptotic consistency. In this thesis, our concentration is on applying this nonparametric Bayesian inference on the Tukey depth and Tukey median. Due to the complexity of the distribution of Tukey median, we use this nonparametric Bayesian inference, namely the Lo’s bootstrap, to approximate the distribution of the Tukey median. We also compare our results with the Efron’s bootstrap and Rubin’s bootstrap. Furthermore, the existing asymptotic theory for the Tukey median is reviewed. Based on these existing results, we conjecture that the bootstrap sample Tukey median converges to the same asymp- totic distribution and our simulation supports the conjecture that the asymptotic consistency holds.
APA, Harvard, Vancouver, ISO, and other styles
15

Williamson, P. R. "Tomographic inversion of traveltime data in reflection seismology." Thesis, University of Cambridge, 1986. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.383956.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Ostnes, Runar. "Use of depth perception for the improved understanding of hydrographic data." Thesis, University of Plymouth, 2005. http://hdl.handle.net/10026.1/2114.

Full text
Abstract:
This thesis has reviewed how increased depth perception can be used to increase the understanding of hydrographic data First visual cues and various visual displays and techniques were investigated. From this investigation 3D stereoscopic techniques prove to be superior in improving the depth perception and understanding of spatially related data and a further investigation on current 3D stereoscopic visualisation techniques was carried out. After reviewing how hydrographic data is currently visualised it was decided that the chromo stereoscopic visualisation technique is preferred to be used for further research on selected hydrographic data models. A novel chromo stereoscopic application was developed and the results from the evaluation on selected hydrographic data models clearly show an improved depth perception and understanding of the data models.
APA, Harvard, Vancouver, ISO, and other styles
17

Sexton, Paul. "3D velocity-depth model building using surface seismic and well data." Thesis, Durham University, 1998. http://etheses.dur.ac.uk/4824/.

Full text
Abstract:
The objective of this work was to develop techniques that could be used to rapidly build a three-dimensional velocity-depth model of the subsurface, using the widest possible variety of data available from conventional seismic processing and allowing for moderate structural complexity. The result is a fully implemented inversion methodology that has been applied successfully to a large number of diverse case studies. A model-based inversion technique is presented and shown to be significantly more accurate than the analytical methods of velocity determination that dominate industrial practice. The inversion itself is based around two stages of ray-tracing. The first takes picked interpretations in migrated-time and maps them into depth using a hypothetical interval velocity field; the second checks the validity of this field by simulating fully the kinematics of seismic acquisition and processing as accurately as possible. Inconsistencies between the actual and the modelled data can then be used to update the interval velocity field using a conventional linear scheme. In order to produce a velocity-depth model that ties the wells, the inversion must include anisotropy. Moreover, a strong correlation between anisotropy and lithology is found. Unfortunately, surface seismic and well-tie data are not usually sufficient to uniquely resolve all the anisotropy parameters; however, the degree of non-uniqueness can be measured quantitatively by a resolution matrix which demonstrates that the model parameter trade-offs are highly dependent on the model and the seismic acquisition. The model parameters are further constrained by introducing well seismic traveltimes into the inversion. These introduce a greater range of propagation angles and reduce the non- uniqueness.
APA, Harvard, Vancouver, ISO, and other styles
18

Rasure, James O. "Aerosol optical depth retrieval with AVIRIS data : a test Of Tafkaa." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2002. http://library.nps.navy.mil/uhtbin/hyperion-image/02sep%5FRasure.pdf.

Full text
Abstract:
Thesis (M.S. in Meteorology and Physical Oceanography)--Naval Postgraduate School, September 2002.
Thesis advisor(s): Philip A. Durkee, Kurt E. Nielsen. Includes bibliographical references (p. 39-40). Also available online.
APA, Harvard, Vancouver, ISO, and other styles
19

Michelioudakis, Dimitrios. "Uncertainty analysis of depth predictions from seismic reflection data using Bayesian statistics." Thesis, Durham University, 2018. http://etheses.dur.ac.uk/12878/.

Full text
Abstract:
Velocity model building is a critical step in seismic reflection data processing. An optimum velocity field can lead to well focused images in time or depth domains. Taking into account the noisy and band limited nature of the seismic data, the computed velocity field can be considered as our best estimate of a set of possible velocity fields. Hence, all the calculated depths and the images produced are just our best approximation of the true subsurface. This study examines the quantification of uncertainty of the depths to drilling targets from two dimensional (2D) seismic reflection data using Bayesian statistics. The approach was tested in Mentelle Basin (south west of Australia), aiming to make depths predictions for stratigraphic targets of interest related with the International Ocean Discovery Program (IODP), leg 369. For the purposes of the project, Geoscience Australia 2D seismic profiles were reprocessed. In order to achieve robust predictions, the seismic reflection processing sequence was focused on improving the temporal resolution of the data by using deterministic deghosting filters in pre-stack and post-stack domains. The filters, combined with isotropic/anisotropic pre-stack time and depth migration algorithms, produced very good results in terms of seismic resolution and focusing of subsurface features. The application of the deghosting filters was the critical step for the subsequent probabilistic depth estimation of drilling targets. The best estimate of the velocity field along with the migrated seismic data were used as input to the Bayesian algorithm. The analysis, performed in one seismic profile intersecting the site location MBAS-4A, produced robust depth predictions for lithological boundaries of interest compared to the observed depths as reported in the IODP expedition. The significance of the result is more pronounced taking into account the complete lack of independent velocity information. 
Petrophysical information collected from the expedition was used to perform well-seismic tie, mapping the lithological boundaries with the reflectivity in the seismic profile. A very good match between observed and modelled traces was achieved and a new interpretation of the Mentelle Basin lithological boundaries in seismic image was provided. Velocity information from sonic logs was also implemented to perform anisotropic pre-stack depth migration. The migrated image successfully mapped the subsurface targets to their correct depth location while preserving the focus of the image. The pre-drilling depth estimation of subsurface targets using Bayesian statistics can be considered as a great example of successfully quantifying the uncertainty in depths and effectively merging seismic reflection data processing with statistical analysis. The derived well-seismic tie in MBAS-4A will be a valuable tool towards a more complete regional interpretation of the Mentelle Basin.
APA, Harvard, Vancouver, ISO, and other styles
20

Xie, Yiping. "Machine Learning for Inferring Depth from Side-scan Sonar Images." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-264835.

Full text
Abstract:
Underwater navigation using Autonomous Underwater Vehicles (AUVs), which is significant for marine science research, highly depends on the acoustic method, sonar. Typically, AUVs are equipped with side-scan sonars and multibeam sonars at the same time, since both have their advantages and limitations. Side-scan sonars have a much wider range than multibeam sonars and at the same time are much cheaper, yet they cannot provide accurate depth measurements. This thesis aims at investigating whether a machine-interpreted method could be used to translate side-scan sonar data to multibeam data with high accuracy, so that underwater navigation could be done by AUVs equipped only with side-scan sonars. The approaches considered in this thesis are based on machine learning methods, including generative models and discriminative models. The objective of this thesis is to investigate the feasibility of machine learning based models to infer the depth from side-scan sonar images. Different models, including regression and Generative Adversarial Networks, are tested and compared. Different CNN-based architectures such as U-Net and ResNet are tested and compared as well. As an experimental trial, this project has already shown the ability and great potential of machine learning based methods to extract latent representations from side-scan sonars and infer the depth with reasonable accuracy. Further improvements could be made to performance and stability, to be potentially verified on AUV platforms in real time.
Underwater navigation with autonomous underwater vehicles (AUVs) is important for marine science research and depends strongly on the type of sonar used. AUVs are usually equipped with both side-scan sonar and multibeam sonar, since each has its advantages and limitations. Side-scan sonar has a wider range than multibeam sonar and is at the same time much cheaper, but cannot provide accurate depth measurements. This thesis investigates whether machine learning methods could be used to translate side-scan data into multibeam data with high accuracy, so that underwater navigation could be performed by AUVs equipped only with side-scan sonar. The approach in this thesis is based on different machine learning methods, including generative and discriminative models. The aim is to investigate whether machine-learning-based models can infer the sea depth from side-scan data alone. The models tested and compared include regression and generative adversarial networks. Different CNN-based architectures such as U-Net and ResNet are also tested and compared. As an experimental trial, this project has already demonstrated the ability and great potential of machine-learning-based methods to extract latent representations from side-scan sonar and estimate the depth with reasonable accuracy. Further improvements could be made to performance and stability, which could potentially be verified on AUV platforms in real time.
APA, Harvard, Vancouver, ISO, and other styles
21

Bourgeois, Adèle. "On the Restriction of Supercuspidal Representations: An In-Depth Exploration of the Data." Thesis, Université d'Ottawa / University of Ottawa, 2020. http://hdl.handle.net/10393/40901.

Full text
Abstract:
Let $\mathbb{G}$ be a connected reductive group defined over a p-adic field F which splits over a tamely ramified extension of F, and let G = $\mathbb{G}(F)$. We also assume that the residual characteristic of F does not divide the order of the Weyl group of $\mathbb{G}$. Following J.K. Yu's construction, the irreducible supercuspidal representation constructed from the G-datum $\Psi$ is denoted $\pi_G(\Psi)$. The datum $\Psi$ contains an irreducible depth-zero supercuspidal representation, which we refer to as the depth-zero part of the datum. Under our hypotheses, the J.K. Yu Construction is exhaustive. Given a connected reductive F-subgroup $\mathbb{H}$ that contains the derived subgroup of $\mathbb{G}$, we study the restriction $\pi_G(\Psi)|_H$ and obtain a description of its decomposition into irreducible components along with their multiplicities. We achieve this by first describing a natural restriction process from which we construct H-data from the G-datum $\Psi$. We then show that the obtained H-data, and conjugates thereof, construct the components of $\pi_G(\Psi)|_H$, thus providing a very precise description of the restriction. Analogously, we also describe an extension process that allows one to construct G-data from an H-datum $\Psi_H$. Using Frobenius Reciprocity, we obtain a description for the components of $\Ind_H^G\pi_H(\Psi_H)$. From the obtained description of $\pi_G(\Psi)|_H$, we prove that the multiplicity in $\pi_G(\Psi)|_H$ is entirely determined by the multiplicity in the restriction of the depth-zero piece of the datum. Furthermore, we use Clifford theory to obtain a formula for the multiplicity of each component in $\pi_G(\Psi)|_H$. As a particular case, we take a look at the regular depth-zero supercuspidal representations and obtain a condition for a multiplicity-free restriction.
Finally, we show that our methods can also be used to define a restriction of Kim-Yu types, allowing one to study the restriction of irreducible representations which are not supercuspidal.
APA, Harvard, Vancouver, ISO, and other styles
22

Legg, Cole C. "Using publicly available financial data to measure production depth in the automobile industry." Thesis, Massachusetts Institute of Technology, 2020. https://hdl.handle.net/1721.1/127930.

Full text
Abstract:
Thesis: S.B., Massachusetts Institute of Technology, Department of Mechanical Engineering, May, 2020
Cataloged from the official PDF of thesis.
Includes bibliographical references (page 14).
Often companies must decide whether to manufacture a part on their own or to outsource the manufacturing of that part to a supplier. The results of these "make-or-buy" decisions have impacts on that company's manufacturing competencies and strategies moving forward [1]. They also compound over time to define that company's production depth. A company's production depth is defined as the ratio of value-adding content that a company creates itself [2]. While "make-or-buy" decisions have clear implications for a company's long-term strategies, the relationship between a company's production depth and its profitability has not yet been studied, as there is not a defined way to measure production depth from a company's publicly available financial data. This study examines two different methods of estimating production depth using publicly available financial data.
The first method uses the ratio of raw and in-progress materials versus finished goods in a company's inventories to represent the company's production depth. The second method estimates production depth as the ratio of the difference between the company's manufacturing cost and total trade purchases to its total cost of manufacturing. This study used the first method to evaluate different automotive companies' production depth in 2018. This study also examines BMW's production depth using both methods. The first method of measuring production depth is advantageous because all public automotive companies published the inventory data necessary to make the calculation. The second method is advantageous because it takes into account manufacturing costs beyond just material costs.
While there were no statistically significant relationships found between this study's production depth estimates and profitability, applying these two methods to automotive companies allowed us to gain insight into estimating production depth using publicly available financial data.
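The two estimates described above are simple ratios. The sketch below is a hypothetical illustration, not code from the thesis: the function names and figures are invented, and the first method is read here as the raw/in-progress share of total inventories, which is one plausible interpretation of "ratio of raw and in-progress materials versus finished goods".

```python
# Illustrative sketch of the two production-depth estimates (hypothetical).

def depth_from_inventories(raw_and_wip: float, finished_goods: float) -> float:
    """Method 1 (assumed reading): share of raw and in-progress materials
    in total inventories."""
    return raw_and_wip / (raw_and_wip + finished_goods)

def depth_from_costs(manufacturing_cost: float, trade_purchases: float) -> float:
    """Method 2: (manufacturing cost - total trade purchases) / manufacturing cost."""
    return (manufacturing_cost - trade_purchases) / manufacturing_cost

# Hypothetical figures, e.g. in billions of a currency unit:
print(depth_from_inventories(4.0, 6.0))  # ≈ 0.4
print(depth_from_costs(80.0, 56.0))      # ≈ 0.3
```

Either ratio falls between 0 (everything bought in) and 1 (everything made in-house), which is what makes them comparable across companies.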
by Cole C. Legg.
APA, Harvard, Vancouver, ISO, and other styles
23

Correia, José Diogo Madureira. "Visual and depth perception unit for Atlascar2." Master's thesis, Universidade de Aveiro, 2017. http://hdl.handle.net/10773/22498.

Full text
Abstract:
Master's degree in Mechanical Engineering
This thesis is focused on the installation of multiple Light Detection And Ranging and vision-based sensors on a full-sized mobile platform, ATLASCAR 2. This vehicle is a Mitsubishi i-MiEV. In the scope of this work it will be equipped with two planar scanners, a 3D scanner and a camera. The sensors will be installed at the vehicle's front, supported by an infrastructure built from aluminium profile and connected to the vehicle's chassis. All sensors are powered by the car's low-voltage circuit and controlled by a switchboard placed in the trunk alongside a processing unit. Sensor calibration is accomplished using a calibration package developed at the Laboratory of Automation and Robotics, to which an option to calibrate a new 3D sensor, the Velodyne Puck VLP-16, was added. After the sensor calibration, and to demonstrate the functionalities of the platform, an application was developed that merges the data from the Light Detection And Ranging sensors, properly referenced, into a single frame and computes and represents the free space for navigation around the vehicle.
This work concerns the installation of Light Detection And Ranging and vision sensors on a full-scale mobile platform, ATLASCAR 2. The vehicle is a Mitsubishi i-MiEV which, within the scope of this work, is equipped with two planar scanners, a 3D scanner and a camera. These sensors are installed at the front of the vehicle, supported by an infrastructure built from aluminium profile and fixed to its chassis. The sensors are powered by the vehicle's low-voltage circuit and controlled by an electrical panel located in the trunk together with the processing unit. Sensor calibration was performed with a multisensor calibration package developed at the Laboratory of Automation and Robotics, to which the option of calibrating a new 3D sensor, the Velodyne Puck VLP-16, was added. After calibrating the sensors, and to demonstrate the platform's functionality, an application was developed that combines the properly referenced data from the Light Detection And Ranging sensors and computes and represents the space available for navigation around the vehicle.
APA, Harvard, Vancouver, ISO, and other styles
24

Helgesen, Hans Kristian. "Anisotropic depth migration of converted wave data, inversion of multicomponent data to estimate elastic parameters at the seafloor and one-dimensional data-driven inversion." Doctoral thesis, Norwegian University of Science and Technology, Department of Physics, 2006. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-2286.

Full text
Abstract:

The increasing demand for oil and gas in the world today drives the need for new and improved methods for identifying hydrocarbon prospects. The petroleum industry uses information about the subsurface in the exploration and production of oil and gas. The industry's tendency to explore deeper waters and more geologically complex areas requires reliable and more robust methods for extracting such information.

This thesis illustrates possible strategies for using seismic reflection data in the inversion for subsurface earth properties. One strategy which is the traditional approach in seismic is to consider inversion as a stepwise procedure consisting of a model-driven global reflectivity imaging process (migration) followed by target-related elastic inversion of the reflectivity information into earth property parameters.

In this thesis a method describing wave equation prestack depth migration of converted wave data in anisotropic media is presented. The migration is accomplished by numerical wavefield extrapolation in the frequency-space domain using precomputed space-variant filter operators. Imaging is performed by crosscorrelating the source wavefield with the data wavefield at each depth level. Data examples demonstrate good dip response and correct kinematic behavior and illustrate the method's ability to handle complex multi-layer models with a relatively high degree of anisotropy.
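In symbols, a migration scheme of the kind described, downward extrapolation followed by crosscorrelation, is often written as follows. This is a standard textbook formulation with assumed notation, not necessarily the exact operators used in the thesis:

```latex
% Downward extrapolation of the data wavefield D by one depth step,
% using a precomputed space-variant operator W:
\begin{equation}
  D(x,\, z + \Delta z,\, \omega) \;=\; W(x,\, \omega,\, \Delta z)\, D(x,\, z,\, \omega)
\end{equation}
% Zero-lag crosscorrelation imaging condition: the source wavefield S is
% correlated with the data wavefield at each depth level, summed over frequency:
\begin{equation}
  I(x,\, z) \;=\; \sum_{\omega} S(x,\, z,\, \omega)\, \overline{D(x,\, z,\, \omega)}
\end{equation}
```

The image $I$ is large where source and receiver wavefields coincide in time and space, i.e. at reflectors, which is why the imaging condition focuses energy only when the velocity (and here anisotropy) model is adequate.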

By considering seismic inversion as a stepwise approach, this thesis also presents a method for inversion of reflection information into medium parameters. The method provides estimation of density and P-wave and S-wave velocities at the seafloor by inversion of the acoustic-elastic PP reflection coefficient estimated at the seafloor. The PP reflection coefficient is calculated in the frequency-slowness domain from seafloor measurements of the pressure and the vertical component of the particle velocity. The algorithm gives estimates of seafloor parameters in good agreement with the true model parameters.

Another strategy for using seismic data in the inversion for subsurface earth properties is to perform inversion as a data-driven procedure where the medium parameters are directly inverted for. In this thesis a new inverse scattering method for the estimation of the medium properties of a one-dimensional acoustic layered medium from single scattering data is presented. The method provides an explicit, non-iterative and fully data-driven solution of the inverse one-dimensional scattering problem.

APA, Harvard, Vancouver, ISO, and other styles
25

Theerthagiri, Dinesh. "Reversing Malware : A detection intelligence with in-depth security analysis." Thesis, Linköping University, Department of Electrical Engineering, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-52058.

Full text
Abstract:

More money nowadays moves online, and it is very understandable that criminals want to make more money online as well, because these days banks don't have large sums of money in their cash boxes. Since there are many internal risks involved in robbing a bank, criminals have found many other ways to commit crimes online, at much lower risk. The first stage was email-based phishing, but circumstances later changed again.

Authentication methods and the security of online banks have improved over time. This has drastically reduced the effects of phishing based on emails and fraudulent websites. The next level of online bank fraud is called banking Trojans. These Trojans infect the online customers of banks, monitor the customers' activities, and use their authenticated sessions to steal the customers' money.

A lot of money is made by these kinds of attacks. Comparatively few perpetrators have been caught, and the problem is getting worse day by day. To gain a better understanding of this problem, I selected a recent malware sample named SilentBanker. It had the capability of attacking more than 400 banks. This thesis presents the problem in general and includes my results from studying the behaviour of the SilentBanker Trojan.

APA, Harvard, Vancouver, ISO, and other styles
26

Thati, Satish Kumar, and Venkata Praneeth Mareedu. "Determining the Quality of Human Movement using Kinect Data." Thesis, Blekinge Tekniska Högskola, Institutionen för tillämpad signalbehandling, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-13819.

Full text
Abstract:
Health is one of the most important elements in every individual's life. Even though there has been much advancement in science, the quality of healthcare has never been up to the mark. This appears to be especially true in the field of physiotherapy. Physiotherapy is the analysis of human joints and bodies and the provision of remedies for any pains or injuries that might have affected the physiology of a body. To give patients top-notch health analysis and treatment, either the number of doctors should increase, or there should be an alternative replacement for a doctor. Our Master's thesis is aimed at developing a prototype which can aid in providing healthcare of a high standard to the millions.
Methods: The Microsoft Kinect SDK 2.0 is used to develop the prototype. The study shows that Kinect can be used both as a marker-based and as a markerless system for tracking human motion. The angles, in degrees, formed by the motion of five joints, namely the shoulder, elbow, hip, knee and ankle, were calculated. The device has infrared, depth and colour sensors. Depth data is used to identify the parts of the human body using pixel-intensity information, and the located parts are mapped onto the RGB colour frame. The images resulting from the Kinect skeleton mode were treated as the output of the markerless system and used to calculate the angles of the same joints. In this project, data generated from the movement-tracking algorithm for the Posture Side and Deep Squat Side movements are collected and stored for further evaluation.
Results: Based on the data collected, our system automatically evaluates the quality of the movement performed by the user. The system detected problems in static posture and the deep squat, based on feedback on our system from a physiotherapist.
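A joint angle of the kind measured here can be computed from three tracked 3D positions with a standard dot-product formula. The sketch below is illustrative only: plain Python with hypothetical coordinates, not Kinect SDK code.

```python
import math

def joint_angle(a, b, c):
    """Angle at joint b (in degrees) formed by 3D points a-b-c,
    e.g. shoulder-elbow-wrist for the elbow angle."""
    v1 = [a[i] - b[i] for i in range(3)]  # vector from joint to first point
    v2 = [c[i] - b[i] for i in range(3)]  # vector from joint to second point
    dot = sum(v1[i] * v2[i] for i in range(3))
    n1 = math.sqrt(sum(x * x for x in v1))
    n2 = math.sqrt(sum(x * x for x in v2))
    return math.degrees(math.acos(dot / (n1 * n2)))

# Hypothetical example: one segment straight up, the other straight out.
print(joint_angle((0, 1, 0), (0, 0, 0), (1, 0, 0)))  # ≈ 90 degrees
```

The same computation applies whether the three points come from physical markers or from the skeleton-tracking output, which is what makes the marker-based/markerless comparison possible.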
APA, Harvard, Vancouver, ISO, and other styles
27

Cöster, Jonatan. "The effects of shadows on depth perception in augmented reality on a mobile device." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-249663.

Full text
Abstract:
In the visual perception of depth in computer graphics, people rely on a number of cues, including depth of field, relative size of objects in perspective, and shadows. This work focuses on shadows in Augmented Reality. An experiment was performed in order to measure the effects of having virtual objects cast shadows on real objects. Users performed the task of placing a virtual object on a physical table, in an Augmented Reality environment displayed on a mobile device. The virtual object was either a cube or a sphere. The effects of having shadows enabled was measured by time to task completion and the positional error. Qualitative measurements of the user experience were also made, through the use of questionnaires. The results showed a decrease in both positional error and time to task completion when shadows were enabled. The results also indicated that users placed the objects with a higher degree of certainty when shadows were enabled. The quantitative and qualitative results of the experiment showed that users found it easier to perceive the position of the virtual object with respect to the physical object when the virtual object cast shadows.
People use several different cues to perceive depth in computer graphics, including depth of field, the relative size of objects in perspective, and shadows. This study focuses on shadows in augmented reality. An experiment was conducted to measure the effects of letting virtual objects cast shadows on physical objects. Users performed a task of placing a virtual object on a physical table in an augmented reality environment displayed on a mobile device. The virtual object was either a cube or a sphere. To measure the effects of the cast shadows, the time it took users to complete the task and the positional error were measured. Qualitative measurements were also made using questionnaires. The results showed that both the positional error and the task completion time decreased when the objects cast shadows. The results also indicated that users seemed more certain when placing the objects if the objects cast shadows. The quantitative and qualitative results of the experiment showed that users found it easier to judge the position of the virtual object relative to the physical object when the virtual object cast shadows.
APA, Harvard, Vancouver, ISO, and other styles
28

Samson, Esuene M. A. "A critical evaluation of the "Tilt-Depth" method of magnetic data interpretation : application to aeromagnetic data from North Eastern (NE) Nigeria." Thesis, University of Leeds, 2012. http://etheses.whiterose.ac.uk/4925/.

Full text
Abstract:
To simplify the complex total magnetic field intensity (T) on datasets obtained from locations close to the geomagnetic Equator (inclinations |α| ≤ 20°) such datasets are routinely reduced-to-equator (RTE), since they cannot be stably reduced-to-pole (RTP). RTE anomalies tend to have small amplitudes and exhibit azimuth-based anisotropy, unlike RTP anomalies. Anisotropy describes the dependence of the amplitude and shape of an RTE anomaly on the strike direction of its source. For example, an East-West striking contact/fault will generate a strong RTE anomaly response whereas a North-South striking equivalent will not. Where adjacent sources occur, anisotropy causes interference between anomalies, displacing anomalies relative to their sources. This makes using magnetic data to map structures in regions that are close to the geomagnetic equator difficult or potentially of limited value. This thesis develops a strategy to interpret RTE datasets and applies it to determine the basement structure in NE Nigeria where |α| ≤ 8°. This area has >50% of the basement concealed beneath Cretaceous and Quaternary sediments of the Benue Trough and Chad basin, respectively. The aim of the study is to structurally map the basement underlying the Benue and Chad rifted basins in NE Nigeria, by tracing and determining the depths of basement faults and associated structures. The first-order derivative-based "Tilt-Depth" method has been evaluated to determine its effectiveness when applied to RTE datasets to determine the location and depth of structures. The method was tested first using RTE and RTP equivalents of synthetic  datasets obtained from profiles across East-West striking, 2D contacts at various depths, inclinations of effective magnetisation (ϕ), and dips (d). RTP datasets were used throughout as reference models. Errors in "Tilt-Depth" method estimates were invariant to changes in depth, but sensitive to changes in ϕ and d of sources. 
At error limits of 0-20%, the method effectively estimates locations and depths of 2D contacts when the dip is within the 75 ≤ d° ≤ 105 range, the inclination of remanent magnetisation relative to induced magnetisation is within the 155 ≤ β° ≤ 205 range (magnetisations are collinear), and the Koenigsberger ratio (Q) of remanent to induced magnetisation amplitudes is ≤ 1. Relationships between Q, α, β and ϕ suggest that the simplification of remanence-laden anomalies due to magnetisations being collinear results from deviations of ϕ from α of ≤ 12° when Q ≤ 1. Similar deviations occur between ϕ and α, for all β values, when Q ≤ 0.2. Hence, remanent magnetisation is negligible for RTP or RTE datasets when a priori information suggests Q ≤ 0.2. The "Tilt-Depth" method was further tested for anisotropy-induced anomaly interference effects using RTP or RTE versions of the Complex "Bishop" Model (CBM) and Tanzania grids. The CBM grid contains 2D contacts of various strikes and three-dimensional (3D) sources with non-2D contacts at various depths (all precisely known), and satisfies the d, ϕ and Q requirements above. The Tanzania grid presented a real dataset from a Karoo rift basin, where more randomly striking 2D contacts occur at unknown depths. For comparison, the second vertical derivative, analytic signal amplitude, local wavenumber, and the horizontal gradient magnitudes of Ѳ (HGM(Ѳ)) and  (HGM()) methods were also tested using these grids.
Locations estimated from all these methods show that: (1) Sources of all shapes and strikes are correctly imaged on RTP grids; (2) North-South striking 2D contacts are not imaged at all on RTE datasets, but can be inferred from linear alignments of stacked short-wavelength East-West striking anomalies; (3) 2D contacts with strikes ranging from N045 to N135° are correctly imaged on RTE datasets; (4) Anomalies from poorly isolated 2D contacts with N±020° strikes interfere to further complicate RTE datasets, making it difficult to correctly image these sources; and (5) RTE anomalies from 3D sources tend to smear in an East-West direction, extending such anomalies well past the edges of their sources along this direction. These North-South striking non-2D edges are not imaged at all, whilst their East-West striking non-2D (Northern and Southern) edges are correctly imaged. Depths estimated for 2D and non-2D contacts with strikes ranging from N045 to N135° from RTP and RTE versions of the CBM grids, using the local wavenumber, analytic signal amplitude and |Ѳ| = 27°-based "Tilt-Depth" methods, show that: (1) The "Tilt-Depth" and local wavenumber methods underestimate the actual depth of sources, while the analytic signal amplitude method provided both severely underestimated and overestimated depths; thus, "Tilt-Depth" and local wavenumber estimates were easier to utilise and interpret; (2) The "Tilt-Depth" and local wavenumber methods underestimate 2D contacts from RTP and RTE grids by up to 25 and 35% of their actual depths, respectively; (3) The "Tilt-Depth" and local wavenumber methods, respectively, underestimate depths of East-West striking non-2D edges of 3D sources by about 35 and 30% from the RTP grid; and (4) The "Tilt-Depth" method consistently underestimates non-2D contacts from RTE grids by up to 40%. Using knowledge gained from the above tests, all the methods were applied to a NE Nigeria (RTE) dataset, to delineate basement structures in the area.
The dataset was a 1 km upward-continued grid with 1 km x 1 km cell size, and extended well beyond NE Nigeria into the Niger, Chad and Cameroon Republics. While basement depths were estimated from the dataset using the "Tilt-Depth" and local wavenumber methods only, these methods and the second vertical derivative, analytic signal amplitude, local wavenumber, as well as the horizontal gradient magnitudes of Ѳ (HGM(Ѳ)) and  (HGM()) methods, were used to map source edge locations. A basement structure map of NE Nigeria was obtained using the above methods and found not to be dominated by North-South striking faults. Instead, the basement is dissected mainly by near-vertical, NE-SW trending faults against which NW-SE or E-W trending faults terminate. The relationship between these inferred faults, basement horsts, volcanic plugs, basement depressions and outcrop information suggests that rifting was episodic, as the mainly northeast-directed rift propagation was occasionally deflected by transcurrent faults to relieve differential stresses built up from wall-rock and/or crustal resistance. Apparent stress-relief features include the Yola basin, flood basalts, the Lamurde Anticline and the Kaltungo Inlier. A number of isolated depocenters, mainly half grabens, with sediment thickness exceeding 11 km, seem to occur in NE Nigeria. Outside these depocenters, the basement occurs at depths generally shallower than 0.5 km, except where intra-basinal horsts occur, at depths shallower than 2.5 km. These depths agree well with well information and seismic data interpretation, and show the SW Chad basin depocenter to be isolated from adjoining basins in the Cameroon, Chad and Niger Republics.
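For reference, the tilt angle Ѳ used throughout is commonly defined from the derivatives of the total field T, and for a vertical 2D contact it obeys a simple depth relation. The formulation below is the standard one from the literature, stated here with assumed notation; the |Ѳ| = 27° variant mentioned above follows because arctan(0.5) ≈ 26.6°:

```latex
% Tilt angle: vertical derivative over total horizontal gradient of T.
\begin{equation}
  \theta \;=\; \arctan\!\left(
    \frac{\partial T / \partial z}
         {\sqrt{(\partial T / \partial x)^2 + (\partial T / \partial y)^2}}
  \right)
\end{equation}
% For a vertical contact, with h the horizontal distance from the contact
% and z_c the depth to its top:
\begin{equation}
  \tan\theta \;=\; \frac{h}{z_c}
\end{equation}
```

Hence the depth z_c equals half the horizontal distance between the θ = ±45° contours, while the full distance between the θ ≈ ±27° contours (where tan θ = ±0.5) gives z_c directly, so depth is read from contour spacing rather than amplitude.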
APA, Harvard, Vancouver, ISO, and other styles
29

Gooding, Linda Wells. "Effects of retinal disparity depth cues on cognitive workload in 3-D displays." Diss., This resource online, 1991. http://scholar.lib.vt.edu/theses/available/etd-08062007-094403/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Brodd-Reijer, Christoffer. "Dance quantification with Kinect : Adjusting music volume by using depth data from a Kinect sensor." Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-181928.

Full text
Abstract:
Humans interact with computers in many various ways. Different interaction models are suited for different situations and tasks. This thesis explores the motion based interaction made possible by the Kinect for Windows device. A stand-alone library is built for interpreting, analyzing and quantifying movements in a room. It highlights the complexity in movement analysis in real time and discusses various algorithms for interpolation and filtering. The finished library is integrated into an existing music application by expanding its plugin system, allowing users to control the volume level of the playing music with the quantity of their body movements. The results are evaluated by subjective interviews with test subjects and objective measurements such as execution time and memory consumption. The results show that it is possible to properly quantify movement in real time with little memory consumption while still getting satisfying results, ending in a greater incentive to dance.
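As a rough illustration of the kind of quantification described, one simple scheme is to sum per-frame joint displacements and smooth the result into a volume level. This sketch is entirely hypothetical: the function names, the smoothing constant and the full-scale value are invented, not taken from the thesis or the Kinect SDK.

```python
# Hypothetical sketch: movement quantity -> smoothed music volume in [0, 1].

def movement_quantity(prev_joints, joints):
    """Sum of Euclidean displacements of all tracked joints between frames."""
    total = 0.0
    for (x0, y0, z0), (x1, y1, z1) in zip(prev_joints, joints):
        total += ((x1 - x0) ** 2 + (y1 - y0) ** 2 + (z1 - z0) ** 2) ** 0.5
    return total

def update_volume(volume, quantity, full_scale=2.0, alpha=0.2):
    """Exponentially smoothed volume level; full_scale metres of total
    joint motion per frame maps to maximum volume (both values invented)."""
    target = min(quantity / full_scale, 1.0)
    return (1 - alpha) * volume + alpha * target
```

The smoothing step stands in for the interpolation and filtering discussed in the thesis: without it, sensor noise in the joint positions would make the volume jitter audibly from frame to frame.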
APA, Harvard, Vancouver, ISO, and other styles
31

Zama, Ramirez Pierluigi. "Estimation of depth and semantics by a CNN trained on computer-generated and real data." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2017. http://amslaurea.unibo.it/12921/.

Full text
Abstract:
"Depth estimation" or "depth extraction" refers to a family of techniques that aim to recover a representation of the three-dimensional structure of a scene from two-dimensional images; in other words, the distance from the camera is sought for every point in the observed scene. "Semantic segmentation", in turn, denotes the set of techniques whose goal is to partition an image into groups, where each group consists of elements of the same class. This thesis addresses the two problems jointly using a convolutional neural network, specifically a Fully Convolutional Neural Network, in the urban environment. Since datasets of images with corresponding semantic and depth ground truth are scarce, a synthetic dataset was created from a three-dimensional model of a city, developed using the Blender software. The network is first trained on the synthetic data and then fine-tuned on a dataset of real images, CityScapes. The network trained this way achieves good results on both objectives, reaching good accuracy and low error in both depth prediction and semantic segmentation. Moreover, predicting the two outputs simultaneously allows for lower computation times than running the semantic and depth prediction processes separately.
APA, Harvard, Vancouver, ISO, and other styles
32

Honauer, Katrin [Verfasser], and Bernd [Akademischer Betreuer] Jähne. "Performance Metrics and Test Data Generation for Depth Estimation Algorithms / Katrin Honauer ; Betreuer: Bernd Jähne." Heidelberg : Universitätsbibliothek Heidelberg, 2019. http://d-nb.info/1177045168/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

He, Yuting. "RVD2: An ultra-sensitive variant detection model for low-depth heterogeneous next-generation sequencing data." Digital WPI, 2014. https://digitalcommons.wpi.edu/etd-theses/499.

Full text
Abstract:
Motivation: Next-generation sequencing technology is increasingly being used for clinical diagnostic tests. Unlike research cell lines, clinical samples are often genomically heterogeneous due to low sample purity or the presence of genetic subpopulations. Therefore, a variant calling algorithm for calling low-frequency polymorphisms in heterogeneous samples is needed. Result: We present a novel variant calling algorithm that uses a hierarchical Bayesian model to estimate allele frequency and call variants in heterogeneous samples. We show that our algorithm improves upon current classifiers and has higher sensitivity and specificity over a wide range of median read depth and minor allele frequency. We apply our model and identify twelve mutations in the PAXP1 gene in a matched clinical breast ductal carcinoma tumor sample, two of which are loss-of-heterozygosity events.
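As background for Bayesian allele-frequency estimation, a drastically simplified conjugate sketch (NOT the RVD2 hierarchical model itself) shows how read counts update a prior into a posterior frequency estimate. The prior parameters here are arbitrary.

```python
# Minimal Beta-Binomial illustration (not the RVD2 model): with a
# Beta(a, b) prior on the allele frequency and k variant reads out of n
# at a position, the posterior is Beta(a + k, b + n - k).

def posterior_allele_frequency(k, n, a=1.0, b=1.0):
    """Posterior mean of the allele frequency under a Beta-Binomial model
    with a Beta(a, b) prior (uniform by default)."""
    return (a + k) / (a + b + n)

# 30 variant reads at 1000x depth -> roughly a 3% minor subpopulation.
print(posterior_allele_frequency(30, 1000))  # ≈ 0.031
```

The point of the illustration is the depth dependence: at low read depth the prior dominates and low-frequency variants are hard to distinguish from sequencing error, which is the regime the thesis's hierarchical model is designed for.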
APA, Harvard, Vancouver, ISO, and other styles
34

Shintani, Christina. "Comparing Photogrammetric and Spectral Depth Techniques in Extracting Bathymetric Data from a Gravel-Bed River." Thesis, University of Oregon, 2016. http://hdl.handle.net/1794/20517.

Full text
Abstract:
Recent advances in through-water photogrammetry and optical imagery indicate that accurate, continuous bathymetric mapping may be possible in shallow, clear streams. This research directly compares the ability of through-water photogrammetry and spectral depth approaches to extract water depth for monitoring fish habitat. Imagery and cross sections were collected on a 140 meter reach of the Salmon River, Oregon, using an unmanned aerial vehicle (UAV) and rtk-GPS. Structure-from-Motion (SfM) software produced a digital elevation model (DEM) (1.5 cm) and orthophoto (0.37 cm). The photogrammetric approach of applying a site-specific refractive index provided the most accurate (mean error 0.009 m) and precise (standard deviation of error 0.17 m) bathymetric data (R2 = 0.67) over the spectral depth and the 1.34 refractive index approaches. This research provides a quantitative comparison between and within bathymetric mapping methods, and suggests that a site-specific refractive index may be appropriate for similar gravel-bed, relatively shallow, clear streams.
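The refractive-index correction named in the abstract rests on a simple relation: through-water photogrammetry underestimates true depth roughly by the refractive index of water, and a site-specific index can be calibrated from surveyed checkpoints. A minimal sketch (function names are illustrative, not the thesis's code):

```python
def correct_depth(apparent_depth, n=1.34):
    """Refraction correction for through-water photogrammetry: the
    apparent (SfM) depth underestimates true depth by roughly the
    refractive index of water (small-angle approximation)."""
    return apparent_depth * n

def site_specific_index(apparent_depths, true_depths):
    """Least-squares slope through the origin: a site-calibrated
    refractive index from surveyed (e.g. rtk-GPS) checkpoints."""
    num = sum(a * t for a, t in zip(apparent_depths, true_depths))
    den = sum(a * a for a in apparent_depths)
    return num / den
```

Fitting the slope through the origin is what makes the index "site-specific": it absorbs turbidity, bed texture, and viewing-geometry effects that push the effective index away from the physical value of 1.34.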
APA, Harvard, Vancouver, ISO, and other styles
35

Islam, Faiz ul. "In-Depth Analysis of Texas Accidents Using Data-Mining Techniques and Geo-Statistical Analyst Tools." Thesis, North Dakota State University, 2018. https://hdl.handle.net/10365/28779.

Full text
Abstract:
Traffic accidents have been a consistently growing problem in the United States. Road-safety issues have not been completely resolved and continue to pose danger to people driving on the roadways. This research used various approaches and techniques to evaluate and analyze the Texas traffic-accident dataset profoundly and meticulously. Data-mining techniques were used to analyze the accident dataset for Texas statistically, and information was collected. The resulting information suggested that Houston, Texas, was a point of persistent accidents and accounted for the most accidents of all Texas cities. Therefore, Houston was analyzed further using the geostatistical and geo-analyst tools in ArcGIS. The geostatistical analysis tools, including Space-Time analysis, identified the key hotspot locations within the city to study the overall behavior, and prediction maps were developed with the kriging tool. A similar approach can be applied to other parts of Texas and to any location in the United States.
APA, Harvard, Vancouver, ISO, and other styles
36

Ertezaei, Bahareh. "Real-Time Water Depth Logger Data as Input to PCSWMM to Estimate Tree Filter Performance." University of Toledo / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=toledo1494593810003027.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Kong, Longbo. "Accurate Joint Detection from Depth Videos towards Pose Analysis." Thesis, University of North Texas, 2018. https://digital.library.unt.edu/ark:/67531/metadc1157524/.

Full text
Abstract:
Joint detection is vital for characterizing human pose and serves as a foundation for a wide range of computer vision applications such as physical training, health care, and entertainment. This dissertation proposes two methods to detect joints in the human body for pose analysis. The first method detects joints by combining a body model with automatic feature-point detection. The human body model maps the detected extreme points to the corresponding body parts of the model and locates implicit joints; the dominant joints are then detected with a shortest-path-based method once the implicit joints and extreme points are located. The main contribution of this work is a hybrid framework for detecting joints on the human body that is robust to different body shapes and proportions, pose variations, and occlusions. Another contribution is the idea of using geodesic features of the human body to build a model that guides human pose detection and estimation. The second method first segments the human body into parts and then detects joints by focusing the detection algorithm on each limb. The advantage of applying body-part segmentation first is that it narrows the search area for each joint, so the joint detection method can provide more stable and accurate results.
APA, Harvard, Vancouver, ISO, and other styles
38

Hernández-Vela, Antonio. "From pixels to gestures: learning visual representations for human analysis in color and depth data sequences." Doctoral thesis, Universitat de Barcelona, 2015. http://hdl.handle.net/10803/292488.

Full text
Abstract:
The visual analysis of humans from images is an important topic of interest due to its relevance to many computer vision applications like pedestrian detection, monitoring and surveillance, human-computer interaction, e-health or content-based image retrieval, among others. This dissertation focuses on learning different visual representations of the human body that are helpful for the visual analysis of humans in images and video sequences. To that end, we analyze both RGB and depth image modalities and address the problem from three different research lines, at different levels of abstraction, from pixels to gestures: human segmentation, human pose estimation and gesture recognition. First, we show how binary segmentation (object vs. background) of the human body in image sequences helps to remove the background clutter present in the scene. The presented method, based on “Graph cuts” optimization, enforces spatio-temporal consistency of the produced segmentation masks among consecutive frames. Secondly, we present a framework for multi-label segmentation that obtains much more detailed segmentation masks: instead of just a binary representation separating the human body from the background, finer segmentation masks can be obtained separating the different body parts. At a higher level of abstraction, we aim for a simpler yet descriptive representation of the human body. Human pose estimation methods usually rely on skeletal models of the human body, formed by segments (or rectangles) that represent the body limbs, appropriately connected following the kinematic constraints of the human body. In practice, such skeletal models must fulfill some constraints in order to allow for efficient inference, while actually limiting the expressiveness of the model. In order to cope with this, we introduce a top-down approach for predicting the position of the body parts in the model, using a mid-level part representation based on Poselets.
Finally, we propose a framework for gesture recognition based on the bag-of-visual-words model. We leverage the benefits of the RGB and depth image modalities by combining modality-specific visual vocabularies in a late-fusion fashion. A new rotation-variant depth descriptor is presented, yielding better results than other state-of-the-art descriptors. Moreover, spatio-temporal pyramids are used to encode rough spatial and temporal structure. In addition, we present a probabilistic reformulation of Dynamic Time Warping for gesture segmentation in video sequences. A Gaussian-based probabilistic model of a gesture is learnt, implicitly encoding possible deformations in both the spatial and time domains.
APA, Harvard, Vancouver, ISO, and other styles
39

Gomes, Ana Carolina Campos [UNESP]. "Retrieval of euphotic zone and Secchi disk depth in Bariri reservoir using OLI/Landsat-8 data." Universidade Estadual Paulista (UNESP), 2018. http://hdl.handle.net/11449/153657.

Full text
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
The objective of this work was to estimate the euphotic zone (Zeu) and Secchi disk (ZSD) depths from the light attenuation coefficient (kd) using Operational Land Imager (OLI)/Landsat-8 data in the Bariri reservoir. kd, Zeu and ZSD are important water clarity parameters and are influenced by the optically significant substances (OSS). The optical characterization was carried out with data collected in two field campaigns in the dry period, here called BAR1 (August 2016) and BAR2 (June 2017), which included analysis of the inherent optical properties (IOPs) and the OSS, and the collection of radiometric data to calculate the remote sensing reflectance (Rrs). The location of the Bariri reservoir as the second in the Cascading Reservoir System (CRS) of the Tietê River promotes heterogeneity of the eutrophication levels from upstream to downstream and characterizes the reservoir as highly productive. The field campaigns presented a significant difference in chlorophyll-a concentrations ([Chl-a]), with mean values varying between 7.99 and 119.76 μg L-1 and the highest values in BAR1, a reduction of the OSS in BAR2 relative to BAR1, a predominance of organic particulate matter (OPM) in both field campaigns, and turbidity varying from 5.72 to 16.60 NTU. The absorption of chromophoric dissolved organic matter (CDOM) was dominant in both field campaigns and more expressive in BAR2. For the kd estimates, nine empirical models and three semi-analytical models were evaluated, based on radiometric data such as ratios of the blue-green and blue-red bands of the OLI/Landsat-8 sensor and on [Chl-a]. Considering that kd is an apparent optical property (AOP), a semi-analytical model based on IOPs and the angular distribution of light presented the lowest errors (mean absolute percentage error, MAPE, of 40%) compared with the empirical models based on [Chl-a] (60%) and on band ratios (80%). Using the kd estimates, models to derive Zeu and ZSD were evaluated.
For the Zeu estimates, five empirical models, based on the relation between the attenuation coefficient of photosynthetically active radiation [kd(PAR)] and kd at 490 nm [kd(490)], and one semi-analytical model, based on the radiative transfer equation, were considered; for the ZSD estimates, one semi-analytical model was tested. The empirical model of Zeu showed the better results, with an unbiased absolute percentage error (ε) of 16% against 30% for the semi-analytical model, and the estimation error for ZSD was 57%. The errors in the kd estimates revealed that the accuracy of the empirical models was affected by the CDOM influence in the Bariri reservoir, and that the semi-analytical model performed better by treating kd in accordance with its optical nature as an AOP. The ZSD estimates were also affected by the optical characteristics of Bariri, showing no correlation with the SPM in BAR2, where [Chl-a] decreased and aCDOM increased. Zeu showed better results from an empirical model calibrated with optical data similar to those of the Bariri reservoir than from the semi-analytical model developed to cover a wide range of bio-optical seasonal and regional variations. kd, Zeu and ZSD were spatially distributed using OLI/Landsat-8 images, allowing a temporal-spatial assessment of these parameters, which presented a seasonal pattern when analyzed against rainfall data. kd varied from 0.89 to 5.60 m-1 in the analyzed period (2016), and Zeu and ZSD varied between 0.30 and 7.60 m and between 0.32 and 2.95 m, respectively, for the 2014-2016 period. It can be concluded that, although CDOM affected the kd, Zeu and ZSD retrievals in the Bariri reservoir, the semi-analytical scheme was able to estimate kd with the lowest error and enabled the Zeu and ZSD estimates.
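The link between kd and the two depth parameters can be sketched with the standard definitions: Zeu is the depth where PAR falls to 1% of its surface value, giving Zeu = ln(100)/kd(PAR), while ZSD is classically modeled as inversely proportional to kd. The coupling constant below is an illustrative assumption, not the thesis's calibrated model:

```python
import math

def euphotic_depth(kd_par):
    """Euphotic zone depth: the depth at which PAR is reduced to 1% of
    its surface value, Zeu = ln(100) / kd(PAR)."""
    return math.log(100.0) / kd_par

def secchi_depth(kd, coupling=1.48):
    """Secchi disk depth from the attenuation coefficient via the
    classical inverse relation ZSD = c / kd. The constant c is
    empirical and site-dependent; 1.48 is only an example value."""
    return coupling / kd
```

Both relations make clear why errors in the kd retrieval propagate directly, and roughly proportionally, into the Zeu and ZSD estimates.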
CNPq: 131737/2016-3
FAPESP: 2012/19821-1 e 2015/21586-9
APA, Harvard, Vancouver, ISO, and other styles
40

Gomes, Ana Carolina Campos. "Retrieval of euphotic zone and Secchi disk depth in Bariri reservoir using OLI/Landsat-8 data /." Presidente Prudente, 2018. http://hdl.handle.net/11449/153657.

Full text
Abstract:
Advisor: Enner Herenio de Alcântara
Committee: Nilton Nobuhiro Imai
Committee: Milton Kampel
Abstract: The objective of this work was to estimate the euphotic zone (Zeu) and Secchi disk (ZSD) depths from the light attenuation coefficient (kd) using Operational Land Imager (OLI)/Landsat-8 data in the Bariri reservoir. kd, Zeu and ZSD are important water clarity parameters and are influenced by the optically significant substances (OSS). The optical characterization was carried out with data collected in two field campaigns in the dry period, here called BAR1 (August 2016) and BAR2 (June 2017), which included analysis of the inherent optical properties (IOPs), the OSS, and radiometric data to calculate the remote sensing reflectance (Rrs). The location of the Bariri reservoir as the second in the Cascading Reservoir System (CRS) of the Tietê River promotes heterogeneity of the eutrophication levels from upstream to downstream and characterizes the reservoir as highly productive. The field campaigns presented a significant difference in chlorophyll-a concentrations ([Chl-a]), with mean variation between 7.99 and 119.76 μg L-1 and the highest values in BAR1, a reduction of the OSS in BAR2 relative to BAR1, a predominance of organic particulate matter (OPM) in both field campaigns, and turbidity varying from 5.72 to 16.60 NTU. The absorption of chromophoric dissolved organic matter (CDOM) was dominant in both field campaigns and more expressive in BAR2. For the kd estimates, nine empirical models and three semi-analytical models based on radiometric data such as r... (Complete abstract click electronic access below)
Master's
APA, Harvard, Vancouver, ISO, and other styles
41

Photopoulou, Theoni. "Diving and depth use in seals : inferences from telemetry data using regression and random walk movement." Thesis, University of St Andrews, 2012. http://hdl.handle.net/10023/3644.

Full text
Abstract:
This thesis focuses on methods for using telemetry data to make inferences about the diving behaviour of seals, in terms of their use of depth over time. Three species are considered: grey seals (Halichoerus grypus) and elephant seals (Mirounga leonina and Mirounga angustirostris). Data came from Global Positioning System phone tags (GPS phone tags) for grey seals, and Conductivity Temperature Depth Satellite Relay Data Loggers (CTD-SRDLs) for southern elephant seals (M. leonina); both are instruments that transmit information in abstracted form. Data for northern elephant seals (M. angustirostris) came from an archival prototype SRDL-type instrument that stored tri-axial acceleration information at high resolution and required recovery to obtain the data. The usefulness of maximum dive depth as a measure of depth use in grey seals, known to forage on the seabed, is explored with a logistic regression analysis using a Generalized Additive Model. Often, maximum dive depth will not be a representative measure of the way seals apportion their time in the water column, so a framework for quantifying depth use is developed for abstracted dive data from southern elephant seals and validated with high-resolution time-depth data from northern elephant seals. The implications of using a broken-stick model for abstracting dive data on board CTD-SRDLs are investigated in terms of its performance and uncertainty. A method for obtaining limits on the time-depth area within which these abstracted dives occurred is developed and used as part of a Bayesian state-space random walk model framework to reconstruct dive trajectories and estimate depth use profiles for abstracted dive data.
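The broken-stick abstraction mentioned above compresses a dive profile to a few inflection points. A minimal greedy sketch of that idea, written in the spirit of the on-board compression rather than as the tags' actual firmware, is:

```python
def broken_stick(times, depths, n_points=4):
    """Greedy broken-stick abstraction of a time-depth profile:
    starting from the dive's endpoints, repeatedly keep the sample
    whose vertical distance from the current piecewise-linear summary
    is largest, until n_points interior points are retained."""
    kept = [0, len(times) - 1]
    while len(kept) < n_points + 2:
        pts = sorted(kept)
        best_i, best_err = None, 0.0
        for a, b in zip(pts[:-1], pts[1:]):
            for i in range(a + 1, b):
                # linear interpolation between the retained endpoints
                frac = (times[i] - times[a]) / (times[b] - times[a])
                pred = depths[a] + frac * (depths[b] - depths[a])
                err = abs(depths[i] - pred)
                if err > best_err:
                    best_i, best_err = i, err
        if best_i is None:  # profile already summarized exactly
            break
        kept.append(best_i)
    return sorted(kept)
```

The residual deviations discarded at each step are exactly what the thesis exploits to bound the time-depth area within which the true profile must lie.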
APA, Harvard, Vancouver, ISO, and other styles
42

HURD, JOHN K. JR. "A GIS MODEL TO ESTIMATE SNOW DEPTH USING DIFFERENTIAL GPS AND HIGH-RESOLUTION DIGITAL ELEVATION DATA." University of Cincinnati / OhioLINK, 2007. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1177640172.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Boyle, John K. "Performance Metrics for Depth-based Signal Separation Using Deep Vertical Line Arrays." PDXScholar, 2015. https://pdxscholar.library.pdx.edu/open_access_etds/2198.

Full text
Abstract:
Vertical line arrays (VLAs) deployed below the critical depth in the deep ocean can exploit reliable acoustic path (RAP) propagation, which provides low transmission loss (TL) for targets at moderate ranges, and increased TL for distant interferers. However, sound from nearby surface interferers also undergoes RAP propagation, and without horizontal aperture, a VLA cannot separate these interferers from submerged targets. A recent publication by McCargar and Zurk (2013) addressed this issue, presenting a transform-based method for passive, depth-based separation of signals received on deep VLAs based on the depth-dependent modulation caused by the interference between the direct and surface-reflected acoustic arrivals. This thesis expands on that work by quantifying the transform-based depth estimation method performance in terms of the resolution and ambiguity in the depth estimate. Then, the depth discrimination performance is quantified in terms of the number of VLA elements.
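The depth-dependent modulation that the transform method exploits comes from the interference between the direct and surface-reflected paths (a Lloyd's mirror pattern): the surface image source arrives with opposite sign. A minimal two-path sketch (parameter values illustrative, not the thesis's scenario):

```python
import cmath
import math

def lloyds_mirror(freq, src_depth, rcv_depth, rng, c=1500.0):
    """Magnitude of the two-path (direct + surface-reflected) field.
    The surface-reflected arrival is modeled as an image source above
    the surface with a sign flip, producing the depth-dependent
    interference pattern underlying depth-based signal separation."""
    k = 2.0 * math.pi * freq / c
    r_direct = math.hypot(rng, rcv_depth - src_depth)
    r_image = math.hypot(rng, rcv_depth + src_depth)
    p = cmath.exp(1j * k * r_direct) / r_direct \
        - cmath.exp(1j * k * r_image) / r_image
    return abs(p)
```

A source exactly at the surface cancels completely (the image coincides with the source), while a submerged source produces a nonzero, depth-modulated response, which is the physical basis for discriminating submerged targets from surface interferers.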
APA, Harvard, Vancouver, ISO, and other styles
44

Mueses-Pérez, Auristela. "Generalized non-dimensional depth-discharge rating curves tested on Florida streamflow." Scholar Commons, 2006. http://scholarcommons.usf.edu/etd/2639.

Full text
Abstract:
A generalized non-dimensional mathematical expression has been developed to describe the rating relation of depth and discharge for intermediate and high streamflow of natural and controlled streams. The expressions have been tested against observations from forty-three stations in West-Central Florida. The intermediate-flow region model has also been validated using data from thirty additional stations in the study area. The proposed model for the intermediate flow is a log-linear equation with zero intercept, and the proposed model for the high-flow region is a log-linear equation with a variable intercept. The models are normalized by the depth and discharge values at 10 percent exceedance using data published by the U.S. Geological Survey. For un-gauged applications, Q10 and d10 were derived from a relationship shown to be reasonably well correlated to the watershed drainage area, with a correlation coefficient of 0.94 for Q10 and 0.86 for d10. The average relative error for this parameter set shows that, for the intermediate-flow range, better than 50% agreement with the USGS rating data can be expected for about 86% of the stations, and for the high-flow range, better than 50% for 44% of the stations. Testing the model outside West-Central Florida, at some stations in North Florida and in South Alabama and Georgia, shows reasonable relative errors, but not as good as the results obtained for West-Central Florida. Using a model with a different slope, developed specifically for those particular stations, improved the results significantly.
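The intermediate-flow model described above, log(Q/Q10) = m·log(d/d10) with zero intercept, can be fitted with a one-parameter least-squares regression. A minimal sketch (function names are illustrative, and the synthetic numbers below are not the thesis's data):

```python
import math

def fit_intermediate_rating(depths, flows, d10, q10):
    """Zero-intercept log-linear fit of the non-dimensional rating
    curve log(Q/Q10) = m * log(d/d10): least-squares slope m."""
    xs = [math.log(d / d10) for d in depths]
    ys = [math.log(q / q10) for q in flows]
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

def predict_flow(depth, m, d10, q10):
    """Discharge predicted from depth by the fitted rating curve."""
    return q10 * (depth / d10) ** m
```

Normalizing by the 10-percent-exceedance values (Q10, d10) is what makes the curve transferable between stations; at an un-gauged site those two scalars would come from the drainage-area regressions mentioned in the abstract.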
APA, Harvard, Vancouver, ISO, and other styles
45

Quaas, Johannes, Bjorn Stevens, Philip Stier, and Ulrike Lohmann. "Interpreting the cloud cover: aerosol optical depth relationship found in satellite data using a general circulation model." Copernicus Publications, 2010. https://ul.qucosa.de/id/qucosa%3A13833.

Full text
Abstract:
Statistical analysis of satellite data shows a positive correlation between aerosol optical depth (AOD) and total cloud cover (TCC). Reasons for this relationship have been disputed in recent literature. The aim of this study is to explore how different processes contribute to one model's analog of the positive correlation between aerosol optical depth and total cloud cover seen in the satellite retrievals. We compare the slope of the linear regression between the logarithm of TCC and the logarithm of AOD, or the strength of the relationship, as derived from three satellite data sets to that simulated by a global aerosol-climate model. We analyse model results from two different simulations, with and without a parameterisation of aerosol indirect effects, and using dry compared to humidified AOD. Perhaps not surprisingly, we find that no single one of the hypotheses discussed in the literature is able to uniquely explain the positive relationship. However, the dominant contribution to the model's AOD-TCC relationship can be attributed to aerosol swelling in regions where humidity is high and clouds are coincidentally found. This finding leads us to hypothesise that much of the AOD-TCC relationship seen in the satellite data is also carried by such a process, rather than by direct effects of the aerosols on the cloud fields themselves.
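The comparison metric used above, the slope of the regression of log(TCC) on log(AOD), is a plain ordinary-least-squares slope in log-log space. A minimal sketch (the sample values in the test are synthetic, not the study's retrievals):

```python
import math

def loglog_slope(aod, tcc):
    """OLS slope of the regression of log(TCC) on log(AOD): the
    'strength' of the AOD-cloud-cover relationship that is compared
    between satellite data sets and the model."""
    xs = [math.log(a) for a in aod]
    ys = [math.log(t) for t in tcc]
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var
```

Because both variables are logged, the slope is dimensionless and directly comparable across data sets with different absolute AOD or cloud-cover levels.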
APA, Harvard, Vancouver, ISO, and other styles
46

Nell, Raymond D. "Three dimensional depth visualization using image sensing to detect artefact in space." Thesis, Cape Peninsula University of Technology, 2014. http://hdl.handle.net/20.500.11838/1199.

Full text
Abstract:
Thesis submitted in fulfilment of the requirements for the degree Doctor of Technology: Electrical Engineering in the Faculty of Engineering at the Cape Peninsula University of Technology 2014
Three-dimensional (3D) artefact detection can provide the conception of vision and real time interaction of electronic products with devices. The orientation and interaction of electrical systems with objects can be obtained. The introduction of electronic vision detection can be used in multiple applications, from industry, in robotics and also to give orientation to humans to their immediate surroundings. An article covering holograms states that these images can provide information about an object that can be examined from different angles. The limitations of a hologram are that there must be absolute immobilization of the object and the image system. Humans are capable of stereoscopic vision where two images are fused together to provide a 3D view of an object. In this research, two digital images are used to determine the artefact position in space. The application of a camera is utilized and the 3D coordinates of the artefact are determined. To obtain the 3D position, the principles of the pinhole camera, a single lens as well as two image visualizations are applied. This study explains the method used to determine the artefact position in space. To obtain the 3D position of an artefact with a single image was derived. The mathematical formulae are derived to determine the 3D position of an artefact in space and these formulae are applied in the pinhole camera setup to determine the 3D position. The application is also applied in the X-ray spectrum, where the length of structures can be obtained using the mathematical principles derived. The XYZ coordinates are determined, a computer simulation as well as the experimental results are explained. With this 3D detection method, devices can be connected to a computer to have real time image updates and interaction of objects in an XYZ coordinate system. Keywords: 3D point, xyz-coordinates, lens, hologram
APA, Harvard, Vancouver, ISO, and other styles
47

Gazkohani, Ali Esmaeily. "Exploring snow information content of interferometric SAR Data." Thèse, Université de Sherbrooke, 2008. http://savoirs.usherbrooke.ca/handle/11143/2793.

Full text
Abstract:
The objective of this research is to explore the information content of repeat-pass cross-track Interferometric SAR (InSAR) with regard to snow, in particular Snow Water Equivalent (SWE) and snow depth. The study is an outgrowth of earlier snow cover modeling and radar interferometry experiments at Schefferville, Quebec, Canada, and elsewhere, which have shown that, owing to loss of coherence, repeat-pass InSAR is not useful for snow cover mapping, even when used in differential InSAR mode. Repeat-pass cross-track InSAR would overcome this problem. Since dry snow is transparent at radar wavelengths, the main reflection is at the snow/ground interface. The high refractive index of ice creates a phase delay which is linearly related to the water equivalent of the snow pack. When wet, the snow surface is the main reflector, and this enables measurement of snow depth. Algorithms are elaborated accordingly. Field experiments were conducted at two sites and employ two different types of digital elevation models (DEMs) produced by means of cross-track InSAR. One was the Shuttle Radar Topography Mission digital elevation model (SRTM DEM), flown in February 2000. It was compared to the photogrammetrically produced Canadian Digital Elevation Model (CDEM) to examine snow-related effects at a site near Schefferville, where snow conditions are well known from half a century of snow and permafrost research. The second type of DEM was produced by means of airborne cross-track InSAR (TOPSAR). Several missions were flown for this purpose in both summer and winter conditions during NASA's Cold Land Processes Experiment (CLPX) in Colorado, USA. Differences between these DEMs were compared to snow conditions that were well documented during the CLPX field campaigns. The results are not straightforward. As a result of automated correction routines employed in both SRTM and AIRSAR DEM extraction, the snow cover signal is contaminated.
Fitting InSAR DEMs to known topography distorts the snow information, just as the snow cover distorts the topographic information. The analysis is therefore mostly qualitative, focusing on particular terrain situations. At Schefferville, where the SRTM was adjusted to known lake levels, the expected dry-snow signal is seen near such lakes. Mine pits and waste dumps not included in the CDEM are depicted, and there is also a strong signal related to the spatial variations in SWE produced by wind redistribution of snow near lakes and on the alpine tundra. In Colorado, cross-sections across ploughed roads support the hypothesis that in dry snow the SWE is measurable by differential InSAR. They also support the hypothesis that snow depth may be measured when the snow cover is wet. Difference maps were also extracted for a 1 km² Intensive Study Area (ISA) for which intensive ground truth was available. Initial comparison between estimated and observed snow properties yielded low correlations, which improved after stratification of the data set. In conclusion, the study shows that snow-related signals are measurable. For operational applications, satellite-borne cross-track InSAR would be necessary. The processing needs to be snow-specific, with appropriate filtering routines to account for influences by terrain factors other than snow.
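The linear dry-snow relation between interferometric phase and SWE can be illustrated with the standard two-way propagation geometry. This is a back-of-envelope sketch, not the thesis's algorithm: the permittivity model (a common empirical approximation for dry snow, with density in g/cm³) and the C-band parameters in the example are assumptions.

```python
# Sketch of the dry-snow InSAR phase delay: the refractive index of the
# snowpack shortens the apparent path to the snow/ground interface, giving
# a phase shift roughly linear in SWE. Permittivity model eps ~ 1 + 1.6*rho
# + 1.8*rho^3 (rho in g/cm^3) is a common empirical approximation, assumed
# here rather than taken from the thesis. Sign conventions vary between
# processors; only the magnitude and linearity matter for this sketch.
import math

def dry_snow_phase_delay(swe_m, density, wavelength_m, inc_angle_deg):
    """Two-way phase shift (radians) caused by a dry snowpack.
    swe_m: snow water equivalent in metres of water,
    density: snow density in g/cm^3,
    wavelength_m: radar wavelength in metres,
    inc_angle_deg: local incidence angle in degrees."""
    theta = math.radians(inc_angle_deg)
    snow_depth = swe_m / density                        # metres of snow
    eps = 1.0 + 1.6 * density + 1.8 * density ** 3      # dry-snow permittivity
    # Difference between the free-space and in-snow path terms:
    return -(4.0 * math.pi / wavelength_m) * snow_depth * (
        math.cos(theta) - math.sqrt(eps - math.sin(theta) ** 2))

# Example: 10 cm SWE, density 0.3 g/cm^3, C-band (5.6 cm), 23 deg incidence.
phi = dry_snow_phase_delay(0.10, 0.3, 0.056, 23.0)
```

Because the bracketed geometric term depends only on density and incidence angle, the phase is exactly proportional to SWE for fixed density, which is the property the retrieval algorithms exploit.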
APA, Harvard, Vancouver, ISO, and other styles
48

Lazcano, Vanel. "Some problems in depth enhanced video processing." Doctoral thesis, Universitat Pompeu Fabra, 2016. http://hdl.handle.net/10803/373917.

Full text
Abstract:
In this thesis we tackle two problems: the data interpolation problem in the context of depth computation, both for images and for videos, and the problem of estimating the apparent movement of objects in image sequences. The first problem deals with the completion of depth data in a region of an image or video where data are missing due to occlusions, unreliable data, damage or loss of data during acquisition. In this thesis we tackle it in two ways. First, we propose a non-local gradient-based energy which is able to complete planes locally. We consider this model as an extension of the bilateral filter to the gradient domain. We have successfully evaluated our model to complete synthetic depth images and also incomplete depth maps provided by a Kinect sensor. The second approach to tackle the problem is an experimental study of the biased Absolutely Minimizing Lipschitz Extension (biased AMLE in short) for anisotropic interpolation of depth data to big empty regions without information. The AMLE operator is a cone interpolator, but the biased AMLE is an exponential-cone interpolator, which makes it better adapted to depth maps of real scenes, which usually present soft convex or concave surfaces. Moreover, the biased AMLE operator is able to expand depth data to huge regions. By considering the image domain endowed with an anisotropic metric, the proposed method is able to take into account the underlying geometric information in order not to interpolate across the boundary of objects at different depths. We have proposed a numerical model to compute the solution of the biased AMLE which is based on the eikonal operators. Additionally, we have extended the proposed numerical model to video sequences. The second problem deals with the motion estimation of the objects in a video sequence. This problem is known as the optical flow computation. The optical flow problem is one of the most challenging problems in computer vision.
Traditional models to estimate it fail in the presence of occlusions and non-uniform illumination. To tackle these problems we propose a variational model to jointly estimate optical flow and occlusions. Moreover, the proposed model is able to deal with the usual drawback of variational methods in handling fast displacements of objects in the scene which are larger than the object itself. The addition of a term that balances gradients and intensities increases the robustness of the proposed model to illumination changes. The inclusion of supplementary matches, given by exhaustive search at specific locations, helps to follow large displacements.
This thesis addresses two problems: data interpolation in the context of disparity computation, both for images and for video, and the estimation of the apparent motion of objects in an image sequence. The first problem concerns the completion of depth data in a region of an image or video where the data have been lost due to occlusions, unreliable data, damaged data, or loss of data during acquisition. In this thesis these problems are tackled in two ways. First, an energy based on non-local gradients is proposed, an energy that can (locally) complete planes. This model is considered an extension of the bilateral filter to the gradient domain. The model has been successfully evaluated for completing synthetic data and also incomplete depth maps from a Kinect sensor. The second approach to the problem is an experimental study of the biased AMLE (Biased Absolutely Minimizing Lipschitz Extension) for the anisotropic interpolation of depth data over large regions without information. The AMLE operator is a cone interpolator, but the biased AMLE operator is an exponential-cone interpolator, which makes it better adapted to depth maps of real scenes (which commonly present smooth convex and concave surfaces). Furthermore, the biased AMLE operator can expand depth data to large regions. By endowing the image domain with an anisotropic metric, the proposed method can take underlying geometric information into account so as not to interpolate across the boundaries of objects at different depths. A numerical model based on the eikonal operator has been proposed to compute the solution of the biased AMLE. In addition, the numerical model has been extended to video sequences. Optical flow computation is one of the most challenging problems in computer vision.
Traditional models fail when estimating optical flow in the presence of occlusions or non-uniform illumination. To address this problem, a variational model is proposed to jointly estimate optical flow and occlusions. Furthermore, the proposed model can handle fast displacements of objects larger than the object itself, a traditional limitation of variational methods. The addition of a term balancing gradients and intensities increases the robustness of the proposed model to illumination changes. The inclusion of additional correspondences (obtained using exhaustive search at specific locations) helps to estimate large displacements.
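The cone-interpolator idea behind AMLE admits a short numerical sketch: iterating the discrete infinity-Laplacian fixed point u ← (max + min of neighbours)/2 at unknown pixels converges to the AMLE of the known values. This is the plain, isotropic scheme (in the spirit of Oberman's construction); the biased, anisotropic variant and the eikonal-based solver of the thesis are not reproduced here.

```python
# AMLE-style depth inpainting on a grid: at every unknown pixel, repeatedly
# replace the value by the midpoint of the largest and smallest 4-neighbour
# values. Known pixels are held fixed. Plain isotropic sketch only.

def amle_inpaint(depth, known, iters=500):
    """depth: 2D list of floats; known: 2D list of bools (True = fixed).
    Returns a filled copy of depth."""
    h, w = len(depth), len(depth[0])
    u = [row[:] for row in depth]
    for _ in range(iters):
        for i in range(h):
            for j in range(w):
                if known[i][j]:
                    continue
                nbrs = [u[i + di][j + dj]
                        for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1))
                        if 0 <= i + di < h and 0 <= j + dj < w]
                u[i][j] = 0.5 * (max(nbrs) + min(nbrs))   # infinity-Laplacian step
    return u

# On a 1-D strip with known endpoints, AMLE reduces to linear interpolation:
row = amle_inpaint([[0.0, 0.0, 0.0, 0.0, 4.0]],
                   [[True, False, False, False, True]])
```

On monotone 1-D data the max/min pair is just the two neighbours, so the iteration is Gauss-Seidel for the Laplace equation and the filled strip tends to the linear ramp 0, 1, 2, 3, 4; in 2-D the max/min structure is what produces cone-shaped solutions around isolated known points.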
APA, Harvard, Vancouver, ISO, and other styles
49

Hammond, Patrick Douglas. "Deep Synthetic Noise Generation for RGB-D Data Augmentation." BYU ScholarsArchive, 2019. https://scholarsarchive.byu.edu/etd/7516.

Full text
Abstract:
Considerable effort has been devoted to finding reliable methods of correcting noisy RGB-D images captured with unreliable depth-sensing technologies. Supervised neural networks have been shown to be capable of RGB-D image correction, but require copious amounts of carefully corrected ground-truth data to train effectively. Data collection is laborious and time-intensive, especially for large datasets, and generation of ground-truth training data tends to be subject to human error. It might be possible to train an effective method on a relatively smaller dataset using synthetically damaged depth data as input to the network, but this requires some understanding of the latent noise distribution of the respective camera. It is possible to augment datasets to a certain degree using naive noise generation, such as random dropout or Gaussian noise, but these tend to generalize poorly to real data. A superior method would imitate real camera noise to damage input depth images realistically, so that the network is able to learn to correct the appropriate depth-noise distribution. We propose a novel noise-generating CNN capable of producing realistic noise customized to a variety of different depth-noise distributions. In order to demonstrate the effects of synthetic augmentation, we also contribute a large novel RGB-D dataset captured with the Intel RealSense D415 and D435 depth cameras. This dataset pairs many examples of noisy depth images with automatically completed RGB-D images, which we use as a proxy for ground-truth data. We further provide an automated depth-denoising pipeline which may be used to produce proxy ground-truth data for novel datasets. We train a modified sparse-to-dense depth-completion network on splits of varying size from our dataset to determine reasonable baselines for improvement.
We determine through these tests that adding more noisy depth frames to each RGB-D image in the training set has a nearly identical impact on depth-completion training as gathering more ground-truth data. We leverage these findings to produce additional synthetic noisy depth images for each RGB-D image in our baseline training sets using our noise-generating CNN. Through use of our augmentation method, it is possible to achieve greater than 50% error reduction on supervised depth-completion training, even for small datasets.
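The naive augmentation baseline the abstract contrasts against, random dropout plus Gaussian jitter, can be sketched as follows. The learned noise-generating CNN itself is not reproduced, and the parameters below are illustrative assumptions, not values from the thesis:

```python
# Naive synthetic depth-noise augmentation: randomly drop measurements
# (set to 0, a common 'invalid depth' sentinel) and jitter the survivors
# with Gaussian noise. This is the simple baseline that tends to
# generalize poorly to real sensor noise, per the abstract.
import random

def naive_depth_noise(depth, dropout_p=0.05, sigma=0.01, rng=None):
    """Return a damaged copy of a 2D depth map (metres).
    dropout_p: per-pixel probability of losing the measurement,
    sigma: std-dev of additive Gaussian jitter in metres."""
    rng = rng or random.Random(0)
    out = []
    for row in depth:
        new_row = []
        for d in row:
            if d == 0.0 or rng.random() < dropout_p:
                new_row.append(0.0)                       # missing measurement
            else:
                new_row.append(max(0.0, d + rng.gauss(0.0, sigma)))
        out.append(new_row)
    return out

# Damage a flat 1 m wall with heavy dropout and no jitter:
noisy = naive_depth_noise([[1.0] * 20 for _ in range(10)],
                          dropout_p=0.5, sigma=0.0, rng=random.Random(1))
```

Already-invalid pixels (depth 0) stay invalid, mimicking the holes real depth cameras produce; a learned model would instead place structured noise near depth discontinuities and reflective surfaces.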
APA, Harvard, Vancouver, ISO, and other styles
50

Abrehdary, Majid. "Recovering Moho parameters using gravimetric and seismic data." Doctoral thesis, KTH, Geodesi och satellitpositionering, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-183577.

Full text
Abstract:
Isostasy is a key concept in geoscience to interpret the state of mass balance between the Earth's crust and mantle. There are four well-known isostatic models: the classical models of Airy/Heiskanen (A/H), Pratt/Hayford (P/H), and Vening Meinesz (VM), and the modern model of Vening Meinesz-Moritz (VMM). The first three models assume local or regional isostatic compensation, whereas the latter supposes a global isostatic compensation scheme. A more satisfactory test of isostasy is to determine the Moho interface. The Moho discontinuity (or Moho) is the surface that marks the boundary between the Earth's crust and upper mantle. Generally, the Moho interface can be mapped accurately by seismic observations, but limited coverage of seismic data and economic considerations make gravimetric or combined gravimetric-seismic methods a more realistic technique for imaging the Moho interface at either regional or global scales. The main purpose of this dissertation is to investigate an isostatic model with respect to its feasibility for recovering the Moho parameters (i.e. Moho depth and Moho density contrast). The study is mostly limited to the VMM model and to the combined approach on regional and global scales. The thesis briefly includes various investigations with the following specific subjects: 1) to investigate the applicability and quality of satellite altimetry data (i.e. marine gravity data) in Moho determination over the oceans using the VMM model, 2) to investigate the need for methodologies using gravimetric data jointly with seismic data (i.e. combined approach) to estimate both the Moho depth and Moho density contrast over regional and global scales, 3) to investigate the spherical terrain correction and its effect on the VMM Moho determination, 4) to investigate the residual isostatic topography (RIT, i.e.
the difference between actual topography and isostatic topography) and its effect on the VMM Moho estimation, 5) to investigate the application of the lithospheric thermal-pressure correction and its effect on the Moho geometry using the VMM model, 6) finally, the thesis ends with the application of the classical isostatic models for predicting the geoid height. The main input data used in the VMM model for a Moho recovery is the gravity anomaly/disturbance corrected for the gravitational contributions of mass density variations in different layers of the Earth's crust (i.e. stripping gravity corrections) and for the gravity contribution from deeper masses below the crust (i.e. non-isostatic effects). The corrections are computed using the recent seismic crustal model CRUST1.0. Our numerical investigations presented in this thesis demonstrate that 1) the VMM approach is applicable for estimating Moho geometry using a global marine gravity field derived by satellite altimetry, and that the possible mean dynamic topography in the marine gravity model does not significantly affect the Moho determination, 2) the combined approach could help in filling in the gaps in the seismic models, and it also provides a good fit to other global and regional models at more than 90 per cent of the locations, 3) despite the fact that the lateral variation of the crustal depth is rather smooth, the terrain affects the Moho result most significantly in many areas, 4) the application of the RIT correction improves the agreement of our Moho result with some published global Moho models, 5) the application of the lithospheric thermal-pressure correction improves the agreement of the VMM Moho model with some other global Moho models, 6) the geoid height cannot be successfully represented by the classical models due to many other gravitational signals from various mass variations within the Earth that affect the geoid.
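The classical Airy/Heiskanen compensation mentioned at the start of the abstract admits a one-line numerical illustration: a topographic column of height h floats on a crustal root t = h·ρc/(ρm − ρc) below the normal crust. The densities and the 30 km reference crustal thickness below are textbook assumptions, not values from the thesis:

```python
# Airy/Heiskanen isostasy sketch: topography of height h_km is compensated
# by a root thickening the crust by h * rho_c / (rho_m - rho_c). The default
# densities (2670 and 3270 kg/m^3) and 30 km normal crust are illustrative
# textbook values.

def airy_moho_depth(h_km, t0_km=30.0, rho_c=2670.0, rho_m=3270.0):
    """Moho depth (km) under topography of height h_km above sea level."""
    root = h_km * rho_c / (rho_m - rho_c)   # root thickness balancing the load
    return t0_km + root

# Example: 4 km high mountains -> root = 4 * 2670 / 600 = 17.8 km,
# so the Moho sits at roughly 47.8 km depth.
moho = airy_moho_depth(4.0)
```

This amplification of surface relief (a factor of roughly 4.5 with these densities) is exactly the kind of local compensation that the VMM model replaces with a global scheme, and that gravimetric Moho recovery tests against seismic observations.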


APA, Harvard, Vancouver, ISO, and other styles
