
Dissertations / Theses on the topic 'Aeromagnetic data image processing'


Consult the top 50 dissertations / theses for your research on the topic 'Aeromagnetic data image processing.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Magaia, Luis. "Processing Techniques of Aeromagnetic Data. Case Studies from the Precambrian of Mozambique." Thesis, Uppsala universitet, Geofysik, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-183714.

Abstract:
During 2002-2006, geological field work was carried out in Mozambique. The purpose was to check the preliminary geological interpretations, to resolve the problems that arose during the compilation of preliminary geological maps, and to collect samples for laboratory studies. In parallel, airborne geophysical data were collected in many parts of the country to support the geological interpretation and the compilation of geophysical maps. In the present work, the aeromagnetic data collected in 2004 and 2005 in two small areas in the northwest of Niassa province and another in the eastern part of Tete province are analysed using Geosoft™. The processing of the aeromagnetic data began with the removal of diurnal variations and correction for the IGRF model of the Earth. The effect of height variations on the recorded magnetic field, as well as levelling and interpolation techniques, was also studied. La Porte interpolation proved to be a good tool for interpolating aeromagnetic data using the measured horizontal gradient. Depth estimation techniques were also used to obtain a semi-quantitative interpretation of geological bodies. It was shown that many features in the study areas are located at shallow depth (less than 500 m) and that few geological features are located at depths greater than 1000 m. This interpretation could be used to draw conclusions about the geology or be incorporated into further investigations in these areas.
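The basic corrections mentioned here (removing diurnal variation recorded at a base station and subtracting the IGRF reference field) can be illustrated with a minimal sketch. This is an illustrative example only, not the thesis's Geosoft workflow; the array names and the use of a precomputed IGRF value per survey point are simplifying assumptions.

```python
import numpy as np

def correct_total_field(tmi, base_station, igrf):
    """Apply basic aeromagnetic corrections to total magnetic intensity (TMI) readings.

    tmi          : measured total field along the flight line (nT)
    base_station : base-station readings synchronised to the same timestamps (nT)
    igrf         : IGRF model values at each survey point (nT)
    Returns the residual (anomaly) field in nT.
    """
    diurnal = base_station - np.mean(base_station)   # diurnal variation about the base level
    return tmi - diurnal - igrf                       # remove diurnal drift and the main field

# Toy example with made-up numbers
tmi = np.array([33512.0, 33540.0, 33498.0])
base = np.array([33210.0, 33230.0, 33205.0])
igrf = np.array([33450.0, 33450.5, 33451.0])
print(correct_total_field(tmi, base, igrf))
```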
2

Chintala, Venkatram Reddy. "Digital image data representation." Ohio : Ohio University, 1986. http://www.ohiolink.edu/etd/view.cgi?ohiou1183128563.

3

Barker, Kelly. "Improved application of remote referencing data in aeromagnetic processing : insights and applications from global geomagnetic modelling." Thesis, University of Liverpool, 2016. http://livrepository.liverpool.ac.uk/3002173/.

Abstract:
Magnetic surveys are an important method of understanding subsurface geology; however, there are several reasons why correction by remote referencing may fail, including local induced effects, activity levels of the field, and the simple distance between survey and base station. We look for ways to improve correction by remote referencing using insights from global field models and comparisons of data from a wide range of observatories. We investigate the conditions in which the behaviour of nearby observatories differs, and where the CM4 comprehensive model fails to match the observed behaviour of the local geomagnetic field. The misfits are separated by cause: those due to the activity level of the geomagnetic field and those due to the location of the observatory. We see that CM4 is a good match to observatories in the conditions it was designed for (mid-latitudes and Kp up to 2), but also that it can produce a good fit to stations outside this range (up to Kp of 3 or 4). The correlation of misfits to CM4 allows us to separate effects due to latitude and to location on the coast. Further investigation allows us to suggest some corrections that may improve the quality and extent of magnetic data gained by surveys in these locations. High-latitude stations show changes in behaviour that fall into latitudinally split groups, most likely due to the presence of induced fields from ionospheric currents. Ensuring that the base station and the survey fall into the same grouping would eliminate many of the problems this causes. Geomagnetic storms often lead to survey data being unusable due to their effects. We find that while the X-component data contain mostly storm signal, the Y and Z components at many stations contain retrievable data. The recovery period of the storm can, for most stations, be used after a regression is applied. We also consider the effects of induced fields due to the tides and the coast effect, well-known effects that can be seen at many stations. We find a correction for the dominant M2 tidal effect using cosine waves. We also find an approximate correction for the coast effect, using cosine or sine waves of the Sq period as appropriate for the station pair chosen. It is also noted that small differences in location can have a large effect on the induced fields, as seen at GUI and TAM, where storms seem to have a smaller than expected effect.
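A correction of the kind described for the dominant M2 tidal constituent can be sketched as a least-squares fit of a cosine at the M2 period (about 12.42 hours). This is only an illustration of the idea, not the thesis's actual procedure; the variable names and the synthetic data are assumptions.

```python
import numpy as np

M2_PERIOD_H = 12.4206  # dominant lunar semi-diurnal tidal period, in hours

def fit_m2(t_hours, signal):
    """Least-squares fit of a single M2-period cosine (via a sine/cosine pair) plus offset."""
    w = 2.0 * np.pi / M2_PERIOD_H
    design = np.column_stack([np.cos(w * t_hours), np.sin(w * t_hours), np.ones_like(t_hours)])
    coeffs, *_ = np.linalg.lstsq(design, signal, rcond=None)
    return design @ coeffs  # the fitted M2 component (plus offset)

# Toy example: remove a synthetic M2 signal from hourly magnetometer residuals
t = np.arange(0.0, 72.0, 1.0)
obs = 3.0 * np.cos(2 * np.pi * t / M2_PERIOD_H + 0.4) + np.random.normal(0, 0.3, t.size)
corrected = obs - fit_m2(t, obs)
```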
4

Dahlberg, Tobias. "Distributed Storage and Processing of Image Data." Thesis, Linköpings universitet, Databas och informationsteknik, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-85109.

Abstract:
Systems operating in a medical environment need to maintain high standards of availability and performance. Large numbers of images are stored and studied to determine what is wrong with a patient, which places demanding requirements on image storage. In this thesis, ways of incorporating distributed storage into a medical system are explored. Products inspired by the success of Google, Amazon and others are experimented with and compared to the current storage solutions. Several “non-relational databases” (NoSQL) are investigated for storing medically relevant image metadata, while a set of distributed file systems are considered for storing the actual images. Distributed processing of the stored data is investigated by using Hadoop MapReduce to generate a useful model of the images' metadata.
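The kind of metadata aggregation mentioned at the end of the abstract can be illustrated with a minimal map/reduce-style sketch in plain Python. It only mimics the shape of a Hadoop MapReduce job without depending on Hadoop, and the record fields (modality, body_part) are invented for the example.

```python
from collections import defaultdict

# Toy image-metadata records (fields are invented for illustration)
records = [
    {"modality": "CT", "body_part": "head"},
    {"modality": "MR", "body_part": "knee"},
    {"modality": "CT", "body_part": "chest"},
]

def mapper(record):
    # Emit a (key, 1) pair per record, as a MapReduce mapper would
    yield (record["modality"], 1)

def reducer(key, values):
    # Sum the counts for one key, as a MapReduce reducer would
    return key, sum(values)

# Shuffle/sort phase simulated with a dictionary
grouped = defaultdict(list)
for rec in records:
    for key, value in mapper(rec):
        grouped[key].append(value)

counts = dict(reducer(k, v) for k, v in grouped.items())
print(counts)  # {'CT': 2, 'MR': 1}
```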
5

JULARDZIJA, MIRHET. "Processing RAW image data in mobile units." Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-27724.

6

Grosse, Neil G. "Image processing of Red Sea geophysical data." Thesis, University of Newcastle Upon Tyne, 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.308392.

7

Parker, Greg. "Robust processing of diffusion weighted image data." Thesis, Cardiff University, 2014. http://orca.cf.ac.uk/61622/.

Abstract:
The work presented in this thesis comprises a proposed robust diffusion weighted magnetic resonance imaging (DW-MRI) pipeline, each chapter detailing a step designed to ultimately transform raw DW-MRI data into segmented bundles of coherent fibre ready for more complex analysis or manipulation. In addition to this pipeline we also demonstrate, where appropriate, ways in which each step could be optimized for the maxillofacial region, setting the groundwork for a wider maxillofacial modelling project intended to aid surgical planning. Our contribution begins with RESDORE, an algorithm designed to automatically identify corrupt DW-MRI signal elements. While slower than the closest alternative, RESDORE is also far more robust to localised changes in SNR and pervasive image corruptions. The second step in the pipeline concerns the retrieval of accurate fibre orientation distribution functions (fODFs) from the DW-MRI signal. Chapter 4 comprises a simulation study exploring the application of spherical deconvolution methods to 'generic' fibre, finding that the commonly used constrained spherical harmonic deconvolution (CSHD) is extremely sensitive to calibration but, if handled correctly, might be able to resolve muscle fODFs in vivo. Building upon this information, Chapter 5 conducts further simulations and in vivo image experimentation demonstrating that this is indeed the case, allowing us to demonstrate, for the first time, anatomically plausible reconstructions of several maxillofacial muscles. To complete the proposed pipeline, Chapter 6 then introduces a method for segmenting whole-volume streamline tractographies into anatomically valid bundles. In addition to providing an accurate segmentation, this shape-based method does not require the computationally expensive inter-streamline comparisons employed by other approaches, allowing the algorithm to scale linearly with the number of streamlines in the dataset. This is not often true for comparison-based methods, which at best scale in higher-order linear time but more often with O(N²) complexity.
8

Benjamin, Jim Isaac. "Quadtree algorithms for image processing /." Online version of thesis, 1991. http://hdl.handle.net/1850/11078.

9

Bhatt, Mittal Gopalbhai. "Detecting glaucoma in biomedical data using image processing /." Link to online version, 2005. https://ritdml.rit.edu/dspace/handle/1850/939.

10

Turkmen, Muserref. "Digital Image Processing Of Remotely Sensed Oceanographic Data." Master's thesis, METU, 2008. http://etd.lib.metu.edu.tr/upload/12609948/index.pdf.

Abstract:
Developing remote sensing instrumentation allows information about an area to be obtained rapidly and at low cost. This offers a challenge to remote sensing algorithms aimed at extracting information about an area from the available remote sensing data, a very typical and important problem being the interpretation of satellite images. A very efficient approach to remote sensing is to employ discriminant functions to distinguish different landscape classes in satellite images. Various methods in this direction have already been studied; however, the efficiency of the studied methods is still not very high. In this thesis we will improve the efficiency of remote sensing algorithms. In addition, we will investigate improved boundary detection methods for satellite images.
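A discriminant-function classifier of the kind described can be sketched with scikit-learn's linear discriminant analysis. The sketch below is illustrative only (the band values and class labels are made up) and is not the method developed in the thesis.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Made-up training pixels: rows are spectral band values, labels are land-cover classes
X_train = np.array([[0.12, 0.30, 0.45], [0.10, 0.28, 0.43],   # water
                    [0.35, 0.55, 0.20], [0.33, 0.52, 0.22],   # vegetation
                    [0.50, 0.48, 0.47], [0.52, 0.50, 0.49]])  # bare soil
y_train = np.array(["water", "water", "veg", "veg", "soil", "soil"])

clf = LinearDiscriminantAnalysis()
clf.fit(X_train, y_train)

# Classify two unseen pixels
print(clf.predict(np.array([[0.11, 0.29, 0.44], [0.34, 0.54, 0.21]])))
```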
11

Tsanakas, Panagiotis D. "Algorithms and data structures for hierarchical image processing." Ohio : Ohio University, 1985. http://www.ohiolink.edu/etd/view.cgi?ohiou1184075678.

12

Khanna, Rajiv. "Image data compression using multiple bases representation." Thesis, This resource online, 1990. http://scholar.lib.vt.edu/theses/available/etd-12302008-063722/.

13

Dickinson, Keith William. "Traffic data capture and analysis using video image processing." Thesis, University of Sheffield, 1986. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.306374.

14

Kidane, Dawit K. "Rule-based land cover classification model : expert system integration of image and non-image spatial data." Thesis, Stellenbosch : Stellenbosch University, 2005. http://hdl.handle.net/10019.1/50445.

Abstract:
Thesis (MSc)--Stellenbosch University, 2005.
ENGLISH ABSTRACT: Remote sensing and image processing tools provide speedy and up-to-date information on land resources. Although remote sensing is the most effective means of land cover and land use mapping, it is not without limitations. The accuracy of image analysis depends on a number of factors, of which the image classifier used is probably the most significant. It is noted that there is no perfect classifier, but some robust classifiers achieve higher accuracy results than others. For certain land cover/uses, discrimination based only on spectral properties is extremely difficult and often produces poor results. The use of ancillary data can improve the classification process. Some classifiers incorporate ancillary data before or after the classification process, which limits the full utilization of the information contained in the ancillary data. Expert classification, on the other hand, makes better use of ancillary data by incorporating data directly into the classification process. In this study an expert classification model was developed based on spatial operations designed to identify a specific land cover/use, by integrating both spectral and available ancillary data. Ancillary data were derived either from the spectral channels or from other spatial data sources such as DEM (Digital Elevation Model) and topographical maps. The model was developed in ERDAS Imagine image-processing software, using the expert engineer as a final integrator of the different constituent spatial operations. An attempt was made to identify the Level I land cover classes in the South African National Land Cover classification scheme hierarchy. Rules were determined on the basis of expert knowledge or statistical calculations of mean and variance on training samples. Although rules could be determined by using statistical applications, such as the classification analysis regression tree (CART), the absence of adequate and accurate training data for all land cover classes and the fact that all land cover classes do not require the same predictor variables makes this option less desirable. The result of the accuracy assessment showed that the overall classification accuracy was 84.3% and kappa statistics 0.829. Although this level of accuracy might be suitable for most applications, the model is flexible enough to be improved further.
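The kind of rule-based, per-class classification described above can be sketched as simple threshold rules combining a spectral index with ancillary terrain data. The thresholds, NDVI/slope rules and class names below are hypothetical illustrations, not the rules used in the thesis.

```python
import numpy as np

def classify_pixel(ndvi, slope_deg, brightness):
    """Toy expert-system rules combining a spectral index (NDVI) with ancillary DEM slope.

    All thresholds are hypothetical and only illustrate the rule-based idea.
    """
    if ndvi < 0.0 and brightness < 0.15:
        return "water"
    if ndvi > 0.5 and slope_deg < 20.0:
        return "cultivated"
    if ndvi > 0.3:
        return "natural vegetation"
    if brightness > 0.4:
        return "bare ground"
    return "urban/built-up"

# Application over toy per-pixel values
ndvi = np.array([-0.1, 0.6, 0.35, 0.05])
slope = np.array([2.0, 5.0, 25.0, 3.0])
bright = np.array([0.10, 0.20, 0.25, 0.45])
print([classify_pixel(n, s, b) for n, s, b in zip(ndvi, slope, bright)])
```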
15

Bao, Shunxing. "Algorithmic Enhancements to Data Colocation Grid Frameworks for Big Data Medical Image Processing." Thesis, Vanderbilt University, 2019. http://pqdtopen.proquest.com/#viewpdf?dispub=13877282.

Abstract:

Large-scale medical imaging studies to date have predominantly leveraged in-house, laboratory-based or traditional grid computing resources for their computing needs, where the applications often use hierarchical data structures (e.g., Network File System file stores) or databases (e.g., COINS, XNAT) for storage and retrieval. Results for laboratory-based approaches reveal that performance is impeded by standard network switches, since typical processing can saturate network bandwidth during transfer from storage to processing nodes even for moderate-sized studies. The grid, on the other hand, may be costly to use due to the dedicated resources required to execute the tasks and the lack of elasticity. With the increasing availability of cloud-based big data frameworks, such as Apache Hadoop, cloud-based services for executing medical imaging studies have shown promise.

Despite this promise, our studies have revealed that existing big data frameworks exhibit different performance limitations for medical imaging applications, which calls for new algorithms that optimize their performance and suitability for medical imaging. For instance, Apache HBase's data distribution strategy of region split and merge is detrimental to the hierarchical organization of imaging data (e.g., project, subject, session, scan, slice). Big data medical image processing applications involving multi-stage analysis often exhibit significant variability in processing times, ranging from a few seconds to several days. Due to the sequential nature of executing the analysis stages with traditional software technologies and platforms, any errors in the pipeline are only detected at the later stages, despite the sources of errors predominantly being the highly compute-intensive first stage. This wastes precious computing resources and incurs prohibitively higher costs for re-executing the application. To address these challenges, this research proposes a framework - Hadoop & HBase for Medical Image Processing (HadoopBase-MIP) - which develops a range of performance optimization algorithms and employs a number of system behavior models for data storage, data access and data processing. We also describe how to build prototypes to support empirical verification of system behavior. Furthermore, we present a discovery made during the development of HadoopBase-MIP: a new type of contrast for deep brain structure enhancement in medical imaging. Finally, we show how to carry the Hadoop-based framework design forward into a commercial big data / high-performance computing cluster with a cheap, scalable and geographically distributed file system.
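The hierarchical organisation mentioned above (project, subject, session, scan, slice) is typically preserved in an HBase-style store by packing those levels into the row key so that related slices sort together. The fixed-width scheme below is an illustrative assumption, not the key design used by HadoopBase-MIP.

```python
def make_row_key(project, subject, session, scan, slice_idx):
    """Build a fixed-width, hierarchical row key so that rows for the same
    project/subject/session/scan sort contiguously in a key-ordered store."""
    return "{:>8}:{:>8}:{:>4}:{:>4}:{:06d}".format(project, subject, session, scan, slice_idx)

key = make_row_key("proj01", "subj0042", "s01", "T1w", 127)
print(key)                      # '  proj01:subj0042: s01: T1w:000127'
prefix = key.rsplit(":", 1)[0]  # scan-level prefix usable for a range scan
```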

16

Collaer, Marcia Lee. "Image Data Compression: Differential Pulse Code Modulation of Tomographic Projections." Thesis, The University of Arizona, 1985. http://hdl.handle.net/10150/291412.

17

Connolly, Christine. "Image segmentation from colour data for industrial applications." Thesis, University of Huddersfield, 1990. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.304640.

18

Hague, Darren S. "Neural networks for image data compression : improving image quality for auto-associative feed-forward image compression networks." Thesis, Brunel University, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.262478.

19

Shen, Shan. "MRI brain tumour classification using image processing and data mining." Thesis, University of Strathclyde, 2004. http://oleg.lib.strath.ac.uk:80/R/?func=dbin-jump-full&object_id=21543.

Abstract:
Detecting and diagnosing brain tumour types quickly and accurately is essential to any effective treatment. The general brain tumour diagnosis procedure, biopsy, not only causes a great deal of pain to the patient but also poses operational difficulties for the clinician. In this thesis, a non-invasive brain tumour diagnosis system based on MR images is proposed. The first part is image preprocessing applied to the original MR images from the hospital. Non-uniform intensity scales of MR images are standardized based on their statistical characteristics, without requiring prior or post templates. This is followed by a non-brain region removal process using morphological operations and a contrast enhancement between white matter and grey matter by means of histogram equalization. The second part is image segmentation applied to the preprocessed MR images. A new image segmentation algorithm named IFCM is developed based on the traditional FCM algorithm. Neighbourhood attractions considered in IFCM make this new algorithm insensitive to noise, while a neural network model is designed to determine optimized degrees of attraction. This extension can also estimate inhomogeneities. Brain tissue intensities are acquired from the segmentation. The final part of the system is brain tumour classification. It extracts hidden diagnostic information from brain tissue intensities using a fuzzy-logic-based GP algorithm. This novel method uses fuzzy membership to implement multi-class classification directly, without converting it into several binary classification problems as most other methods do. Two fitness functions are defined to describe the features of medical data precisely. The superiority of the image analysis methods in each part was demonstrated on synthetic images and real MR images. Classification rules for three types and two grades of brain tumours were discovered. The final diagnosis accuracy was very promising. The feasibility and capability of the non-invasive diagnosis system were demonstrated comprehensively.
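For readers unfamiliar with the baseline, a minimal fuzzy c-means (FCM) membership/centroid update is sketched below on 1-D intensities. The IFCM algorithm in the thesis additionally weights memberships by neighbourhood attractions, which this illustration deliberately omits; the parameter values are assumptions.

```python
import numpy as np

def fcm_1d(intensities, n_clusters=3, m=2.0, n_iter=50, seed=0):
    """Plain fuzzy c-means on a 1-D array of voxel intensities (illustration only)."""
    rng = np.random.default_rng(seed)
    x = intensities.astype(float).reshape(-1, 1)
    u = rng.random((x.shape[0], n_clusters))
    u /= u.sum(axis=1, keepdims=True)              # memberships sum to 1 per voxel
    for _ in range(n_iter):
        um = u ** m
        centers = (um.T @ x) / um.sum(axis=0).reshape(-1, 1)  # weighted centroids
        dist = np.abs(x - centers.T) + 1e-12                  # voxel-to-centre distances
        inv = dist ** (-2.0 / (m - 1.0))
        u = inv / inv.sum(axis=1, keepdims=True)              # standard FCM membership update
    return u, centers.ravel()

u, c = fcm_1d(np.array([20, 22, 25, 120, 125, 200, 210, 205]))
print(np.round(c))
```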
20

Greenaway, Richard Scott. "Image processing and data analysis algorithm for application in haemocytometry." Thesis, University of Hertfordshire, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.263063.

21

Shih, Daphne Yong-Hsu. "A data path for a pixel-parallel image processing system." Thesis, Massachusetts Institute of Technology, 1995. http://hdl.handle.net/1721.1/40570.

Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1996.
Includes bibliographical references (p. 65).
by Daphne Yong-Hsu Shih.
M.Eng.
22

Koutsogiannis, Vassilis. "A study of color image data compression /." Online version of thesis, 1992. http://hdl.handle.net/1850/11060.

23

Marshall, Dana T. "The exploitation of image construction data and temporal/image coherence in ray traced animation /." Full text (PDF) from UMI/Dissertation Abstracts International, 2001. http://wwwlib.umi.com/cr/utexas/fullcit?p3008386.

24

遲秉壯 and Ping-chong Chee. "Hand-printed Chinese character recognition and image preprocessing." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1996. http://hub.hku.hk/bib/B31213972.

25

Chee, Ping-chong. "Hand-printed Chinese character recognition and image preprocessing /." Hong Kong : University of Hong Kong, 1996. http://sunzi.lib.hku.hk/hkuto/record.jsp?B18597579.

26

Chung, Vera Yuk Ying. "Real-time image processing techniques using custom computing." Thesis, Queensland University of Technology, 2000.

27

Alevärn, Marcus. "Server-side image processing in native code compared to client-side image processing in WebAssembly." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-300138.

Abstract:
Today, companies are developing processor-demanding applications on the web, for example 3D visualization software and video and audio software. Some of these companies have a native desktop application written in, for example, C++. These C++ codebases can consist of several hundred thousand lines of code, and companies would therefore like to reuse their codebase in the web version of the software. This thesis makes a performance comparison between two different approaches that can be taken to reuse the C++ codebase. The first approach is to compile the C++ codebase to WebAssembly and run it on the client side. The second approach is to compile the C++ codebase to native code and run it on the server side. It is not clear which approach to take if the goal is to achieve low execution times. This thesis will therefore answer the question of whether a client-side solution in WebAssembly is faster than a server-side solution in native code. To answer this question, this project work looked at one use case, namely image processing. Two different web applications were developed: one that did image processing on the server side in native code, and another that did image processing on the client side in WebAssembly. Execution time measurements were collected from both web applications. The results showed that for most algorithms WebAssembly was a factor of 1.5 slower than native code, without considering the additional delay from the internet that will affect the web application that performs image processing on the server side. If this delay is taken into account, the web application that performs image processing on the client side in WebAssembly will be faster than the server-side solution in native code for most users in the world. If the Round-Trip Time (RTT) is 1 ms, the required average throughput needed to make the two web applications equally fast is 249 Mbps (Google Chrome) or 226 Mbps (Firefox). Most users in the world do not have such a high average throughput.
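The break-even condition described above can be reproduced in a few lines: the client-side (WebAssembly) time equals the server-side time plus the network cost of shipping the image. The processing time, slowdown factor, image size and RTT below are made-up placeholders, not measurements from the thesis.

```python
def breakeven_throughput_mbps(t_native_s, wasm_slowdown, image_mbits, rtt_s):
    """Throughput (Mbit/s) at which server-side native processing and
    client-side WebAssembly processing take the same total time.

    t_native_s * wasm_slowdown  ==  t_native_s + rtt_s + image_mbits / throughput
    """
    extra_compute = t_native_s * (wasm_slowdown - 1.0) - rtt_s
    if extra_compute <= 0:
        return float("inf")  # the server-side version can never catch up
    return image_mbits / extra_compute

# Hypothetical numbers for illustration only
print(breakeven_throughput_mbps(t_native_s=0.20, wasm_slowdown=1.5, image_mbits=24.0, rtt_s=0.001))
```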
28

Yan, Hui. "Data analytics and crawl from hidden web databases." Thesis, University of Macau, 2015. http://umaclib3.umac.mo/record=b3335862.

29

Carter, Leigh. "Image data compression by division of subpictures into classes." Thesis, University of Cambridge, 1988. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.303089.

30

Sander, Samuel Thomas. "Retargetable compilation for variable-grain data-parallel execution in image processing." Diss., Georgia Institute of Technology, 2002. http://hdl.handle.net/1853/13850.

31

Wear, Steven M. "Shift-invariant image reconstruction of speckle-degraded images using bispectrum estimation /." Online version of thesis, 1990. http://hdl.handle.net/1850/11219.

32

Ardila, Bernal Pablo Andres. "Augmented Reality Stationary Platform Controlled by Image Processing Interface." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-254878.

Abstract:
The development of dynamic and user-friendly embedded platforms for game rooms requires resource efficient software that successfully delivers a satisfying experience. Noran AB wants to make it inclusive and handicap-oriented. Augmented reality integrated with image recognition powered by OpenCV offers a plausible solution, where a virtual robot can be controlled through head movements. A functional prototype is developed and experiments with users are forwarded and documented. The focus of this work is to identify and evaluate criteria to manipulate the performance of the image processing algorithm for feature detection. This has been established knowing that an individual image is usually scanned by a relatively small filter. Reducing the image size would therefore reduce the area to scan. The time the algorithm would take to go through the whole matrix of pixels would be directly reduced. From user input, the sequence of images has been shrunk from 307 200 down to 117 pixels going through 10 intermediate steps, the average time to detect a face for a specific frame size is registered. A correlation between the size and the time to detect the face should be found. This is compared with the detection rate of each frame size, which confirms if in this case the algorithm was successfully executed with respect to the original resolution. Detection performance can, therefore, be improved by achieving a higher detection speed with no hardware enhancements. The experience is also analyzed through feedback and behavior of the user, both collected or observed during the experiments. Results lead to finding a lower image size that still satisfies the same detection rate as the original resolution and to figure out how the interface can be further improved.
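Downscaling frames before running a Haar-cascade face detector, as in the experiments above, looks roughly like the OpenCV sketch below. The scale factor and cascade choice are assumptions for illustration, not the platform's actual code.

```python
import cv2

# Load OpenCV's bundled frontal-face Haar cascade
cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_faces(frame_bgr, scale=0.25):
    """Detect faces on a downscaled copy of the frame and map boxes back to full size."""
    small = cv2.resize(frame_bgr, (0, 0), fx=scale, fy=scale)
    gray = cv2.cvtColor(small, cv2.COLOR_BGR2GRAY)
    boxes = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return [(int(x / scale), int(y / scale), int(w / scale), int(h / scale)) for x, y, w, h in boxes]

cap = cv2.VideoCapture(0)          # default webcam
ok, frame = cap.read()
if ok:
    print(detect_faces(frame))
cap.release()
```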
33

Kumar, Mrityunjay. "Model based image fusion." Diss., Connect to online resource - MSU authorized users, 2008.

34

Pan, Jian Jia. "EMD/BEMD improvements and their applications in texture and signal analysis." HKBU Institutional Repository, 2013. https://repository.hkbu.edu.hk/etd_oa/75.

Abstract:
The combination of the well-known Hilbert spectral analysis (HSA) and the recently developed Empirical Mode Decomposition (EMD), designated the Hilbert-Huang Transform (HHT) by Huang in 1998, represents a paradigm shift in data analysis methodology. The HHT is designed specifically for analyzing nonlinear and nonstationary data. The key part of HHT is EMD, with which any complicated data set can be decomposed into a finite and often small number of Intrinsic Mode Functions (IMFs). In two dimensions, bidimensional IMFs (BIMFs) are obtained by means of bidimensional EMD (BEMD). However, the HHT has some limitations in signal and image processing, and this thesis addresses the problems of using HHT for these tasks. To reduce the end effect in EMD, we propose a boundary extension method: a linear-prediction-based method combined with boundary extrema information is employed to extend the signal, which reduces the end effect in the EMD sifting process. It is a simple and effective method. In the EMD decomposition, the interpolation method is another key factor in obtaining ideal components. The envelope mean in EMD is computed from the upper and lower envelopes by cubic spline interpolation, which suffers from overshooting and is time-consuming. Based on linear (straight-line) interpolation, we propose using the extrema information to obtain the mean envelope, yielding Extrema Mean Empirical Mode Decomposition (EMEMD). The mean envelope produced by EMEMD is smoother than that of EMD, the undershooting and overshooting problems of cubic splines are reduced, and the computational complexity is also reduced. Experimental results show that the IMFs of EMEMD present more, and clearer, time-frequency information than EMD, and the Hilbert spectrum of EMEMD is also clearer and more meaningful. Furthermore, based on the EMEMD procedure, a fast method to detect the locations of frequency changes in piecewise stationary signals is also proposed: Extrema Points Empirical Mode Decomposition (EPEMD). Two applications based on the improved EMD/BEMD methods are then proposed. One application is texture classification in image processing. A saddle-point-augmented BEMD is developed to supply multi-scale components (BIMFs), and the Riesz transform is used to obtain the frequency-domain characteristics of these BIMFs. Based on the Local Binary Pattern (LBP) descriptor, two new features (based on BIMFs and on Monogenic-BIMF signals) are developed. On these new multi-scale and frequency-domain components, the LBP descriptor achieves better performance than on the original image. Experimental results show that the texture image recognition rates of our methods are better than those of other texture feature methods. The other application is forecasting of one-dimensional time series. EMEMD combined with a Local Linear Wavelet Neural Network (LLWNN) is proposed for signal forecasting. The architecture is a decomposition-trend detection-forecasting-ensemble methodology. The EMEMD-based decomposition forecasting method decomposes the time series into its basic components, and more accurate forecasts are obtained. In short, the main contributions of this thesis are summarized as follows: 1. A boundary extension method is developed for one-dimensional EMD, based on linear prediction and end-point adjustment; it reduces the end effect in EMD. 2. A saddle-point-augmented BEMD is developed to analyse and classify texture images; this new BEMD detects more high-frequency oscillations in the BIMFs and contributes to texture analysis. 3. A new texture analysis and classification method is proposed, based on BEMD (with or without saddle points), LBP and the Riesz transform. Texture features based on BIMFs and on the BIMFs' frequency-domain 2D monogenic phase are developed, and performance comparisons on the Brodatz, KTH-TIPS2a, CUReT and Outex databases are reported. 4. An improved EMD method, EMEMD, is proposed to overcome the shortcomings of the interpolation. EMEMD provides more meaningful IMFs and is also a fast decomposition method; decomposition results and analyses on a simulated temperature signal, compared with the Fourier and wavelet transforms, are reported. 5. A forecasting methodology based on EMEMD and LLWNN is proposed, with a decomposition-trend detection-forecasting-ensemble architecture; predicted results for the Hong Kong Hang Seng Index and the Global Land-Ocean Temperature Index are reported.
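To make the sifting idea concrete, here is a minimal sketch of one EMD sifting iteration using cubic-spline envelopes. It is a textbook illustration of standard EMD, not the EMEMD or boundary-extension methods proposed in the thesis, and it ignores the end effects those methods address.

```python
import numpy as np
from scipy.signal import argrelextrema
from scipy.interpolate import CubicSpline

def sift_once(t, x):
    """One EMD sifting step: subtract the mean of the spline envelopes through the extrema."""
    maxima = argrelextrema(x, np.greater)[0]
    minima = argrelextrema(x, np.less)[0]
    if len(maxima) < 2 or len(minima) < 2:
        return x  # not enough extrema to build envelopes
    upper = CubicSpline(t[maxima], x[maxima])(t)   # upper envelope
    lower = CubicSpline(t[minima], x[minima])(t)   # lower envelope
    return x - (upper + lower) / 2.0               # candidate IMF after one sift

t = np.linspace(0, 1, 500)
signal = np.sin(2 * np.pi * 25 * t) + 0.5 * np.sin(2 * np.pi * 3 * t)
h1 = sift_once(t, signal)
```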
35

van, der Gracht Joseph. "Partially coherent image enhancement by source modification." Diss., Georgia Institute of Technology, 1991. http://hdl.handle.net/1853/13379.

36

Lee, Jae-Min. "Characterization of spatial and temporal brain activation patterns in functional magnetic resonance imaging data." [Gainesville, Fla.] : University of Florida, 2005. http://purl.fcla.edu/fcla/etd/UFE0013024.

37

Mainguy, Yves. "A robust variable order facet model for image data." Thesis, This resource online, 1991. http://scholar.lib.vt.edu/theses/available/etd-10222009-124949/.

38

Zhu, Hui, and 朱暉. "Deformable models and their applications in medical image processing." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1998. http://hub.hku.hk/bib/B31238075.

39

Pivovarník, Marek. "New Approaches in Airborne Thermal Image Processing for Landscape Assessment." Doctoral thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2017. http://www.nusl.cz/ntk/nusl-263356.

Abstract:
Airborne thermal hyperspectral data carry a wealth of information about the temperature and emissivity of the Earth's surface. Estimating these parameters from remotely sensed thermal radiation requires solving an underdetermined system of equations. Several approaches to this problem have been proposed, the most widespread being the algorithm known as Temperature and Emissivity Separation (TES). This work has two main goals: 1) improving the TES algorithm and 2) implementing it in a processing chain for image data acquired by the TASI sensor. The TES algorithm can be improved by replacing the Normalized Emissivity Module it uses with a component based on smoothing the spectral characteristics of the measured radiance. The new module is designated Optimized Smoothing for Temperature Emissivity Separation (OSTES). The OSTES algorithm is attached to the processing chain for image data from the TASI sensor. Testing on simulated data showed that using OSTES leads to more accurate estimates of temperature and emissivity. OSTES was further tested on data acquired by the ASTER and TASI sensors. In these cases, however, no significant improvement could be observed, owing to imperfect atmospheric corrections. Nevertheless, the emissivity values obtained with OSTES exhibit more homogeneous properties than those of the standard ASTER product.
40

Goldschneider, Jill R. "Lossy compression of scientific data via wavelets and vector quantization /." Thesis, Connect to this title online; UW restricted, 1997. http://hdl.handle.net/1773/5881.

41

Martin, Ian John. "Multi-spectral image segmentation and compression." Thesis, University of Warwick, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.343123.

42

Lindstrom, Peter. "Model simplification using image and geometry-based metrics." Diss., Georgia Institute of Technology, 2000. http://hdl.handle.net/1853/8208.

43

ALMEIDA, Marcos Antonio Martins de. "Statistical analysis applied to data classification and image filtering." Universidade Federal de Pernambuco, 2016. https://repositorio.ufpe.br/handle/123456789/25506.

Abstract:
Statistical analysis is a tool of wide applicability in several areas of scientific knowledge. This thesis makes use of statistical analysis in two different applications: data classification and image processing targeted at document image binarization. In the first case, this thesis presents an analysis of several aspects of the consistency of the classification of the senior researchers in computer science of the Brazilian research council, CNPq - Conselho Nacional de Desenvolvimento Científico e Tecnológico. The second application of statistical analysis developed in this thesis addresses filtering out the back-to-front interference that appears whenever a document is written or typed on both sides of translucent paper. In this topic, an assessment of the most important algorithms found in the literature is made, taking into account a large number of parameters such as the strength of the back-to-front interference, the diffusion of the ink in the paper, and the texture and hue of the paper due to aging. A new binarization algorithm is proposed, which is capable of removing the back-to-front noise in a wide range of documents. Additionally, this thesis proposes a new concept of “intelligent” binarization for complex documents, which, besides text, encompass several graphical elements such as figures, photos, diagrams, etc.
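As a point of reference for the binarization problem discussed above, a plain global Otsu threshold (a common baseline that typically struggles with back-to-front interference) can be applied in a few lines. This sketch is not the algorithm proposed in the thesis, and the file path is a placeholder.

```python
import cv2

# Load a scanned document page as greyscale (path is a placeholder)
page = cv2.imread("document_page.png", cv2.IMREAD_GRAYSCALE)
if page is None:
    raise FileNotFoundError("document_page.png not found")

# Global Otsu threshold: often leaves back-to-front interference behind
# when the verso ink shows through faintly
_, binary = cv2.threshold(page, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

cv2.imwrite("document_page_binarized.png", binary)
```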
44

Beaton, Duncan. "Integration of data description and quality information using metadata for spatial data and spatial information systems." Thesis, University of Newcastle Upon Tyne, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.321263.

45

Waite, Martin. "Data structures for the reconstruction of engineering drawings." Thesis, Nottingham Trent University, 1989. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.328794.

46

Hudson, James. "Processing large point cloud data in computer graphics." Connect to this title online, 2003. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1054233187.

Abstract:
Thesis (Ph. D.)--Ohio State University, 2003.
Title from first page of PDF file. Document formatted into pages; contains xix, 169 p.; also includes graphics (some col.). Includes bibliographical references (p. 159-169). Available online via OhioLINK's ETD Center
47

Klintström, Eva. "Image Analysis for Trabecular Bone Properties on Cone-Beam CT Data." Doctoral thesis, Linköpings universitet, Centrum för medicinsk bildvetenskap och visualisering, CMIV, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-142066.

Abstract:
Trabecular bone structure, as well as bone mineral density (BMD), affects the biomechanical competence of bone. Osteoporosis-related fractures have been shown to involve disconnections in the trabecular network as well as low bone mineral density. Imaging of bone parameters is therefore important in detecting osteoporosis. One available imaging device is cone-beam computed tomography (CBCT), often used in pre-operative imaging for dental implants, for which the trabecular network is also of great importance. Fourteen or fifteen trabecular bone specimens from the radius were imaged for this in vitro project. The imaging data from one dual-energy X-ray absorptiometry (DXA), two multi-slice computed tomography (MSCT), one high-resolution peripheral quantitative computed tomography (HR-pQCT) and four CBCT devices were segmented using in-house developed code based on homogeneity thresholding. Seven trabecular microarchitecture parameters, as well as two trabecular bone stiffness parameters, were computed from the segmented data. Measurements from micro-computed tomography (micro-CT) data of the same bone specimens were regarded as the gold standard. Correlations between MSCT and micro-CT data varied greatly, depending on the device, the imaging parameters and the bone parameter considered; only the bone-volume fraction (BV/TV) parameter was stable, with strong correlations. For both HR-pQCT and CBCT, the correlations to micro-CT were strong for bone structure parameters as well as bone stiffness parameters. The CBCT device 3D Accuitomo showed the strongest correlations, but overestimated BV/TV more than threefold compared to micro-CT. The imaging protocol most often used in clinical practice at our clinic demonstrated strong correlations as well as a low radiation dose. CBCT data of trabecular bone can thus be used for analysing trabecular bone properties, such as bone microstructure and bone biomechanics, showing strong correlations to the micro-CT reference method. The results depend on the choice of CBCT device as well as the segmentation method used. The in-house developed code based on homogeneity thresholding is appropriate for CBCT data. The overestimation of BV/TV must be considered when estimating bone properties in future clinical dental implant and osteoporosis research.
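Of the parameters mentioned, the bone-volume fraction is the simplest to compute once the volume has been segmented into bone and background: it is just the proportion of bone voxels. The sketch below assumes a binary 3-D numpy array and is only an illustration, not the in-house segmentation code referred to above.

```python
import numpy as np

def bone_volume_fraction(segmented):
    """BV/TV: fraction of voxels labelled as bone in a binary 3-D volume."""
    segmented = np.asarray(segmented, dtype=bool)
    return np.count_nonzero(segmented) / segmented.size

# Toy 3-D volume: a random binary field standing in for a segmented CBCT cube
rng = np.random.default_rng(1)
volume = rng.random((64, 64, 64)) > 0.8
print(f"BV/TV = {bone_volume_fraction(volume):.3f}")
```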
48

Liao, Haiyong. "Computational methods for bioinformatics and image restoration." HKBU Institutional Repository, 2010. http://repository.hkbu.edu.hk/etd_ra/1103.

49

Burger, James. "Hyperspectral NIR image analysis : data exploration, correction, and regression /." Umeå : Unit of Biomass Technology and Chemistry, Swedish University of Agricultural Sciences, 2006. http://epsilon.slu.se/200660.pdf.

50

Strohbeck, Uwe. "A new approach in image data compression by multiple resolution frame-processing." Thesis, Northumbria University, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.245827.
