
Dissertations / Theses on the topic 'Pattern dimensions'


Consult the top 50 dissertations / theses for your research on the topic 'Pattern dimensions.'



Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Lopez-Bonilla, Roman Ernesto. "Object recognition in three-dimensions for robotic applications." Thesis, University of Bradford, 1990. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.305752.

2

Grant, Jeremy. "Wavelet-Based Segmentation of Fluorescence Microscopy Images in Two and Three Dimensions." Fogler Library, University of Maine, 2008. http://www.library.umaine.edu/theses/pdf/GrantJ2008.pdf.

3

Orwin, Claire Nicola. "An evaluation of the performance of an optical measurement system for the three-dimensional capture of the shape and dimensions of the human body." Thesis, De Montfort University, 2000. http://hdl.handle.net/2086/4908.

Abstract:
As the clothing industry moves away from traditional models of mass production, there has been increased interest in customised clothing. The technology to produce cost-effective customised clothing is already in place; however, the prerequisite for customised clothing is accurate body dimensional data. In response, image capture systems have been developed that are capable of recording a three-dimensional image of the body, from which measurements and shape information may be extracted. The use of these systems for customised clothing has, to date, been limited by issues of inaccuracy, cost and portability. To address the issue of inaccuracy, a diagnostic procedure has been developed through the performance evaluation of an image capture system. By systematically evaluating physical and instrumental parameters, the more relevant sources of potential error were identified, quantified and subsequently corrected to form a 'closed loop' experimental procedure. A systematic test procedure is therefore presented which may be universally applied to image capture systems working on the same principle. The methodology was based upon the isolation and subsequent testing of variables that were thought to be potential sources of error. The process therefore included altering the physical parameters of the target object in relation to the image capture system and amending the configuration and calibration settings within the system. From the evaluation, the most relevant sources of error were identified as the cosine effect, measurement point displacement, the dimensional differences between views and the influence of the operator on measurement. The test procedure proved effective both in evaluating the performance of the system under investigation and in enabling the quantification of errors. Both random and systematic errors were noted, which may be quantified or corrected to enable improved accuracy in the measured results.
Recommendations have been made for improving the performance of the current image capture system; these include the integration of a cosine effect correction algorithm and suggestions for the automation of the image alignment process. The limitations of the system, such as its reliance on manual intervention for both the measurement and stitching processes, are discussed, as is its suitability for providing dimensional information for bespoke clothing production. Recommendations are also made for the creation of an automated test procedure for testing the performance of alternative image capture systems, which involves evaluating the accuracy of object replication for both multiple and single image capture units using calibration objects that combine a range of surfaces.
4

Zegarra-Baquerizo, Hugo, Katica Moreno-Sékula, Leslie Casas-Apayco, and Hugo Ghersi-Miranda. "Mandibular condyle dimensions in Peruvian patients with Class II and Class III skeletal patterns." Universidad de Concepcion, 2017. http://hdl.handle.net/10757/622425.

Abstract:
Objective: To compare condylar dimensions of young adults with Class II and Class III skeletal patterns using cone-beam computed tomography (CBCT). Materials and methods: 124 CBCTs from 18-30 year-old patients, divided into 2 groups according to skeletal patterns (Class II and Class III) were evaluated. Skeletal patterns were classified by measuring the ANB angle of each patient. The anteroposterior diameter (A and P) of the right and left mandibular condyle was assessed from a sagittal view by a line drawn from point A (anterior) to P (posterior). The coronal plane allowed the evaluation of the medio-lateral diameter by drawing a line from point M (medium) to L (lateral); all distances were measured in mm. Results: In Class II the A-P diameter was 9.06±1.33 and 8.86±1.56 for the right and left condyles respectively, in Class III these values were 8.71±1.2 and 8.84±1.42. In Class II the M-L diameter was 17.94±2.68 and 17.67±2.44 for the right and left condyles respectively, in Class III these values were 19.16±2.75 and 19.16±2.54. Conclusion: Class III M-L dimensions showed higher values than Class II, whereas these differences were minimal in A-P.
5

Benamrane, Nacéra. "Contribution à la vision stéréoscopique par mise en correspondance de régions." Valenciennes, 1994. https://ged.uphf.fr/nuxeo/site/esupversions/f861a6a0-1e2f-489c-8859-05c0368d8969.

Abstract:
Mastery of 3D vision is a prerequisite for machine vision; stereovision relies on matching primitives extracted from two images of the same 3D scene. In this thesis, the chosen primitives are regions, which are easier to detect than segments and points. The matching of the two images is based on two original methods: a split-and-merge segmentation on the one hand, and a matching procedure on the other. Segmentation is obtained through local optimisations and a hierarchy of homogeneity criteria. With the segmentation translating each image into an adjacency tree, matching is achieved using a multi-parameter function combining photometric, topological and morpho-geometric attributes, in the form of a relational graph in which maximal compatibility (a set of rules) among the matching hypotheses expressed by this graph is sought. In addition, calibration and the uncertainties of homologous points are analysed, and the results are compared with those obtained by various other methods.
6

Alexandre, Salles. "Contemporary financial globalisation in historical perspective : dimensions, preconditions and consequences of the recent and unprecedented surge in global financial activity." Thesis, University of Hertfordshire, 2008. http://hdl.handle.net/2299/2629.

Abstract:
The subject of this thesis is financial globalisation in historical perspective, and its key contribution is to demonstrate the J-curve as an alternative depiction of financial globalisation since the classical Gold Standard period. As a preliminary and essential step, some definitions and clarifications on globalisation are provided in a literature review. Then, fundamental issues are considered in assessing financial globalisation, so that both the goals and the boundaries of the thesis are clearly stated. Throughout the historical period in debate, there were two waves of financial globalisation: the first occurring during the 1870-1914 period, and the second lasting from the end of the Bretton Woods agreements until the present day. The dominant approach in economics asserts that the degree of commercial and financial integration corresponds over time to a U-shaped pattern, i.e. markets presented high levels of integration during the forty years before WWI; this integration then collapsed in the years between the wars, recovering gradually after the Bretton Woods agreements until it again reached, in the 1990s, the pre-1914 level of integration. The thesis approaches this model with a focus on the financial side. According to the U-curve, then, contemporary financial globalisation is not unprecedented. This thesis proposes an alternative view. In contrast to the mainstream U-curve, the empirical data provided indicate that today's financial integration is unprecedented and more pervasive in some key financial markets than it was during the pre-1914 era. The empirical evidence suggests that a J-shaped pattern is a more appropriate way to interpret how financial markets have evolved since the late 19th century. The J-shape suggests that integration in some financial markets surged from the 1990s to 2005, surpassing the previous level of integration. So, in these markets, contemporary financial globalisation is unprecedented from the 1990s onwards.
The J-curve does not mean that all financial markets became more globalised during the late 20th century in comparison to the Gold Standard era, but only some that presented the U-shape from 1870 to 1995. Qualitative aspects of the J-curve are examined. The different institutional frameworks underlying each historical period are discussed revealing that new institutional arrangements, policy changes, technological advances in ICT and a wide range of financial innovations are the key driving forces that have spurred today’s financial globalisation to higher levels than in the past. Finally, the last chapter assesses the key macroeconomic implications of this new era for the world economy.
7

Liao, Guo-Jun [Verfasser], Sabine [Akademischer Betreuer] Klapp, Sabine [Gutachter] Klapp, and Felix [Gutachter] Höfling. "Self-assembly and pattern formation of complex active colloids in two dimensions / Guo-Jun Liao ; Gutachter: Sabine Klapp, Felix Höfling ; Betreuer: Sabine Klapp." Berlin : Technische Universität Berlin, 2021. http://d-nb.info/1238143105/34.

8

Jones, Mary Elizabeth Song Il-Yeol. "Dimensional modeling : identifying patterns, classifying patterns, and evaluating pattern impact on the design process /." Philadelphia, Pa. : Drexel University, 2006. http://dspace.library.drexel.edu/handle/1860/743.

9

ALAHMAD, MOUHAMAD. "Developpement de methodes de vision par ordinateur : extraction de primitives geometriques." Université Louis Pasteur (Strasbourg) (1971-2008), 1986. http://www.theses.fr/1986STR13192.

Abstract:
This thesis concerns the development of computer vision methods for extracting geometric features (centroid, area, perimeter, principal axis and orientation) in order to identify, locate and compare objects from two-dimensional images.
10

Šmíd, Jiří. "Možnosti uplatnění moderních metod při výrobě prototypových odlitků." Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2011. http://www.nusl.cz/ntk/nusl-229708.

Abstract:
The introductory part of the thesis gives an overview of rapid prototyping in the foundry industry. The principles of the most important RP methods are described, and the FDM method is analysed in more detail. This method was used in the practical part for the production of wax patterns using silicone moulds. The wax patterns were then used for the production of castings by the lost wax method. The result of this work is the determination of dimensional changes during the whole process of casting manufacture, from the drawing to the final casting.
11

Lindholm, Maria. "La Commission européenne et ses pratiques communicatives : Étude des dimensions linguistiques et des enjeux politiques des communiqués de presse." Doctoral thesis, Linköpings universitet, Institutionen för språk och kultur, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-9818.

Abstract:
The thesis investigates the European Commission’s communicative practices in the light of the press releases that are distributed daily to the world’s largest press corps in Brussels and on the Internet to other journalists and the general public. The overall aim of the thesis is to describe the text production of one of the largest text producers in the world and to highlight the linguistic dimensions of the Commission’s communicative practices, which until now have received little scholarly attention. The study adopts a dialogical perspective on communication, where communication is understood as a dynamic process in which people interact in a given context. This means that the press releases are seen as parts of the production and distribution context in which they are embedded, both on a local level and on a more general institutional level. The empirical data on which the study is based comprise field studies at the European Commission and text analyses of press releases issued by the Commission and French and Swedish ministries. The press releases are analysed on different linguistic levels, text pattern and the use of tense, on the one hand, and composition processes on the other. As an example, the production of two press releases is studied in detail, in view of the authors’ comments to and motivations for changes to the texts. With its unique insight into how a press release is drafted step by step and by the different parties involved this part of the thesis is an important contribution to research on press releases, which only recently has become more oriented towards the production process. The results of the analyses highlight the fact that the Commission, to a greater extent than the national ministries, must substantiate its argumentation and make its initiatives more comprehensible, legitimate, and motivated. 
This finding may be ascribed to the more complex communication situation of the Commission, compared to the national ministries, which served as material for comparison in the study.
12

Žuja, Jaroslav. "Optimalizace technologie výroby voskových modelů ve firmě Fimes." Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2015. http://www.nusl.cz/ntk/nusl-232109.

Abstract:
This thesis evaluates the currently used wax and compares its properties with those of a newly purchased one. Further, it evaluates whether quality is affected by the use of 'boiled wax', which is being used as a runner wax. All tests were carried out at Fimes a.s. The first part of the thesis gives a brief description of the lost wax method, followed by a description of wax blends from a technological point of view. The experimental part describes the operations from the production of wax patterns, through their dimensional inspection, up until the dewaxing process. All the results are recorded and compared.
13

Hassan, Tahir Mohammed. "Data-independent vs. data-dependent dimension reduction for pattern recognition in high dimensional spaces." Thesis, University of Buckingham, 2017. http://bear.buckingham.ac.uk/199/.

Abstract:
There has been a rapid emergence of new pattern recognition/classification techniques in a variety of real-world applications over the last few decades. In most pattern recognition/classification applications, the pattern of interest is modelled by a data vector/array of very high dimension. The main challenges in such applications relate to the efficiency of retrieving, analysing, and verifying/classifying the pattern/object of interest. The "curse of dimensionality" is a reference to these challenges and is commonly addressed by Dimension Reduction (DR) techniques. Several DR techniques have been developed and implemented in a variety of applications. The most common DR schemes are dependent on a dataset of "typical samples" (e.g. Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA)). However, data-independent DR schemes (e.g. the Discrete Wavelet Transform (DWT) and Random Projections (RP)) are becoming more desirable due to the low ratio of samples to dimension. In this thesis, we critically review both types of techniques and highlight their advantages and disadvantages in terms of efficiency and impact on recognition accuracy. We study the theoretical justification for the existence of DR transforms that preserve, within tolerable error, distances between would-be feature vectors modelling objects of interest. We observe that data-dependent DRs do not specifically attempt to preserve distances, and that the problems of overfitting and bias are consequences of the low ratio of samples to dimension. Accordingly, the focus of our investigations is on data-independent DR schemes and in particular on the different ways of generating RPs as an efficient DR tool. RPs suitable for pattern recognition applications are only restricted by a lower bound on the reduced dimension that depends on the tolerable error.
Besides the known RPs, which are generated in accordance with some probability distributions, we investigate and test the performance of differently constructed over-complete Hadamard mxn (m<
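The distance-preservation property that motivates RPs is the Johnson-Lindenstrauss lemma: a random k×d matrix approximately preserves pairwise distances once k is large enough relative to the tolerable error. As a hedged illustration only (a generic Gaussian construction, not one of the thesis's Hadamard-based RPs), a minimal NumPy sketch of a data-independent random projection:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 200, 10_000, 1_000              # samples, original dim, reduced dim

X = rng.normal(size=(n, d))               # high-dimensional "feature vectors"
R = rng.normal(size=(k, d)) / np.sqrt(k)  # data-independent random projection
Y = X @ R.T                               # reduced vectors, shape (n, k)

# Check distance preservation on one pair of points
orig = np.linalg.norm(X[0] - X[1])
proj = np.linalg.norm(Y[0] - Y[1])
distortion = proj / orig
print(round(distortion, 2))               # close to 1.0 for this k
```

Because R is generated without reference to the data, the same projection applies to any vector of dimension d, which is what makes RPs attractive when the ratio of samples to dimension is low.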
14

Burgarella, Denis. "Automatisation du traitement de spectres bidimensionnels a l'aide d'algorithmes de reconnaissance de formes : application a des spectres de galaxies (mkn 171) et d'etoiles sous-naines." Nice, 1987. http://www.theses.fr/1987NICE4161.

Abstract:
Processing of medium-dispersion spectra obtained with long-slit spectrographs equipped with a two-dimensional image tube. A pattern recognition algorithm makes it possible to process and geometrically correct the data without regard to their origin. Application to extended objects (galaxies) and compact objects (stars) serves as a test for this new automatic spectral processing. A subdwarf star (CPD-71° 172) observed in the UV by the IUE satellite is studied in detail.
15

Dannenberg, Matthew. "Pattern Recognition in High-Dimensional Data." Scholarship @ Claremont, 2016. https://scholarship.claremont.edu/hmc_theses/76.

Abstract:
Vast amounts of data are produced all the time. Yet this data does not easily equate to useful information: extracting information from large amounts of high dimensional data is nontrivial. People are simply drowning in data. A recent and growing source of high-dimensional data is hyperspectral imaging. Hyperspectral images allow for massive amounts of spectral information to be contained in a single image. In this thesis, a robust supervised machine learning algorithm is developed to efficiently perform binary object classification on hyperspectral image data by making use of the geometry of Grassmann manifolds. This algorithm can consistently distinguish between a large range of even very similar materials, returning very accurate classification results with very little training data. When distinguishing between dissimilar locations like crop fields and forests, this algorithm consistently classifies more than 95 percent of points correctly. On more similar materials, more than 80 percent of points are classified correctly. This algorithm will allow for very accurate information to be extracted from these large and complicated hyperspectral images.
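The abstract does not detail the Grassmann-manifold algorithm itself, so the following is only a loosely related sketch of the underlying subspace idea: represent each class by the span of its training samples (via an SVD) and assign a test vector to the class whose subspace reconstructs it best. All data and names here are synthetic illustrations, not the thesis's method or its hyperspectral data:

```python
import numpy as np

rng = np.random.default_rng(1)
d, r = 50, 3   # ambient (spectral) dimension, subspace rank

def make_class(center_dirs, n=20, noise=0.05):
    # synthetic points lying near span(center_dirs)
    coeffs = rng.normal(size=(n, center_dirs.shape[1]))
    return coeffs @ center_dirs.T + noise * rng.normal(size=(n, d))

basis_a = np.linalg.qr(rng.normal(size=(d, r)))[0]
basis_b = np.linalg.qr(rng.normal(size=(d, r)))[0]
train_a, train_b = make_class(basis_a), make_class(basis_b)

def subspace(X, r):
    # orthonormal basis of the best rank-r subspace of the class samples
    U, _, _ = np.linalg.svd(X.T, full_matrices=False)
    return U[:, :r]

def classify(x, subspaces):
    # pick the subspace with the smallest reconstruction residual
    resid = [np.linalg.norm(x - U @ (U.T @ x)) for U in subspaces]
    return int(np.argmin(resid))

subs = [subspace(train_a, r), subspace(train_b, r)]
test_point = make_class(basis_b, n=1)[0]
print(classify(test_point, subs))   # → 1 (class B)
```

The appeal for hyperspectral data is that a whole neighbourhood of high-dimensional spectra is summarised by one low-rank subspace, so very little training data is needed per class.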
16

Villegas, Santamaría Mauricio. "Contributions to High-Dimensional Pattern Recognition." Doctoral thesis, Universitat Politècnica de València, 2011. http://hdl.handle.net/10251/10939.

Abstract:
This thesis gathers some contributions to statistical pattern recognition particularly targeted at problems in which the feature vectors are high-dimensional. Three pattern recognition scenarios are addressed, namely pattern classification, regression analysis and score fusion. For each of these, an algorithm for learning a statistical model is presented. In order to address the difficulty that is encountered when the feature vectors are high-dimensional, adequate models and objective functions are defined. The strategy of learning simultaneously a dimensionality reduction function and the pattern recognition model parameters is shown to be quite effective, making it possible to learn the model without discarding any discriminative information. Another topic that is addressed in the thesis is the use of tangent vectors as a way to take better advantage of the available training data. Using this idea, two popular discriminative dimensionality reduction techniques are shown to be effectively improved. For each of the algorithms proposed throughout the thesis, several data sets are used to illustrate the properties and the performance of the approaches. The empirical results show that the proposed techniques perform considerably well, and furthermore the models learned tend to be very computationally efficient.
Villegas Santamaría, M. (2011). Contributions to High-Dimensional Pattern Recognition [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/10939
17

Cartwright, Paul. "Realisation of computer generated integral three dimensional images." Thesis, De Montfort University, 2000. http://hdl.handle.net/2086/13289.

18

Cheetham, Andrew. "Simulation of a multi-dimensional pattern classifier." Thesis, Nottingham Trent University, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.297128.

19

Baumgarten, Matthias. "Multi-dimensional sequential and associative pattern mining." Thesis, University of Ulster, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.412149.

20

Hoyle, Rebecca Bryony. "Instabilities of three-dimensional patterns." Thesis, University of Cambridge, 1993. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.318044.

21

Liu, Zhen. "An investigation into computerised pattern grading and 3-dimensional pattern representation for garments." Thesis, University of Leeds, 1993. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.273545.

22

Chen, Jocelyn Hua-Chu. "An investigation into 3-dimensional garment pattern design." Thesis, Nottingham Trent University, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.324557.

23

Kurra, Goutham. "Pattern Recognition in Large Dimensional and Structured Datasets." University of Cincinnati / OhioLINK, 2002. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1014322308.

24

Kanneganti, Raghuveer. "CLASSIFICATION OF ONE-DIMENSIONAL AND TWO-DIMENSIONAL SIGNALS." OpenSIUC, 2014. https://opensiuc.lib.siu.edu/dissertations/892.

Abstract:
This dissertation focuses on the classification of one-dimensional and two-dimensional signals. The one-dimensional signal classification problem involves the classification of brain signals for identifying the emotional responses of human subjects under given drug conditions. A strategy is developed to accurately classify event-related potentials (ERPs) in order to identify human emotions based on brain reactivity to emotional, neutral, and cigarette-related stimuli in smokers. A multichannel spatio-temporal model is employed to overcome the curse of dimensionality that plagues the design of parametric multivariate classifiers for multi-channel ERPs. The strategy is tested on the ERPs of 156 smokers who participated in a smoking cessation program. One half of the subjects were given nicotine patches and the other half were given placebo patches. ERPs were collected from 29 channels in response to the presentation of pictures with emotional (pleasant and unpleasant), neutral/boring, and cigarette-related content. It is shown that human emotions can be classified accurately, and the results also show that smoking cessation causes a drop in the classification accuracies of emotions in the placebo group, but not in the nicotine patch group. Given that individual brain patterns were compared with group average brain patterns, the findings support the view that individuals tend to have similar brain reactions to different types of emotional stimuli. Overall, this new classification approach to identifying differential brain responses to different emotional types could lead to new knowledge concerning brain mechanisms associated with emotions common to most or all people. This novel classification technique for identifying emotions in the present study suggests that smoking cessation without nicotine replacement results in poorer differentiation of brain responses to different emotional stimuli.
Future directions in this area would be to use these methods to assess individual differences in responses to emotional stimuli and to different drug treatments. Advantages of this and other brain-based assessments include temporal precision (e.g., 400-800 ms post stimulus) and the elimination of biases related to self-report measures. The two-dimensional signal classification problems include the detection of graphite in testing documents and the detection of fraudulent bubbles in test sheets. A strategy is developed to detect graphite responses in optical mark recognition (OMR) documents using inexpensive visible-light scanners. The main challenge in the formulation of the strategy is that the detection should be invariant to the numerous background colors and artwork in typical optical mark recognition documents. A test document is modeled as a superposition of a graphite response image and a background image. The background image in turn is modeled as a superposition of screening artwork, lines, and machine text components. A sequence of image processing operations and a pattern recognition algorithm are developed to estimate the graphite response image from a test document by systematically removing the components of the background image. The proposed strategy is tested on a wide range of scanned documents, and it is shown that the estimated graphite response images are visually similar to those scanned by the very expensive infra-red scanners currently employed for optical mark recognition. The robustness of the detection strategy is also demonstrated by testing a large number of simulated test documents. A procedure is also developed to autonomously determine whether cheating has occurred by detecting the presence of aberrant responses in scanned OMR test books. The challenges introduced by the significant imbalance in the numbers of typical and aberrant bubbles were identified. The aberrant bubble detection problem is formulated as an outlier detection problem.
A feature based outlier detection procedure in conjunction with a one-class SVM classifier is developed. A multi-criteria rank-of-rank-sum technique is introduced to rank and select a subset of features from a pool of candidate features. Using the data set of 11 individuals, it is shown that a detection accuracy of over 90% is possible. Experiments conducted on three real test books flagged for suspected cheating showed that the proposed strategy has the potential to be deployed in practice.
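The abstract names a feature-based one-class SVM for the aberrant-bubble problem. As a hedged stand-in for that pipeline (the real features and classifier are the thesis's own; everything below is a synthetic illustration), the sketch calibrates a simple Mahalanobis-distance detector on typical responses only, mirroring the one-class setting where almost no aberrant training examples exist:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical per-bubble features (e.g. darkness, fill ratio); the thesis
# selects its real features with a rank-of-rank-sum method and feeds them
# to a one-class SVM, which this simple detector merely stands in for.
typical = rng.normal(loc=0.0, scale=1.0, size=(500, 2))
aberrant = rng.normal(loc=6.0, scale=0.5, size=(10, 2))

mu = typical.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(typical, rowvar=False))

def mahalanobis(x):
    # distance of each row of x from the typical-response distribution
    d = x - mu
    return np.sqrt(np.einsum("ij,jk,ik->i", d, cov_inv, d))

# Calibrate the decision threshold on typical responses only
threshold = np.quantile(mahalanobis(typical), 0.99)
flags = mahalanobis(aberrant) > threshold
print(int(flags.sum()))   # → 10 (all aberrant bubbles flagged)
```

Fitting on the majority class alone is what makes this style of detector robust to the severe class imbalance the abstract describes.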
25

Lindén, Fredrik. "Fractal pattern recognition and recreation." Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-181224.

Abstract:
It goes without saying that in order to find oil, one must know where to look for it. In this thesis I have investigated and created new tools to find salt in the bedrock, and to recreate images according to certain parameters (fractal dimension and lacunarity). The oil prospecting company Schlumberger nowadays gathers a huge amount of seismic information, and interpreting the seismic data by hand is very time consuming. My task is to find a good way to detect salt in seismic images of the subsurface, which can then be used to classify the seismic data. The theory indicates that the salt behaves as a fractal, and by studying the fractal dimension and lacunarity we can make a prediction of where the salt may be located. I have also investigated three different recreation techniques, so that one can go from parameter values (fractal dimension and lacunarity) back to a possible recreation.
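Fractal dimension, one of the two parameters named above, is commonly estimated by box counting: cover the binary image with boxes of side s, count the occupied boxes N(s), and take the slope of log N(s) against log(1/s). A generic sketch of that standard estimator (the thesis's actual implementation is not specified in this abstract):

```python
import numpy as np

def box_count(img, s):
    # number of s×s boxes containing at least one foreground pixel
    h, w = img.shape
    boxes = img[: h - h % s, : w - w % s].reshape(h // s, s, w // s, s)
    return int((boxes.sum(axis=(1, 3)) > 0).sum())

def fractal_dimension(img, sizes=(2, 4, 8, 16, 32)):
    counts = [box_count(img, s) for s in sizes]
    # slope of log N(s) vs log(1/s) is the box-counting dimension
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

# Sanity check: a filled square has dimension 2
img = np.ones((128, 128), dtype=bool)
print(round(fractal_dimension(img), 2))   # → 2.0
```

Lacunarity, the second parameter, can be computed from the same box sums by looking at the variance of occupancy across boxes rather than just the count of non-empty ones.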
APA, Harvard, Vancouver, ISO, and other styles
26

Huang, Yi Dong. "Three dimensional measurement based on theodolite-CCD cameras." Thesis, University College London (University of London), 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.309243.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Abdullahi, Fatimah Binta. "Banded pattern mining for N-dimensional zero-one data." Thesis, University of Liverpool, 2016. http://livrepository.liverpool.ac.uk/3003955/.

Full text
Abstract:
The objective of the research presented in this thesis is to investigate and evaluate a series of techniques directed at identifying banded patterns in zero-one data, starting from 2D data and then increasing the number of dimensions to be considered to 3D and then ND. To this end the term Banded Pattern Mining (BPM) has been coined: the process of extracting hidden banded patterns from data. BPM has wide applicability in areas where the domain of interest can be represented in the form of a matrix holding 1s and 0s. Five BPM algorithms (and a number of variations) are proposed, directed at finding bandings in zero-one data: (i) 2D-BPM, (ii) Approximate 3D-BPM, (iii) Exact 3D-BPM, (iv) Approximate ND (AND) BPM and (v) Exact ND (END) BPM. This thesis describes and discusses each of these algorithms in detail. The main challenges of BPM are: (i) how best to identify bandings in 2D data sets without the need to consider large numbers of permutations, (ii) how to address situations where multiple dots may be located at individual locations in an ND zero-one (dot) data space of interest and (iii) how best to identify bandings with respect to large ND data sets that cannot be held in primary storage. To address the first issue, a banding score mechanism is proposed that avoids the need to consider large numbers of permutations; this has been incorporated into the 2D-BPM algorithm. To address the second issue, a 'multiple dot' mechanism is used in the context of both the approximate and exact BPM algorithms, allowing some cells in the data space of interest to hold more than one dot. To address the third issue, sampling and segmentation techniques are proposed to identify bandings in large ND data sets. Full evaluations of each of the BPM algorithms are presented.
For evaluation purposes the data sets used were categorised as follows: (i) randomly generated synthetic data sets, (ii) UCI data sets and (iii) a specific application: the Great Britain (GB) Cattle Tracing System (CTS) database in operation in GB, from which 5D binary-valued data sets were extracted and used. In the latter case the dimensions were: (i) records (number of animal movements), (ii) attributes, (iii) sender easting (x-coordinate holding area), (iv) sender northing (y-coordinate holding area) and (v) time (month). Other banding application domains that could have been considered include: (i) network analysis, (ii) co-occurrence analysis, (iii) VLSI chip design and (iv) graph drawing. An independent metric, Average Band Width (ABW), was proposed and used to measure the quality of bandings and provide a mechanism for the comparison of BPM algorithms. More specifically, data sets from 2003 to 2006 across four specific counties in GB were used: Aberdeenshire, Cornwall, Lancashire and Norfolk. The reported evaluation indicates that the use of approximate BPM (rather than exact BPM) produces more efficient results in terms of run-time, whilst the use of exact BPM provides promising results in terms of the quality of the bandings produced. The reported evaluation also indicates that a sound foundation has been established for future work with respect to high-performance computing variations of the proposed BPM algorithms.
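The idea of scoring a banding without enumerating permutations can be illustrated with a generic stand-in: reorder rows and columns by the barycentre (mean index) of their 1s, then score how far the 1s lie from the main diagonal. This is a common heuristic sketch, not the banding score mechanism defined in the thesis.

```python
import numpy as np

def barycentre_order(M, axis):
    """Order rows (axis=0) or columns (axis=1) by the mean position of their 1s."""
    idx = np.arange(M.shape[1 - axis])
    other = M if axis == 0 else M.T
    with np.errstate(invalid="ignore"):
        centres = (other * idx).sum(axis=1) / other.sum(axis=1)
    centres = np.nan_to_num(centres, nan=np.inf)  # empty rows/columns go last
    return np.argsort(centres)

def banding_score(M):
    """Mean distance of 1-entries from the scaled main diagonal; lower = more banded."""
    rows, cols = np.nonzero(M)
    if rows.size == 0:
        return 0.0
    r = rows / max(M.shape[0] - 1, 1)
    c = cols / max(M.shape[1] - 1, 1)
    return float(np.abs(r - c).mean())

def band(M):
    """Reorder rows then columns by barycentre and report the resulting score."""
    M = M[barycentre_order(M, axis=0)]
    M = M[:, barycentre_order(M, axis=1)]
    return M, banding_score(M)
```

A row-shuffled identity matrix, for example, is restored to a perfect band (score 0) by the reordering.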
APA, Harvard, Vancouver, ISO, and other styles
28

Vezard, Laurent. "Réduction de dimension en apprentissage supervisé : applications à l’étude de l’activité cérébrale." Thesis, Bordeaux 1, 2013. http://www.theses.fr/2013BOR15005/document.

Full text
Abstract:
The aim of this work is to develop a method able to automatically determine the state of alertness in humans. Such a method has many potential applications: for example, it could automatically detect any change in the alertness of people who must remain highly alert, such as pilots, security personnel or medical personnel. In this work, electroencephalographic (EEG) signals of 58 subjects in two distinct states of alertness (high and low) were collected using a cap with 58 electrodes, giving a binary classification problem. For this work to be usable in a real-world application, the prediction method must require only a small number of sensors (electrodes), in order to limit both the time needed to fit the electrode cap and its cost. Several approaches were developed during this thesis. A first approach applies a pre-processing step to the EEG signals based on a discrete wavelet decomposition, in order to extract the contribution of each frequency to the signal; a linear regression is then performed on the contributions of some of these frequencies, and the slope of this regression is retained as a feature. A genetic algorithm (GA) is used to optimise the choice of frequencies on which the regression is performed; the GA also allows the selection of a single electrode. A second approach is based on the Common Spatial Pattern (CSP) method, which defines linear combinations of the original variables to obtain synthetic signals useful for the classification task. In this work, a GA and sequential search methods were proposed to select a subset of electrodes to retain in the CSP computation. Finally, a sparse CSP algorithm, based on existing work on sparse principal component analysis, was developed. The results of each approach are detailed and compared. This work resulted in a model that quickly and reliably predicts the alertness state of a new individual.
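The first approach's feature, the slope of a regression over per-frequency contributions, can be sketched in plain NumPy using a Haar wavelet decomposition. The band count, the ε added before the log, and the function names are assumptions for illustration, not the thesis's exact pipeline.

```python
import numpy as np

def haar_dwt_energies(signal, levels=5):
    """Energy of each Haar detail band, returned coarsest (lowest frequency) first."""
    x = np.asarray(signal, dtype=float)
    energies = []
    for _ in range(levels):
        approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)
        detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)
        energies.append(np.sum(detail ** 2))  # finest band first while iterating
        x = approx
    return np.array(energies[::-1])

def alertness_feature(signal, levels=5):
    """Slope of a linear fit to log-energy across frequency bands."""
    e = haar_dwt_energies(signal, levels)
    slope, _ = np.polyfit(np.arange(levels), np.log(e + 1e-12), 1)
    return slope
```

For a slowly varying signal most energy sits in the coarse bands, so the fitted slope is negative; faster signals shift energy toward the fine bands and raise the slope.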
APA, Harvard, Vancouver, ISO, and other styles
29

Leibovici, Matthieu. "Pattern-integrated interference lithography for two-dimensional and three-dimensional periodic-lattice-based microstructures." Diss., Georgia Institute of Technology, 2015. http://hdl.handle.net/1853/54410.

Full text
Abstract:
Two-dimensional (2D) and three-dimensional (3D) periodic-lattice-based microstructures have found multifaceted applications in photonics, microfluidics, tissue engineering, biomedical engineering, and mechanical metamaterials. To fabricate functional periodic microstructures, in particular in 3D, current available technologies have proven to be slow and thus, unsuitable for rapid prototyping or large-volume manufacturing. To address this shortcoming, the new innovative field of pattern-integrated interference lithography (PIIL) was introduced. PIIL enables the rapid, single-exposure fabrication of 2D and 3D custom-modified periodic microstructures through the non-intuitive combination of multi-beam interference lithography and photomask imaging. The research in this thesis aims at quantifying PIIL’s fundamental capabilities and limitations through modeling, simulations, prototype implementation, and experimental demonstrations. PIIL is first conceptualized as a progression from optical interference and holography. Then, a comprehensive PIIL vector model is derived to simulate the optical intensity distribution produced within a photoresist film during a PIIL exposure. Using this model, the fabrication of representative photonic-crystal devices by PIIL is simulated and the performance of the PIIL-produced devices is studied. Photomask optimization strategies for PIIL are also studied to mitigate distortions within the periodic lattice. The innovative field of 3D-PIIL is also introduced. Exposures of photomask-integrated, photomask-shaped, and microcavity-integrated 3D interference patterns are simulated to illustrate the richness and potential of 3D-PIIL. To demonstrate PIIL experimentally, a prototype pattern-integrated interference exposure system is designed, analyzed with the optical design program ZEMAX, and used to fabricate pattern-integrated 2D square- and hexagonal-lattice periodic microstructures. 
To validate the PIIL vector model, the proof-of-concept results are characterized by scanning-electron microscopy and atomic force microscopy and compared to simulated PIIL exposures. As numerous PIIL underpinnings remain unexplored, research avenues are finally proposed. Future research paths include the design of new PIIL systems, the development of photomask optimization strategies, the fabrication of functional devices, and the experimental demonstration of 3D-PIIL.
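The multi-beam interference at the heart of PIIL can be sketched by summing plane-wave fields on a grid and taking the squared magnitude. This is a generic two-beam/N-beam illustration under scalar-field assumptions, not the thesis's vector model; the function name and parameters are illustrative.

```python
import numpy as np

def interference_intensity(wavevectors, amplitudes, X, Y):
    """Intensity |sum_i a_i exp(j k_i . r)|^2 of interfering plane waves on a 2D grid."""
    field = np.zeros(X.shape, dtype=complex)
    for (kx, ky), a in zip(wavevectors, amplitudes):
        field += a * np.exp(1j * (kx * X + ky * Y))
    return np.abs(field) ** 2

# Two counter-propagating unit-amplitude beams give fringes I = 4 cos^2(x).
x = np.linspace(0.0, 2 * np.pi, 201)
X, Y = np.meshgrid(x, x)
I = interference_intensity([(1.0, 0.0), (-1.0, 0.0)], [1.0, 1.0], X, Y)
```

Adding a third beam at 120° spacing produces the hexagonal-lattice intensity distributions mentioned in the abstract.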
APA, Harvard, Vancouver, ISO, and other styles
30

Agrafiotis, Dimitris. "Three dimensional coding and visualisation of volumetric medical images." Thesis, University of Bristol, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.271864.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Bressan, Marco José Miguel. "Statistical Independence for classification for High Dimensional Data." Doctoral thesis, Universitat Autònoma de Barcelona, 2003. http://hdl.handle.net/10803/3034.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Wellard, Richard. "Manipulation of four-dimensional objects represented within a virtual environment." Thesis, University of Warwick, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.269375.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

DOU, JIAYUN. "Harmonious Storm." Thesis, Högskolan i Borås, Institutionen Textilhögskolan, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:hb:diva-20320.

Full text
Abstract:
'Harmonious Storm', an imaginary storm in the form of a decorative lamp, is meant to hang from the ceiling, sculptural and often glowing with artificial light, while enhancing the attractive value of the specific public space. This thesis discusses the relation between the importance of aesthetics and the interior public environment, with the purpose of meeting emotional and social needs, namely excitement and happiness.
Program: Konstnärligt masterprogram i mode- och textildesign
APA, Harvard, Vancouver, ISO, and other styles
34

Gibson, Patricia. "Patterns of caring for older people : an ethnic dimension." Thesis, University of Northampton, 1999. http://nectar.northampton.ac.uk/2694/.

Full text
Abstract:
Care in the community increasingly means care by the community, i.e. the family. The present focus on informal carers by policy makers reflects their importance to the success of community care legislation (NHS and Community Care Act, 1990; Carers (Recognition and Services) Act, 1995). Much of the published information on the impact of caring has neglected the circumstances of carers from minority ethnic groups. Hence, this research explores the caring situations and experiences of informal carers of older people from Gujarati, Punjabi and white indigenous communities. Semi-structured interview schedules were used to elicit both quantitative and qualitative information from each of the three groups. Overall, the data confirmed some universal features of caring in that it is the family, and in particular women, who care for older people. However, the motivation for caring differed between the three cultural groups. Findings also showed that many of the socio-demographic characteristics in the Gujarati and Punjabi groups were similar in that they tended to be co-resident, were younger and cared for a younger age group than white indigenous carers. However, a closer look at the data on the psychosocial aspects of caring revealed some distinct differences between the two South Asian groups. Gujarati and white indigenous carers reported higher levels of morale, significantly lower levels of stress and significantly higher perceived coping abilities than the Punjabi group of carers. This latter group reported using a proportionately lower number of active coping strategies and more avoidance coping techniques than Gujarati and white indigenous carers. Punjabi carers were also significantly less satisfied with any help received from formal and informal sources than the other two groups. In light of these findings, which emphasize some distinct differences in caring circumstances, a number of recommendations are made for both policy makers and future research.
APA, Harvard, Vancouver, ISO, and other styles
35

Taslimitehrani, Vahid. "Contrast Pattern Aided Regression and Classification." Wright State University / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=wright1459377694.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Wu, Yongfeng. "New Statistical Methods to Get the Fractal Dimension of Bright Galaxies Distribution from the Sloan Digital Sky Survey Data." Fogler Library, University of Maine, 2007. http://www.library.umaine.edu/theses/pdf/WuY2007.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

蔡纓 and Ying Choi. "Improved data structures for two-dimensional library management and dictionary problems." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1996. http://hub.hku.hk/bib/B31213029.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Ruiz, Evandro Eduardo Seron. "Static and dynamic contour definition in left ventricular two-dimensional echocardiography." Thesis, University of Kent, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.294311.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Choi, Ying. "Improved data structures for two-dimensional library management and dictionary problems /." Hong Kong : University of Hong Kong, 1996. http://sunzi.lib.hku.hk/hkuto/record.jsp?B18037240.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

McMahon, Michelle J. "Three-dimensional integration of remotely sensed imagery and subsurface geological data." Thesis, University of Aberdeen, 1993. http://digitool.abdn.ac.uk:80/webclient/DeliveryManager?pid=216342.

Full text
Abstract:
The standard approach to integration of satellite imagery and sub-surface geological data has been the comparison of a map-view (two-dimensional) image interpretation with a selection of sub-surface cross-sections. The relationship between surface and subsurface geology can be better understood through quantitative three-dimensional (3-D) computer modelling. This study tests techniques to integrate a 3-D digital terrain model with 3-D sub-surface interpretations. Data types integrated, from a portion of the Paradox Basin, SE Utah, USA, include Landsat TM imagery, digital elevation data (DEM), sub-surface gravity and magnetic data, and wellbore data. Models are constructed at a variety of data resolutions. Combined modelling of basement and topographic features suggests the traditional lineament analysis approach to structural interpretation is over-simplistic. Integration of DEM and image data displayed in 3-D proved more effective for lithology discrimination than a map-view approach. Automated strike and dip interpretation algorithms require DEM data at resolutions of the order of 30 metres or better. Methods are described for the creation of fault-plane maps from three-dimensional displays of surface and subsurface data. The approach used in this study of linking existing software packages (Erdas image processing system, CPS3 mapping package and SGM and GTM three-dimensional geological modelling packages) is recommended for future studies. The methodology developed in this study is beneficial to interpretation of imagery data in frontier exploration areas.
APA, Harvard, Vancouver, ISO, and other styles
41

Healey, Timothy James. "A study of three dimensional effects in induced current impedance imaging." Thesis, University of Sheffield, 1995. http://etheses.whiterose.ac.uk/14774/.

Full text
Abstract:
Previous studies of the induced current impedance imaging technique have been unable to reconstruct images of three dimensional (3-D) structures. In this study the cause of the problem is identified and the reconstruction algorithm of Purvis is adapted to facilitate the correct reconstruction of images of a limited class of structures which have the form of a long cylinder. The images produced by the algorithm are improved by a data filter based on that of Barber, Brown and Avis. By consideration of the underlying field equations which govern 3-D induced current Electrical Impedance Tomography (EIT) systems, the finite element method (FEM) is used for the computation of the potential field for arbitrary conductivity distributions excited by various coil configurations. A phantom system is built to test the results of the FEM and particular attention is paid to the improvement of the instrumentation. A statistical comparison of the results of measurement and simulation is unable to detect any error in the FEM model. The FEM model is consequently used to develop the reconstruction algorithm, but physical measurements are also used to test the algorithm in the presence of noise. The behaviour of the 3-D algorithm is tested for its plane selectivity, showing similar characteristics to those of the injected-current systems developed by other workers. A possible approach which could both reduce the volume to which the system is sensitive and generate extra measurements for the possible reconstruction of multi-layered images is investigated.
APA, Harvard, Vancouver, ISO, and other styles
42

Deshpande, Monali A. "Automating Multiple Schema Generation using Dimensional Design Patterns." University of Cincinnati / OhioLINK, 2009. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1242762457.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Yuen, P. C. "Multi-scale representation and recognition of three dimensional surfaces using geometric invariants." Thesis, University of Surrey, 2001. http://epubs.surrey.ac.uk/979/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Acevedo, Oscar. "On the Computation of LFSR Characteristic Polynomials for One-Dimensional and Two-Dimensional Test Pattern Generation." OpenSIUC, 2014. https://opensiuc.lib.siu.edu/dissertations/893.

Full text
Abstract:
Current methodologies for built-in test pattern generation usually employ a predetermined linear feedback shift register (LFSR) in order to generate or decompress deterministic test patterns. As a direct consequence, the test pattern computation and the fault coverage are constrained to the preselected architecture. Work has been done to determine desirable characteristics in the LFSR to be used. Also, work has been done in the use of these predefined architectures in order to compact the test data. In general, these methodologies take advantage of the large number of don't-care bits present in the test patterns to accommodate the few specified bits to the output generated by the predefined LFSR. This dissertation explores the design of the LFSR as a built-in mechanism for test pattern generation in integrated circuits. The advantage of designing such devices is that the test set generation process is not constrained to a predefined LFSR mechanism, and the fault coverage is not affected. The methodologies presented in this work are based on cryptography concepts and heuristics to perform the computation. First, it is shown how these concepts can be adapted to test pattern generation. After this, methodologies are presented to generate one-dimensional and two-dimensional test sets. For the case of two-dimensional test sets, the design of phase shifters is included.
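How a characteristic polynomial drives test pattern generation can be illustrated with a minimal Fibonacci LFSR. The tap convention below is one common choice and the function name is illustrative; the dissertation's actual construction may differ.

```python
def lfsr(seed_bits, taps, n):
    """Generate n output bits from a Fibonacci LFSR.

    seed_bits: initial register contents (list of 0/1).
    taps: 1-indexed stage positions whose XOR forms the feedback bit,
          corresponding to the nonzero terms of the characteristic polynomial.
    """
    state = list(seed_bits)
    out = []
    for _ in range(n):
        out.append(state[-1])            # output the last stage
        fb = 0
        for t in taps:
            fb ^= state[t - 1]           # XOR the tapped stages
        state = [fb] + state[:-1]        # shift, inserting the feedback bit
    return out
```

With a primitive degree-4 polynomial (taps 4 and 1) the register cycles through all 15 nonzero states, so the output is an m-sequence of period 15 containing eight 1s and seven 0s.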
APA, Harvard, Vancouver, ISO, and other styles
45

Thomas, Diana M. "Pattern formation in cellular automata and three dimensional lattice dynamical systems." Diss., Georgia Institute of Technology, 1996. http://hdl.handle.net/1853/28010.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Blumenschein, Michael [Verfasser]. "Pattern-Driven Design of Visualizations for High-Dimensional Data / Michael Blumenschein." Konstanz : KOPS Universität Konstanz, 2020. http://d-nb.info/1222437333/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Wang, Zi. "Sparse multivariate models for pattern detection in high-dimensional biological data." Thesis, Imperial College London, 2015. http://hdl.handle.net/10044/1/25762.

Full text
Abstract:
Recent advances in technology have made it possible and affordable to collect biological data of unprecedented size and complexity. While analysing such data, traditional statistical methods and machine learning algorithms suffer from the curse of dimensionality. Parsimonious models, which may refer to parsimony in model structure and/or model parameters, have been shown to improve both the biological interpretability of the model and the generalisability to new data. In this thesis we are concerned with model selection in both supervised and unsupervised learning tasks. For supervised learning, we propose a new penalty called graph-guided group lasso (GGGL) and employ this penalty in penalised linear regressions. GGGL is able to integrate prior structured information with data mining, where variables sharing similar biological functions are collected into groups and the pairwise relatedness between groups is organised into a network. Such prior information guides the selection of variables that are predictive of a univariate response, so that the model selects variable groups that are close in the network and important variables within the selected groups. We then generalise the idea of incorporating network-structured prior knowledge to association studies consisting of multivariate predictors and multivariate responses and propose the network-driven sparse reduced-rank regression (NsRRR). In NsRRR, pairwise relatedness between predictors and between responses is represented by two networks, and the model identifies associations between a subnetwork of predictors and a subnetwork of responses such that both subnetworks tend to be connected. For unsupervised learning, we are concerned with a multi-view learning task in which we compare the variance of high-dimensional biological features collected from multiple sources, which are referred to as "views". We propose the sparse multi-view matrix factorisation (sMVMF), which is parsimonious in both model structure and model parameters. sMVMF can identify latent factors that regulate variability shared across all views and variability that is characteristic of a specific view, respectively. For each novel method, we also present simulation studies and an application on real biological data to illustrate the variable selection and model interpretability perspectives.
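The group-lasso building block underlying penalties such as GGGL can be sketched with proximal gradient descent: a gradient step on the least-squares loss followed by group-wise soft-thresholding, which zeroes entire groups. This is a generic sketch of the penalty family, not the thesis's GGGL algorithm (which additionally uses a network between groups); the group encoding and function names are assumptions.

```python
import numpy as np

def group_soft_threshold(w, groups, lam):
    """Proximal operator of the group-lasso penalty: shrink each group's norm."""
    out = np.zeros_like(w)
    for g in groups:
        v = w[g]
        nrm = np.linalg.norm(v)
        if nrm > lam:
            out[g] = (1.0 - lam / nrm) * v  # groups with small norm stay exactly zero
    return out

def group_lasso(X, y, groups, lam=0.1, iters=500):
    """Proximal gradient descent for 0.5*||y - Xw||^2 + lam * sum_g ||w_g||."""
    lr = 1.0 / np.linalg.norm(X, 2) ** 2    # step size from the Lipschitz constant
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        grad = X.T @ (X @ w - y)
        w = group_soft_threshold(w - lr * grad, groups, lr * lam)
    return w
```

On data where only the first group is informative, the second group's coefficients are driven exactly to zero, which is the group-level sparsity the abstract refers to.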
APA, Harvard, Vancouver, ISO, and other styles
48

Schodl, Arno. "Multi-dimensional exemplar-based texture synthesis." Diss., Georgia Institute of Technology, 2002. http://hdl.handle.net/1853/9166.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Smith, Benjamin Andrew. "Determination of Normal or Abnormal Gait Using a Two-Dimensional Video Camera." Thesis, Virginia Tech, 2007. http://hdl.handle.net/10919/31795.

Full text
Abstract:
The extraction and analysis of human gait characteristics using image sequences and the subsequent classification of these characteristics are currently an intense area of research. Recently, the focus of this research area has turned to the realm of computer vision as an unobtrusive way of performing this analysis. With such systems becoming more common, a gait analysis system that will quickly and accurately determine if a subject is walking normally becomes more valuable. Such a system could be used as a preprocessing step in a more sophisticated gait analysis system or could be used for rehabilitation purposes. In this thesis a system is proposed which utilizes a novel fusion of spatial computer vision operations as well as motion in order to accurately and efficiently determine if a subject moving through a scene is walking normally or abnormally. Specifically this system will yield a classification of the type of motion being observed, whether it is a human walking normally or some other kind of motion taking place within the frame. Experimental results will show that the system provides accurate detection of normal walking and can distinguish abnormalities as subtle as limping or walking with a straight leg reliably.
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
50

Manousakas, Ioannis. "A comparative study of segmentation algorithms applied to 2- and 3- dimensional medical images." Thesis, University of Aberdeen, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.360340.

Full text
Abstract:
A method that enables discriminating between CSF-grey matter edges and grey-white matter edges separately has been suggested. Edges from this method are more complete than those resolved by the original method and have fewer artifacts. Some edges that were undetected before are now detected because they are no longer influenced by stronger nearby edges. Texture noise is also suppressed, which allows us to work at higher spatial scales. These 3D edge detection methods proved superior to the equivalent 2D methods because they calculate the gradient more accurately, and the edges detected have better continuity which is uniformly preserved along all three directions. The split and merge technique was the second method examined. The existing algorithms need data structures whose dimensions are powers of two (quadtrees). Such a 3D method would not be practical for volume analysis because of memory limitations. For example, a 256x256x256 array of bytes is about 17 Mbytes and, since the method requires about 14 bytes per voxel, the memory sizes that computers usually have are exceeded. In order to solve this problem, an algorithm that applies a split and merge technique to non-cubic datasets has been developed. Along the x and y axes the data must have dimensions that are powers of 2, but along the z axis it is possible to have any dimension that meets the current memory limits. The method consists of three main steps: a) splitting of an initial cutset, b) merging, and c) grouping of the resulting nodes. An extra boundary elimination step is necessary to reduce the number of the resulting regions. The original method is controlled mainly by a parameter ε that is kept constant during the process. Currently, improvements that could be achieved by introducing a level of optimisation during the grouping step are being examined. Here, the grouping is done in a way that simulates the formation of a crystal during annealing, by a progressive increase (relaxing) of the parameter ε. This method has given different results from a method consisting of a split and merge step with ε = ε1 followed by a grouping step with constant ε = ε2 > ε1. At the moment, it has been difficult to establish quantitative ways of measuring any level of improvement since there is no objective segmentation to compare with. So far, the method has adequately processed up to a block of 32 256x256-sized slices and can produce 3D objects representing regions like the ventricles, the white matter or the grey matter.
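The split phase of a split-and-merge scheme controlled by a homogeneity parameter ε can be sketched as a recursive quadtree split: a block is divided into four quadrants whenever its intensity range exceeds ε. This is a generic 2D illustration, not the thesis's non-cubic 3D algorithm; the homogeneity criterion (max minus min) is an illustrative assumption.

```python
import numpy as np

def split(img, eps, x0=0, y0=0, size=None):
    """Recursively split a square image into homogeneous blocks (quadtree split).

    Returns a list of (x0, y0, size) leaves whose intensity range is <= eps.
    """
    if size is None:
        size = img.shape[0]
    block = img[y0:y0 + size, x0:x0 + size]
    if size == 1 or block.max() - block.min() <= eps:
        return [(x0, y0, size)]
    h = size // 2
    leaves = []
    for dx, dy in ((0, 0), (h, 0), (0, h), (h, h)):
        leaves += split(img, eps, x0 + dx, y0 + dy, h)
    return leaves
```

A subsequent merge/grouping pass would then fuse adjacent leaves whose combined range still satisfies the ε criterion, which is where the annealing-style relaxation of ε described above would enter.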
APA, Harvard, Vancouver, ISO, and other styles