To see the other types of publications on this topic, follow the link: Master thesis. Image retrieval.

Dissertations / Theses on the topic 'Master thesis. Image retrieval'

Consult the top 31 dissertations / theses for your research on the topic 'Master thesis. Image retrieval.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Brolin, Morgan. "Automatic Change Detection in Visual Scenes." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-301611.

Full text
Abstract:
This thesis proposes a Visual Scene Change Detector (VSCD), a system with four parts: image retrieval, image registration, image change detection and panorama creation. Two prestudies are conducted in order to select an image retrieval method and an image change detection method. The two selected methods are then combined with a proposed image registration method and a proposed panorama creation method to form the proposed VSCD. The image retrieval prestudy compares a Scale-Invariant Feature Transform (SIFT) related method with a Bag of Words (BoW) related method and finds the SIFT-related method to be superior. The image change detection prestudy evaluates eight different image change detection methods. Its results show that the methods' performance depends on the image category, and that an ensemble method is the least dependent on the category of images. The ensemble method is found to be the best performing method, followed by a range-filter method and then a Convolutional Neural Network (CNN) method. Combining the two image retrieval methods with the eight image change detection methods yields sixteen VSCD variants, all of which are tested. The final results show that the VSCD composed of the best methods from the prestudies is the best performing one.
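The ensemble idea described above, combining several change-detection methods so that the result depends less on the image category, can be sketched as a per-pixel majority vote. This is an illustrative sketch only; the thesis does not specify its ensemble rule, and the binary-mask format here is an assumption.

```python
# Hypothetical sketch: majority-vote ensemble over binary change masks,
# one mask per change-detection method (1 = changed pixel, 0 = unchanged).
def ensemble_change_mask(masks):
    """Combine per-method binary masks by per-pixel majority vote."""
    n_methods = len(masks)
    height, width = len(masks[0]), len(masks[0][0])
    combined = [[0] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            votes = sum(mask[y][x] for mask in masks)
            combined[y][x] = 1 if votes * 2 > n_methods else 0
    return combined

# Three toy 2x2 masks: the top-left pixel is flagged by two of three methods.
masks = [
    [[1, 0], [0, 0]],
    [[1, 1], [0, 0]],
    [[0, 0], [0, 1]],
]
print(ensemble_change_mask(masks))  # [[1, 0], [0, 0]]
```

Only pixels flagged by a strict majority of methods survive, which is what makes the combined detector less sensitive to any single method's category-specific failures.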
APA, Harvard, Vancouver, ISO, and other styles
2

Nahar, Vikas. "Content based image retrieval for bio-medical images." Diss., Rolla, Mo. : Missouri University of Science and Technology, 2010. http://scholarsmine.mst.edu/thesis/pdf/Nahar_09007dcc80721e0b.pdf.

Full text
Abstract:
Thesis (M.S.)--Missouri University of Science and Technology, 2010. Vita. The entire thesis text is included in the file. Title from title screen of thesis/dissertation PDF file (viewed Dec. 23, 2009). Includes bibliographical references (p. 82-83).
APA, Harvard, Vancouver, ISO, and other styles
3

Andersson, Kristina. "Evaluation of uncertainties in sub-volume based image registration : master of science thesis in medical radiation physics." Thesis, Umeå universitet, Institutionen för fysik, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-38638.

Full text
Abstract:
Physicians often utilize different imaging techniques to provide clear visual information about internal parts of the patient. Since the different imaging modalities give different types of information, combining them serves as a powerful tool when determining the diagnosis, planning treatment or during therapy follow-up. To simplify the interpretation of the image information, image registration is often used. The goal of the registration is to put different images into a common coordinate system, and it is essential that the registration between the images is accurate. Normalized Mutual Information (NMI) is a metric that quantifies the conformity between images. Even though NMI is a robust method, it is often dominated by large structures such as the external contour of the patient and the structures of the bones. The prostate is an organ that does not have a fixed position relative to the other organs and holds only a small amount of image information. The accuracy of the registration with respect to the prostate is therefore limited when the whole image volume is used. This master's thesis investigates the possibility of restricting the part of the image used for registration to a small volume around the prostate, with the goal of achieving a better registration of the prostate than if full-sized images are used. A registration program utilizing NMI was written and optimized in MATLAB. Four Magnetic Resonance (MR) series and one Computed Tomography (CT) series were acquired over the pelvic area of five patients diagnosed with prostate cancer. The prostate was delineated by a physician. By adding margins to the delineations, five differently sized Regions of Interest (ROI) were created. The smallest ROI precisely covered the prostate while the largest covered the whole image. The deviation in Center of Mass (CoM) between the images and the Percentage Volume Overlap (PVO) were calculated and used as measures of alignment.
The registrations performed with sub-volumes showed an improvement compared to those that used the full volume when registering an MR image to another MR image. In one third of the cases a 2 cm margin to the prostate is preferable; a 3 cm margin is the most favorable option in another third of the cases. The use of sub-volumes to register MR images to CT series turned out to be unpredictable, with poor accuracy. Full-sized image registration between MR image pairs has high precision but, due to the motion of the prostate, poor accuracy. As a result of the high information content in the MR images, both high precision and high accuracy can be achieved by the use of sub-volume registration. CT images do not contain the same amount of image information around the prostate, and the sub-volume based registrations between MR and CT images are hence inconsistent, with low precision.
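The NMI metric mentioned above can be sketched in a few lines. This is a generic illustration over flattened, pre-binned intensity lists; the actual program computed NMI over resampled 3-D sub-volumes in MATLAB, and the formula shown, NMI = (H(A) + H(B)) / H(A, B), is the common definition, assumed rather than quoted from the thesis.

```python
import math
from collections import Counter

def entropy(counts, total):
    """Shannon entropy (in nats) of a discrete distribution given by counts."""
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

def normalized_mutual_information(img_a, img_b):
    """NMI = (H(A) + H(B)) / H(A, B) over paired intensity samples.
    img_a, img_b: equal-length flat lists of (binned) intensities."""
    total = len(img_a)
    h_a = entropy(Counter(img_a).values(), total)
    h_b = entropy(Counter(img_b).values(), total)
    h_ab = entropy(Counter(zip(img_a, img_b)).values(), total)
    return (h_a + h_b) / h_ab if h_ab > 0 else 2.0

# Identical images give the maximum NMI of 2; a registration algorithm
# searches for the transform that maximizes this value.
a = [0, 0, 1, 1, 2, 2]
print(normalized_mutual_information(a, a))
```

Restricting the lists to a sub-volume around the prostate is exactly what stops the large, information-rich structures (body contour, bone) from dominating this score.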
APA, Harvard, Vancouver, ISO, and other styles
4

Almquist, Camilla. "Implementation of an automated, personalized model of the cardiovascular system using 4D Flow MRI." Thesis, Linköpings universitet, Avdelningen för kardiovaskulär medicin, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-154496.

Full text
Abstract:
A personalized cardiovascular lumped parameter model of the left-sided heart and the systemic circulation has been developed by the cardiovascular medicine research group at Linköping University. It provides information about hemodynamics, some of which could otherwise only have been retrieved by invasive measurements. The framework for personalizing the model is built on 4D Flow MRI data, containing volumes describing anatomy and velocities in three directions. Thus far, the inputs to this model have been generated manually for each subject. This is a slow and tedious process, impractical for clinical use and unfeasible for many subjects. This project aims to develop a tool to calculate the inputs and run the model for multiple subjects automatically. It is based on 4D Flow MRI data sets segmented to identify the locations of the left atrium (LA), left ventricle (LV) and aorta, along with the corresponding structures on the right side. The process of making this tool started with the calculation of the inputs. Planes were placed at the relevant positions, at the mitral valve, the aortic valve (AV) and in the ascending aorta upstream of the brachiocephalic branches, and flow rates were calculated through them. The AV plane was used to calculate the effective orifice area of the AV and the aortic cross-sectional area, while the LV end-systolic and end-diastolic volumes were extracted from the segmentation. The tool was evaluated by comparison with manually created inputs and outputs, using nine healthy volunteers and one patient deemed to have normal left ventricular function. The patient was chosen from a subject group diagnosed with chronic ischemic heart disease and/or a history of angina, together with fulfillment of the high-risk score for cardiovascular diseases of the European Society of Cardiology. The data were evaluated using the coefficient of variation, Bland-Altman plots and the sum of squared errors.
The tool was also evaluated visually on some subjects with pathologies of interest. This project shows that it is possible to calculate the inputs fully automatically from segmented 4D Flow MRI and run the cardiovascular avatar without user interaction. The developed method seems to be in good to moderate agreement with the manual approach, and it could be the basis for further development of the model.
APA, Harvard, Vancouver, ISO, and other styles
5

Southard, Spencer. "Designing 2D Interfaces For 3D Gesture Retrieval Utilizing Deep Learning." UNF Digital Commons, 2017. https://digitalcommons.unf.edu/etd/774.

Full text
Abstract:
Gesture retrieval can be defined as the process of retrieving the correct meaning of a hand movement from a pre-assembled gesture dataset. The purpose of the research discussed here is to design and implement a gesture interface system that facilitates retrieval for an American Sign Language gesture set using a mobile device. The principal challenge discussed here is the normalization of 2D gestures generated from the mobile device interface and the 3D gestures captured from video samples into a common data structure that can be utilized by deep learning networks. This thesis covers the convolutional neural networks and autoencoders which are used to transform 2D gestures into the correct form before they are classified by a convolutional neural network. The architecture and implementation of the front-end and back-end systems and their respective responsibilities are discussed. Lastly, this thesis covers the results of the experiment, breaks down the final classification accuracy of 83%, and discusses how this work could be further improved by using depth-based videos for the 3D data.
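The normalization challenge described above, mapping a raw 2D stroke into a fixed-size structure a network can consume, might look roughly like the following resampling step. The point count, unit-box scaling and function name are illustrative assumptions, not the thesis's actual pipeline.

```python
import math

def normalize_gesture(points, n_points=16):
    """Resample a 2-D gesture polyline to n_points evenly spaced samples,
    then scale it into the unit box so all gestures share one structure."""
    # Cumulative arc length along the stroke.
    dists = [0.0]
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        dists.append(dists[-1] + math.hypot(x1 - x0, y1 - y0))
    total = dists[-1]
    # Evenly spaced resampling by linear interpolation.
    resampled = []
    for i in range(n_points):
        target = total * i / (n_points - 1)
        j = next((k for k in range(1, len(dists)) if dists[k] >= target),
                 len(dists) - 1)
        t = (target - dists[j - 1]) / (dists[j] - dists[j - 1])
        x = points[j - 1][0] + t * (points[j][0] - points[j - 1][0])
        y = points[j - 1][1] + t * (points[j][1] - points[j - 1][1])
        resampled.append((x, y))
    # Scale into the unit box (degenerate extents fall back to 1.0).
    xs = [p[0] for p in resampled]
    ys = [p[1] for p in resampled]
    w = (max(xs) - min(xs)) or 1.0
    h = (max(ys) - min(ys)) or 1.0
    return [((x - min(xs)) / w, (y - min(ys)) / h) for x, y in resampled]

# A straight horizontal stroke resampled to three points.
print(normalize_gesture([(0, 0), (10, 0)], n_points=3))
```

Once every gesture is a fixed-length, scale-invariant point list, 2D touch strokes and 3D video trajectories can feed the same downstream classifier.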
APA, Harvard, Vancouver, ISO, and other styles
6

Wright, Tracy L. "Body Image and Healthy Lifestyle Behavior Among University Students." UNF Digital Commons, 2012. http://digitalcommons.unf.edu/etd/402.

Full text
Abstract:
Children develop beliefs about ideal body image and carry these perceptions into adulthood. Consequences of poor body image may include decreased self-esteem, depression, unhealthy lifestyle, and eating disorders. Understanding healthy lifestyle behaviors and the relationship between body image and these behaviors can empower individuals to engage in behaviors to improve health. Pender’s health promotion model provided the theoretical framework for this study. The purpose of this study was to identify the relationship between body image and healthy lifestyle behaviors among undergraduate university students. An email was sent to undergraduate students, providing a link to the survey that included: demographic, body dissatisfaction, and screen time questions; Prochaska’s physical activity screening measure; and a lifestyle profile by Walker, Sechrist, and Pender. A total of 1056 usable surveys were returned. The majority (71%) were satisfied with their body image, although many (60.3%) wanted to alter it. Most (65.1%) had a normal BMI. Sedentary activity was more than the recommended amount, with only 23.3% meeting physical activity guidelines. Healthy lifestyle behaviors were engaged in “sometimes” and “often, but not routinely.” Body image was correlated with healthy lifestyle behaviors. There was a moderate correlation between activity and body image, and a negative correlation between sedentary activity and healthy lifestyle behaviors.
APA, Harvard, Vancouver, ISO, and other styles
7

Bishell, Aaron. "Designing application-specific processors for image processing : a thesis presented in partial fulfilment of the requirements for the degree of Master of Science in Computer Science, Massey University, Palmerston North, New Zealand." Massey University, 2008. http://hdl.handle.net/10179/1024.

Full text
Abstract:
Implementing a real-time image-processing algorithm on a serial processor is difficult to achieve because such a processor cannot cope with the volume of data in the low-level operations. However, a parallel implementation, required to meet timing constraints for the low-level operations, results in low resource utilisation when implementing the high-level operations. These factors suggested a combination of parallel hardware for the low-level operations and a serial processor for the high-level operations when implementing a high-level image-processing algorithm. Several types of serial processors were available. A general-purpose processor requires an extensive instruction set to be able to execute any arbitrary algorithm, resulting in a relatively complex instruction decoder and possibly extra functional units (FUs). An application-specific processor, which was considered in this research, implements just enough FUs to execute a given algorithm and implements a simpler, and more efficient, instruction decoder. In addition, an algorithm's behaviour on a processor could be represented either in hardware (i.e. hardwired logic), which limits the ability to modify the algorithm behaviour of a processor, or in “software” (i.e. programmable logic), which enables external sources to specify the algorithm behaviour. This research investigated hardware- and software-controlled application-specific serial processors for the implementation of high-level image-processing algorithms and compared these against parallel hardware and general-purpose serial processors. It was found that application-specific processors are easily able to meet the timing constraints imposed by real-time high-level image processing. In addition, the software-controlled processors had additional flexibility, a performance penalty of 9.9% and 36.9%, and inconclusive footprint savings (and costs) when compared to hardware-controlled processors.
APA, Harvard, Vancouver, ISO, and other styles
8

Paul, Nathan J. "Creating a user-friendly multiple natural disaster database with a functioning display using Google mapping systems a thesis presented to the Department of Geology and Geography in candidacy for the degree of Master of Science /." Diss., Maryville, Mo. : Northwest Missouri State University, 2009. http://www.nwmissouri.edu/library/theses/paulnathanj/index.htm.

Full text
Abstract:
Thesis (M.S.)--Northwest Missouri State University, 2009. The full text of the thesis is included in the pdf file. Title from title screen of full text.pdf file (viewed on April 9, 2010). Includes bibliographical references.
APA, Harvard, Vancouver, ISO, and other styles
9

Giles, James A. "Analysis of Digital Logic Schematics Using Image Recognition." UNF Digital Commons, 1997. http://digitalcommons.unf.edu/etd/425.

Full text
Abstract:
This thesis presents the results of research in the area of automated recognition of digital logic schematics. The adaptation of a number of existing image processing techniques for use with this kind of image is discussed, and the concept of using sets of tokens to represent the overall drawing is explained in detail. Methods are given for using tokens to describe schematic component shapes, to represent the connections between components, and to provide sufficient information to a parser so that an equation can be generated. A Microsoft Windows-based test program which runs under Windows 95 or Windows NT has been written to implement the ideas presented. This program accepts either scanned images of digital schematics or computer-generated images in Microsoft Windows bitmap format as input. It analyzes the input schematic image for content and produces a corresponding logical equation as output. It also provides the functionality necessary to build and maintain an image token library.
APA, Harvard, Vancouver, ISO, and other styles
10

Weston, Stuart Duncan. "Development of Very Long Baseline Interferometry (VLBI) techniques in New Zealand array simulation, image synthesis and analysis : a thesis submitted to Auckland University of Technology in fulfilment of the requirements for the degree of Master of Philosophy (MPhil), 2008 /." Click here to access this resource online, 2008. http://hdl.handle.net/10292/449.

Full text
Abstract:
This thesis presents the design and development of a process to model Very Long Baseline Interferometry (VLBI) aperture synthesis antenna arrays. In line with the aims of the Auckland University of Technology (AUT) Institute for Radiophysics and Space Research (IRSR) to develop the knowledge, skills and experience within New Zealand, extensive use of existing radio astronomical software has been incorporated into the process, namely AIPS (Astronomical Imaging Processing System), MIRIAD (a radio interferometry data reduction package) and DIFMAP (a program for synthesis imaging of visibility data from interferometer arrays of radio telescopes). This process has been used to model various antenna array configurations for two proposed New Zealand antenna sites in a VLBI array configuration with existing Australian facilities and a possible antenna at Scott Base in Antarctica; the results are presented to demonstrate the improvement to be gained by joint trans-Tasman VLBI observation. It is hoped these results and the process will assist the planning and placement of proposed New Zealand radio telescopes for cooperation with groups such as the Australian Long Baseline Array (LBA), others in the Pacific Rim and possibly globally, as well as potential future involvement of New Zealand with the SKA. The developed process has also been used to model a phased building schedule for the SKA in Australia and the addition of two antennas in New Zealand. This has been presented to the wider astronomical community via the Royal Astronomical Society of New Zealand Journal, and is summarized in this thesis with some additional material. A new measure of quality ("figure of merit") for comparing the original model image and final CLEAN images by utilizing normalized 2-D cross-correlation is evaluated as an alternative to the subjective visual operator image comparison undertaken to date by other groups.
This new unit of measure is then used in the presentation of the results to provide a quantitative comparison of the different array configurations modelled. Included in the process is the development of a new antenna array visibility program, based on a Perl script written by Prof Steven Tingay to plot antenna visibilities for the Australian Square Kilometre Array (SKA) proposal. This has been expanded and improved by removing the hard-coded assumptions for the SKA configuration, providing a new, useful and flexible program for the wider astronomical community. A prototype user interface using html/cgi/perl was developed for the process so that the underlying software packages can be served over the web to a user via an internet browser. This was used to demonstrate how easy it is to provide a friendlier interface than the existing cumbersome and difficult command-line driven interfaces (although the command line can be retained for more experienced users).
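The figure of merit described above, normalized 2-D cross-correlation between the model image and the CLEAN image, reduces at zero lag to the following computation. This is a minimal sketch over flattened images; the thesis's implementation details are not reproduced here.

```python
import math

def ncc(image_a, image_b):
    """Zero-lag normalized cross-correlation between two equal-size images,
    flattened to lists; 1.0 means a perfect (linear) match."""
    n = len(image_a)
    mean_a = sum(image_a) / n
    mean_b = sum(image_b) / n
    da = [v - mean_a for v in image_a]
    db = [v - mean_b for v in image_b]
    denom = math.sqrt(sum(v * v for v in da) * sum(v * v for v in db))
    return sum(x * y for x, y in zip(da, db)) / denom if denom else 0.0

model = [0.0, 1.0, 2.0, 3.0]
clean = [0.1, 1.1, 2.1, 3.1]   # same structure, offset background level
print(round(ncc(model, clean), 6))  # 1.0
```

Because the means are subtracted and the result is normalized, the score ignores overall brightness and flux scaling and responds only to structural agreement, which is what makes it usable as an objective replacement for visual comparison.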
APA, Harvard, Vancouver, ISO, and other styles
11

Choi, Mun Ga. "The impact of cultural context on corporate web sites a New Zealand and South Korean comparison : a thesis submitted to Auckland University of Technology in fulfilment of the requirements for the degree of Master of Philosophy (MPhil), 2008 /." Click here to access this resource online, 2008. http://hdl.handle.net/10292/541.

Full text
Abstract:
This study examines the impact of national culture on the content of corporate Web sites, and Web users’ attitudes and intentions toward culturally congruent or incongruent Web sites. In this work, culturally bipolar clusters based on Hofstede’s (1991) and Hall’s (1976) cultural dimensions are conceptualised. New Zealand and Korea are chosen as representatives of the respective bipolar clusters. This research utilises both content analysis and experimental research to provide deep insight into an area which has not yet been explored. Two studies are undertaken. Study One, focusing on the content analysis, examines how the use of visual communication and Web features differs between the two countries and between industry types. Study Two assesses Web users’ predispositions to respond favourably or unfavourably to the Web site. Web users’ perceptions, measured by experimental research with four culturally manipulated Web sites, are assumed to be the most suitable concept for studying the effectiveness of Web sites. Three ethnic groups are involved: Korean university students, New Zealand university students, and English-Korean bilingual university students. The findings reveal differences in the content of corporate Web sites from the two countries. However, these results do not support the findings of extant research. The results show that the corporate Web sites studied can be distinguished not only by the two national cultures, but also by other significant factors such as a company’s characteristics, its online presence strategy, national broadband infrastructure, and unique Internet culture. Additionally, the segment of young adults shows a convergence of cultural value systems, which can be attributed to the fact that young adults in both countries have similar perceptions toward corporate Web sites regardless of their nationalities. Language structure and local terminology on the Web sites, however, are still important.
This study contributes to knowledge by providing critical insights into the effectiveness and cultural congruence of Web sites. The results will benefit both academics and practitioners.
APA, Harvard, Vancouver, ISO, and other styles
12

Fuzzell, Lindsay Nicole. "Cosmetic Surgery Pictures: Does Type of Picture Affect Acceptance of Cosmetic Surgery and/or Body Image?" UNF Digital Commons, 2010. http://digitalcommons.unf.edu/etd/424.

Full text
Abstract:
The researcher investigates the effect of viewing positive and negative cosmetic surgery images, with short descriptive scenarios, on acceptance of cosmetic surgery. Two hundred ninety-nine participants were assigned to view one of three conditions: positive before/after cosmetic surgery pictures and an accompanying scenario, negative pictures and scenario, or no pictures or scenario (control), followed by the Acceptance of Cosmetic Surgery Scale (ACSS, Henderson-King & Henderson-King, 2005), the Body Parts Satisfaction Scale (Berscheid, Walster, & Bohrstedt, 1973), and the Physical Self Description Questionnaire (Marsh, Richards, Johnson, Roche, & Tremayne, 1994). There was a significant relationship between the ACSS Intrapersonal subscale and picture/scenario type; specifically, participants in the positive picture/scenario condition had higher Intrapersonal Acceptance of Cosmetic Surgery scores. There was also a significant relationship between picture/scenario type and physicality, with four of the 11 subscales (physical activity, sport competence, strength, and endurance) being significantly related to acceptance of cosmetic surgery. Results show significant bivariate correlations between cosmetic surgery acceptance and the physicality aspect of body image as measured by the PSDQ, and total body image as measured by the BPSS. Ethnicity and gender were also significant indicators of cosmetic surgery acceptance. The researcher expects that these results could generalize to society as a whole because of the many people that view cosmetic surgery makeover shows on television. Viewing cosmetic surgery images in the media could possibly decrease body image and alter intrapersonal beliefs toward cosmetic surgery.
APA, Harvard, Vancouver, ISO, and other styles
13

Liu, MingHui. "Navel orange blemish identification for quality grading system : a thesis submitted in partial fulfilment of the requirements for the degree of Master of Computer Science at Massey University, Albany, New Zealand." Massey University, 2009. http://hdl.handle.net/10179/1175.

Full text
Abstract:
Each year, the world’s top orange producers output millions of oranges for human consumption. This production is projected to grow by as much as 64 million in 2010, and so the demand for fast, low-cost and precise automated orange fruit grading systems is only deemed to become increasingly important. There is, however, an underlying limit to most orange blemish detection algorithms. Most existing statistical-based, structural-based, model-based and transform-based orange blemish detection algorithms are plagued by the following problem: any pixels in an image of an orange having about the same magnitudes for the red, green and blue channels will almost always be classified as belonging to the same category (either a blemish or not). This presents a big problem, as the RGB components of the pixels corresponding to blemishes are very similar to those of pixels near the boundary of the orange. In light of this problem, this research utilizes a priori knowledge of the local intensity variations observed on rounded convex objects to classify the ambiguous pixels correctly. The algorithm has the effect of peeling off layers of the orange skin according to gradations of the intensity. Therefore, any abrupt discontinuities detected along successive layers significantly help to identify skin blemishes more accurately. A commercial-grade fruit inspection and distribution system was used to collect 170 navel orange images. Of these images, 100 were manually classified as good oranges by human inspection and the rest as blemished ones. We demonstrate the efficacy of the algorithm using these images as the benchmarking test set. Our results show that the system correctly classified 96% of the good oranges and 97% of the blemished oranges. The proposed system is easily customizable as it does not require any training. The fruit quality bands can be adjusted to meet the requirements set by market standards by specifying an agreeable percentage of blemishes for each band.
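The layer-peeling idea described above can be illustrated in one dimension: sample intensities along a path across the orange surface and flag abrupt jumps between successive layers. The threshold and profile values here are invented for illustration and are not taken from the thesis.

```python
def find_discontinuities(profile, threshold):
    """Flag indices where successive intensity 'layers' change abruptly.
    profile: intensities sampled along a path across the orange surface;
    gradual shading on a convex surface changes slowly, blemishes jump."""
    return [i for i in range(1, len(profile))
            if abs(profile[i] - profile[i - 1]) > threshold]

# Smooth shading (gradual layers) versus a blemish (abrupt dip at index 4).
profile = [200, 195, 190, 185, 120, 182, 178]
print(find_discontinuities(profile, threshold=30))  # [4, 5]
```

The key point is that gradual intensity gradation near the fruit boundary does not trigger the detector, while a blemish interrupting successive layers does, which is how the ambiguity between boundary pixels and blemish pixels is resolved.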
APA, Harvard, Vancouver, ISO, and other styles
14

Gao, Yi. "Bursting bubbles a moving image exploration of contemporary Chinese individuality : an thesis submitted to Auckland University of Technology in partial fulfilment of the requirements for the degree of Master of Art and Design (MA&D), 2008 /." Click here to access this resource online, 2008. http://hdl.handle.net/10292/485.

Full text
Abstract:
This thesis is a practical project which involves moving images and paintings together as a medium that explores phenomena of contemporary China relating to personal identity, independence and its relationship with the traditional importance on collective groups, group centredness and interdependence. The project’s approach draws on sociological research on Western thought, values and beliefs naturally occurring in China since the “Open Door” policy as raw data to focus on the transition and transformation of contemporary Chinese individuality, and translates these data to form the concepts underpinning the metaphoric method of my artwork. Bubbles are the main visual symbols that metaphorically imply the incessantly transformable Chinese individuality and social cultural identity. My aim has been to portray this phenomenon through artistic practices on screens. By reflecting and engaging with moving images and paintings, underpinned by theoretical research and methods including data collecting, self-reflecting on data, practical manifestation and self-inquiry, I have attempted to unfold the phenomenon of contemporary Chinese individuality through my art practice. The thesis is composed as a creative work of moving images accompanied by an exegesis component. The moving image represents a nominal 80%, and the exegesis 20% of the final submission.
APA, Harvard, Vancouver, ISO, and other styles
15

Pomajambo, Shane. "A Complex For Computer Technologies." Thesis, Virginia Tech, 1998. http://hdl.handle.net/10919/35590.

Full text
Abstract:
The building which manifests itself is a direct reaction to the desires of the site and, more importantly, to the functions it houses. The purpose of this thesis, though, is to make an addition to those desires and to solve a desire which is not so evident to the naked eye. This desire is to eliminate the product which will manifest itself in the near future if nothing is done to change its clear and certain direction. The product I speak of is a decentralized society where human interaction is almost eliminated. The thesis manifests a new building type, one which is labeled "a complex for computer technologies". This complex looks at the relationship between architecture and the computer image on three different levels, while always promoting human interaction. Master of Architecture.
APA, Harvard, Vancouver, ISO, and other styles
16

Chiang, Shun Fan. "The development of a low-cost robotic visual tracking system : a thesis presented in partial fulfilment of the requirements for the degree of Master of Engineering, Mechatronics at Massey University, Albany, New Zealand." Massey University, 2009. http://hdl.handle.net/10179/996.

Full text
Abstract:
This thesis describes a system which is able to track and imitate human motion. The system is divided into two major parts: a computer vision system and a robot arm motion control system. Using two real-time video cameras, the computer vision system identifies the moving object by its colour features. When the object colour matches the colour range in the current image frame, a method employing two vectors is used to calculate the coordinates of the object. After the object is detected and tracked, the coordinates are saved to a pre-established database for further data processing, and a mathematical algorithm is applied to the data in order to give better robotic motion control. The robot arm manipulator responds with a move within its workspace which corresponds to the human motion. Experimental outcomes have shown that the system is reliable and can successfully imitate a human hand motion in most cases.
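The colour-range matching and coordinate calculation described above can be sketched as follows. The RGB thresholding and centroid rule are assumptions for illustration; the thesis's two-vector method is not reproduced here.

```python
def track_object(frame, lower, upper):
    """Return the centroid of pixels whose RGB values fall inside
    [lower, upper] per channel, or None if no pixel matches.
    frame: 2-D grid of (r, g, b) tuples."""
    xs, ys = [], []
    for y, row in enumerate(frame):
        for x, (r, g, b) in enumerate(row):
            if all(lo <= v <= hi for v, lo, hi in zip((r, g, b), lower, upper)):
                xs.append(x)
                ys.append(y)
    if not xs:
        return None
    return (sum(xs) / len(xs), sum(ys) / len(ys))

# A red 2x2 blob in the lower-right of a 4x4 black frame.
frame = [[(0, 0, 0)] * 4 for _ in range(4)]
for y in (2, 3):
    for x in (2, 3):
        frame[y][x] = (200, 10, 10)
print(track_object(frame, (150, 0, 0), (255, 50, 50)))  # (2.5, 2.5)
```

Run per frame, the returned coordinates form the trajectory that would be stored and smoothed before being sent to the robot arm controller.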
APA, Harvard, Vancouver, ISO, and other styles
17

Susnjak, Teo. "Accelerating classifier training using AdaBoost within cascades of boosted ensembles : a thesis presented in partial fulfillment of the requirements for the degree of Master of Science in Computer Sciences at Massey University, Auckland, New Zealand." Massey University, 2009. http://hdl.handle.net/10179/1002.

Full text
Abstract:
This thesis seeks to address current problems encountered when training classifiers within the framework of cascades of boosted ensembles (CoBE). At present, a significant challenge facing this framework is inordinate classifier training runtimes. In some cases, it can take days or weeks (Viola and Jones, 2004; Verschae et al., 2008) to train a classifier. The protracted training runtimes are an obstacle to the wider use of this framework (Brubaker et al., 2006). They also hinder the process of producing effective object detection applications and make the testing of new theories and algorithms, as well as verification of others' research, a considerable challenge (McCane and Novins, 2003). An additional shortcoming of the CoBE framework is its limited ability to train classifiers incrementally. Presently, the most reliable method of integrating new dataset information into an existing classifier is to re-train the classifier from the beginning using the combined new and old datasets. This process is inefficient. It lacks scalability and discards valuable information learned in previous training. To deal with these challenges, this thesis extends the research by Barczak et al. (2008) and presents alternative CoBE frameworks for training classifiers. The alternative frameworks reduce training runtimes by an order of magnitude over common CoBE frameworks and introduce additional tractability to the process. They achieve this while preserving the generalization ability of their classifiers. This research also introduces a new framework for incrementally training CoBE classifiers and shows how this can be done without re-training classifiers from the beginning. However, the incremental framework for CoBEs has some limitations. Although it is able to improve the positive detection rates of existing classifiers, it is currently unable to lower their false detection rates.
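For context, the reweighting step at the heart of AdaBoost, which any CoBE training loop repeats many times, is shown below in its textbook form. This is a generic sketch, not the accelerated variants the thesis proposes.

```python
import math

def adaboost_round(weights, predictions, labels):
    """One AdaBoost round: given current sample weights and a weak learner's
    predictions versus true labels (both in {-1, +1}), return the learner's
    vote alpha and the reweighted, renormalized sample weights."""
    eps = sum(w for w, p, y in zip(weights, predictions, labels) if p != y)
    alpha = 0.5 * math.log((1 - eps) / max(eps, 1e-12))
    new_w = [w * math.exp(-alpha * p * y)
             for w, p, y in zip(weights, predictions, labels)]
    total = sum(new_w)
    return alpha, [w / total for w in new_w]

w0 = [0.25, 0.25, 0.25, 0.25]
preds = [+1, +1, -1, +1]     # the weak learner errs on the last sample
labels = [+1, +1, -1, -1]
alpha, w1 = adaboost_round(w0, preds, labels)
# After reweighting, the misclassified sample carries half the total weight.
print(round(w1[3], 3))  # 0.5
```

It is exactly this repeated reweight-and-retrain loop, run over every feature of every cascade stage, that produces the days-to-weeks runtimes the thesis sets out to reduce.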
APA, Harvard, Vancouver, ISO, and other styles
18

Birchell, Shannon Lloyd. "Trapping ACO applied to MRI of the Heart." UNF Digital Commons, 2019. https://digitalcommons.unf.edu/etd/862.

Full text
Abstract:
The research presented here supports the ongoing need for automatic heart volume calculation through the identification of the left and right ventricles in MRI images. The need for automated heart volume calculation stems from the amount of time it takes to manually process MRI images and the esoteric skill set required. There are several methods for region detection, such as Deep Neural Networks, Support Vector Machines and Ant Colony Optimization. In this research, Ant Colony Optimization (ACO) is the method of choice due to its efficiency and flexibility. There are many types of ACO algorithms, using a variety of heuristics that provide advantages in different environments and knowledge domains. All ACO algorithms share a foundational attribute: a heuristic that acts in conjunction with pheromones. These heuristics can work in various ways, such as dictating dispersion or the interpretation of pheromones. In this research, a novel heuristic to disperse and act on pheromone is presented. Further, ants are applied to a more general problem than the usual objective of finding edges: highly qualified region detection. The reliable application of heuristic-oriented algorithms is difficult in a diverse environment. Although the problem space here is limited to MRI images of the heart, there are significant differences among them: the topology of the heart differs by patient, the angle of the scans changes, and the location of the heart is not known. A thorough experiment is conducted to support the algorithm's efficacy, using randomized sampling with human subjects. The analysis will show that the algorithm has both predictive power and robustness.
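The pheromone-plus-heuristic interplay the abstract describes can be sketched with a toy ACO region marker: ants walk on a synthetic "image," their moves biased by pixel intensity (the heuristic) and accumulated pheromone. All parameters are illustrative assumptions; the thesis's trapping heuristic is not reproduced here:

```python
# Toy ACO region-marking sketch (illustrative; not the thesis's trapping ACO).
import random

def aco_mark(image, n_ants=20, steps=200, evap=0.05, seed=0):
    """image: 2-D list of intensities in [0, 1]; returns a pheromone map."""
    rng = random.Random(seed)
    h, w = len(image), len(image[0])
    pher = [[1.0] * w for _ in range(h)]
    ants = [(rng.randrange(h), rng.randrange(w)) for _ in range(n_ants)]
    for _ in range(steps):
        # evaporation keeps the trail map from saturating
        pher = [[p * (1 - evap) for p in row] for row in pher]
        new_ants = []
        for r, c in ants:
            moves = [(r + dr, c + dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                     if 0 <= r + dr < h and 0 <= c + dc < w]
            # transition weight = pheromone * intensity heuristic
            weights = [pher[rr][cc] * (image[rr][cc] + 0.01) for rr, cc in moves]
            r2, c2 = rng.choices(moves, weights=weights)[0]
            pher[r2][c2] += image[r2][c2]   # deposit proportional to intensity
            new_ants.append((r2, c2))
        ants = new_ants
    return pher
```

After enough iterations, pheromone concentrates on the bright region, which is what makes thresholding the map into a detected region possible.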
APA, Harvard, Vancouver, ISO, and other styles
19

Kaufman, Jaime C. "A Hybrid Approach to Music Recommendation: Exploiting Collaborative Music Tags and Acoustic Features." UNF Digital Commons, 2014. http://digitalcommons.unf.edu/etd/540.

Full text
Abstract:
Recommendation systems make it easier for an individual to navigate through large datasets by recommending information relevant to the user. Companies such as Facebook, LinkedIn, Twitter, Netflix, Amazon, Pandora, and others utilize these types of systems in order to increase revenue by providing personalized recommendations. Recommendation systems generally use one of two techniques: collaborative filtering (i.e., collective intelligence) and content-based filtering. Systems using collaborative filtering recommend items based on a community of users, their preferences, and their browsing or shopping behavior. Examples include Netflix, Amazon shopping, and Last.fm. This approach has proven effective as its popularity has increased, and its accuracy improves as its pool of users expands. However, the weakness of this approach is the Cold Start problem: it is difficult to recommend items that are either brand new or have no user activity. Systems that use content-based filtering recommend items based on information extracted from the actual content. A popular example of this approach is Pandora Internet Radio. This approach overcomes the Cold Start problem. However, its main issue is its heavy demand on computational power. Also, the semantic meaning of an item may not be taken into account when producing recommendations. In this thesis, a hybrid approach is proposed that utilizes the strengths of both collaborative and content-based filtering techniques. As proof of concept, a hybrid music recommendation system was developed and evaluated by users. The results show that this system effectively tackles the Cold Start problem and provides more variation in what is recommended.
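A hybrid of the two techniques can be sketched as a weighted blend of a collaborative score and a content similarity, falling back to content alone for cold-start items. The function, weights, and rating scale below are illustrative assumptions, not the thesis's actual system:

```python
# Hedged sketch of a hybrid recommender score. The blend weight alpha and the
# 1-5 star scale are assumed for illustration.
def hybrid_score(item, user_profile, ratings, content_sim, alpha=0.6):
    """ratings: {item: [star ratings]}; content_sim returns a value in [0, 1]."""
    sim = content_sim(user_profile, item)   # content-based (e.g. tags/acoustics)
    votes = ratings.get(item, [])
    if not votes:                           # Cold Start: no collaborative signal
        return sim                          # fall back to content alone
    collab = sum(votes) / len(votes) / 5.0  # normalise mean stars to [0, 1]
    return alpha * collab + (1 - alpha) * sim
```

The cold-start branch is the key design point: a brand-new item still receives a usable score from its content features.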
APA, Harvard, Vancouver, ISO, and other styles
20

Vestin, Albin, and Gustav Strandberg. "Evaluation of Target Tracking Using Multiple Sensors and Non-Causal Algorithms." Thesis, Linköpings universitet, Reglerteknik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-160020.

Full text
Abstract:
Today, the main research field for the automotive industry is to find solutions for active safety. In order to perceive the surrounding environment, tracking nearby traffic objects plays an important role. Validation of the tracking performance is often done in staged traffic scenarios, where additional sensors, mounted on the vehicles, are used to obtain their true positions and velocities. The difficulty of evaluating the tracking performance complicates its development. An alternative approach studied in this thesis is to record sequences and use non-causal algorithms, such as smoothing, instead of filtering to estimate the true target states. With this method, validation data for online, causal, target tracking algorithms can be obtained for all traffic scenarios without the need for extra sensors. We investigate how non-causal algorithms affect the target tracking performance using multiple sensors and dynamic models of different complexity. This is done to evaluate real-time methods against estimates obtained from non-causal filtering. Two different measurement units, a monocular camera and a LIDAR sensor, and two dynamic models are evaluated and compared using both causal and non-causal methods. The system is tested in two single-object scenarios where ground truth is available and in three multi-object scenarios without ground truth. Results from the two single-object scenarios show that tracking using only a monocular camera performs poorly, since it is unable to measure the distance to objects. Here, a complementary LIDAR sensor improves the tracking performance significantly. The dynamic models are shown to have a small impact on the tracking performance, while the non-causal application gives a distinct improvement when tracking objects at large distances. Since the sequence can be reversed, the non-causal estimates are propagated from more certain states when the target is closer to the ego vehicle. For multiple object tracking, we find that correct associations between measurements and tracks are crucial for improving the tracking performance with non-causal algorithms.
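The filtering-versus-smoothing contrast can be made concrete with a minimal 1-D Kalman filter followed by a Rauch-Tung-Striebel (RTS) backward pass. This is a toy random-walk model with assumed noise values, not the thesis's multi-sensor tracker; it shows why the non-causal estimate is never less certain than the causal one:

```python
# 1-D Kalman filter + RTS smoother sketch (illustrative random-walk model).
def kalman_rts(zs, q=0.01, r=1.0, x0=0.0, p0=1.0):
    """zs: measurements; q/r: process/measurement noise variances."""
    xf, pf, xp, pp = [], [], [], []       # filtered / predicted states, variances
    x, p = x0, p0
    for z in zs:                          # causal forward pass
        x_pred, p_pred = x, p + q         # predict (random-walk dynamics)
        xp.append(x_pred); pp.append(p_pred)
        k = p_pred / (p_pred + r)         # Kalman gain
        x = x_pred + k * (z - x_pred)     # update with measurement
        p = (1 - k) * p_pred
        xf.append(x); pf.append(p)
    xs, ps = xf[:], pf[:]                 # non-causal backward (RTS) pass
    for i in range(len(zs) - 2, -1, -1):
        c = pf[i] / pp[i + 1]             # smoother gain
        xs[i] = xf[i] + c * (xs[i + 1] - xp[i + 1])
        ps[i] = pf[i] + c * c * (ps[i + 1] - pp[i + 1])
    return xf, pf, xs, ps
```

Because the backward pass reuses information from later measurements, the smoothed variance is bounded above by the filtered variance at every step.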
APA, Harvard, Vancouver, ISO, and other styles
21

Batchelor, O. W. "Ray traversal for incremental voxel colouring : a thesis submitted in partial fulfilment of the requirements for the degree of Master of Science in the University of Canterbury /." 2006. http://library.canterbury.ac.nz/etd/adt-NZCU20070330.145801.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Saunders, Karen. "Queer intercorporeality bodily disruption of straight space : a thesis submitted for the degree of Master of Arts in Gender Studies at the University of Canterbury, Christchurch, Aotearoa/New Zealand /." 2008. http://hdl.handle.net/10092/1028.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Manion, Steve Lawrence. "Fluency enhancement : applications to machine translation : thesis for Master of Engineering in Information & Telecommunications Engineering, Massey University, Palmerston North, New Zealand." 2009. http://hdl.handle.net/10179/1237.

Full text
Abstract:
The quality of Machine Translation (MT) can often be poor because the output appears incoherent and lacks fluency. These problems consist of word ordering, awkward use of words and grammar, and translating text too literally. However, we should not consider such translations failures until we have done our best to enhance their quality or, more simply, their fluency. In the same way that various processes can be applied to touch up a photograph, various processes can also be applied to touch up a translation. This research outlines the improvement of MT quality through the application of Fluency Enhancement (FE), a process we have created that reforms and evaluates text to enhance its fluency. We have tested our FE process on our own MT system, which operates on what we call the SAM fundamentals: Simplicity, being simple in design in order to be portable across different language pairs; Adaptability, compensating for the evolution of language; and Multiplicity, determining a final set of translations from as many candidate translations as possible. Based on our research, the SAM fundamentals are the key to developing a successful MT system, and are what have piloted the success of our FE process.
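The "Multiplicity" idea of choosing among many candidate translations can be sketched with a simple bigram language model that scores each candidate for fluency and keeps the highest-scoring one. The corpus, add-one smoothing, and function names below are illustrative assumptions, not the thesis's actual FE process:

```python
# Fluency reranking sketch: score candidates with a bigram LM, keep the best.
from collections import Counter

def bigram_model(corpus_sentences):
    uni, bi = Counter(), Counter()
    for sent in corpus_sentences:
        toks = ["<s>"] + sent.lower().split()
        uni.update(toks)
        bi.update(zip(toks, toks[1:]))
    vocab = len(uni)
    def score(sentence):
        toks = ["<s>"] + sentence.lower().split()
        s = 1.0
        for a, b in zip(toks, toks[1:]):
            s *= (bi[(a, b)] + 1) / (uni[a] + vocab)   # add-one smoothing
        return s
    return score

def most_fluent(candidates, score):
    """Pick the candidate translation the language model finds most fluent."""
    return max(candidates, key=score)
```

A garbled word ordering produces unseen bigrams and therefore a much lower score, so the reranker prefers the well-ordered candidate.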
APA, Harvard, Vancouver, ISO, and other styles
24

Samsell, Molly. "Camera and image : mediator and interface : a thesis presented in partial fulfilment of the requirements for the Master of Fine Arts at Massey University, Wellington, Aotearoa New Zealand." 2009. http://hdl.handle.net/10179/1020.

Full text
Abstract:
How can art, specifically photography, illustrate the limitations of vision? What do those limits reveal about perception and knowing? To explore these questions, two distinct mechanisms need to be discussed in relation to creative practice: Paul Virilio's augmenting lens, which forever changes the photographer's perception, and the image acting as an object for both Maurice Merleau-Ponty's embodied experience and Jean Baudrillard's simulacrum. The photographic image becomes an index by exposing the relationship between photographer and image. The camera is a tool, to Virilio a prosthetic eye, which immediately affects the photographer's perception of her environment. The phenomenal world is the one that is photographed, a subjective experience. The tension between surface and reality, image and object, removes the photographic experience from an experience of the real. The making of the image closely parallels the act of viewing the image. A dual experience emerges from the photograph: the creation of the image and the viewer's act of reading, inferring. An image, as an index, is open to multiple interpretations, placing equal weight on each participant, viewer and creator, so that there is no hierarchy of interpretation, experience, or meaning. In this thesis these questions are explored in relation to a creative practice embedding theory with process and outcome.
APA, Harvard, Vancouver, ISO, and other styles
25

Blakeley, Marissa J. "An investigation of encoding and retrieval processes in children's false memories in the DRM paradigm : a thesis submitted in partial fulfilment of the requirements for the degree of Master of Science in Psychology /." 2006. http://library.canterbury.ac.nz/etd/adt-NZCU20060711.123811.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Berg, Crispen James. "High speed digital image capture method for a digital image-based elasto-tomography breast cancer screening system : a thesis submitted in partial fulfilment of the requirements for the degree of Master of Mechanical Engineering in the University of Canterbury /." 2006. http://library.canterbury.ac.nz/etd/adt-NZCU20070613.093413.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Lin, Yifeng. "House of a dreamer : poetics of interior space : an image-based approach : [a thesis submitted in fulfilment of the requirements for the degree of Master of Design at Victoria University of Wellington] /." 2006.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
28

Zhu, Jihai. "Low-complexity block dividing coding method for image compression using wavelets : a thesis presented in partial fulfillment of the requirements for the degree of Master of Engineering in Computer Systems Engineering at Massey University, Palmerston North, New Zealand." 2007. http://hdl.handle.net/10179/704.

Full text
Abstract:
Image coding plays a key role in multimedia signal processing and communications. JPEG2000 is the latest image coding standard; it uses the EBCOT (Embedded Block Coding with Optimal Truncation) algorithm. EBCOT exhibits excellent compression performance, but with high complexity. The need to reduce this complexity while maintaining similar performance to EBCOT has inspired a significant amount of research activity in the image coding community. Within the development of image compression techniques based on wavelet transforms, the EZW (Embedded Zerotree Wavelet) and SPIHT (Set Partitioning in Hierarchical Trees) algorithms have played an important role. The EZW algorithm was the first breakthrough in wavelet-based image coding. The SPIHT algorithm achieves similar performance to EBCOT, but with fewer features. The other very important algorithm is SBHP (Sub-band Block Hierarchical Partitioning), which attracted significant investigation during the JPEG2000 development process. In this thesis, the history of the development of the wavelet transform is reviewed, and a discussion is presented on the implementation issues for wavelet transforms. The four main coding methods mentioned above for image compression using wavelet transforms are studied in detail. More importantly, the factors that affect coding efficiency are identified. The main contribution of this research is the introduction of a new low-complexity coding algorithm for image compression based on wavelet transforms. The algorithm is based on block dividing coding (BDC) with an optimised packet assembly. Our extensive simulation results show that the proposed algorithm outperforms JPEG2000 in lossless coding, even though a narrow gap remains in lossy coding situations.
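The wavelet decomposition that all of these coders (EZW, SPIHT, SBHP, EBCOT) operate on can be illustrated with a one-level 2-D Haar transform, the simplest wavelet. This sketch is illustrative only; it shows how an image splits into an approximation sub-band plus detail sub-bands, not the thesis's BDC algorithm:

```python
# One-level 2-D Haar wavelet transform sketch (averaging/differencing form).
def haar_1d(row):
    """Split a 1-D signal into pairwise averages then pairwise differences."""
    avgs = [(row[i] + row[i + 1]) / 2 for i in range(0, len(row), 2)]
    diffs = [(row[i] - row[i + 1]) / 2 for i in range(0, len(row), 2)]
    return avgs + diffs

def haar_2d(img):
    """Apply the 1-D transform to every row, then to every column."""
    rows = [haar_1d(r) for r in img]
    cols = [haar_1d(list(c)) for c in zip(*rows)]
    return [list(r) for r in zip(*cols)]
```

For a constant image, all detail coefficients are zero and only the approximation (LL) quadrant carries energy, which is exactly the sparsity that zerotree- and block-based coders exploit.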
APA, Harvard, Vancouver, ISO, and other styles
29

McGurk, Ross J. "Variation of image counts with patient anatomy and development of a Monte Carlo simulation system for whole-body bone scans : a thesis submitted in partial fulfilment of the requirements for the degree of Master of Science in Medical Physics, University of Canterbury /." 2007. http://hdl.handle.net/10092/1586.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Albrow, S. J. "The significance of the atypical samurai image : a study of three novellas by Fujisawa Shūhei and the film Tasogare Seibei by Yamada Yōji : a thesis submitted in partial fulfilment of the requirements for the degree of Master of Arts in Japanese at the University of Canterbury /." 2007. http://library.canterbury.ac.nz/etd/adt-NZCU20080313.144546.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Shin, Heesang. "Finding near optimum colour classifiers : genetic algorithm-assisted fuzzy colour contrast fusion using variable colour depth : a thesis presented to the Institute of Information and Mathematical Sciences in partial fulfillment of the requirements for the degree of Master of Science in Computer Science at Massey University, Albany, Auckland, New Zealand." 2009. http://hdl.handle.net/10179/1096.

Full text
Abstract:
This thesis presents a complete self-calibrating, illumination-intensity-invariant colour classification system. We extend a novel fuzzy colour processing technique called Fuzzy Colour Contrast Fusion (FCCF) by combining it with a Heuristic-Assisted Genetic Algorithm (HAGA) for automatic fine-tuning of colour descriptors. Furthermore, we have improved FCCF's efficiency by processing colour channels at varying colour depths in search of the optimal ones. In line with this, we introduce a reduced colour depth representation of a colour image that maintains colour sensitivity sufficient for accurate real-time colour-based object recognition. We call the algorithm Variable Colour Depth (VCD) and we propose a technique for building and searching a VCD look-up table (LUT). The first part of this work investigates the effects of applying fuzzy colour contrast rules to varying colour depths as we extract the optimal rule combination for any given target colour exposed under changing illumination intensities. The second part introduces the HAGA-based parameter optimisation for automatically constructing accurate colour classifiers. Our results show that in all cases the VCD algorithm, combined with HAGA for parameter optimisation, improves colour classification via a pie-slice colour classifier. For 6 different target colours, the hybrid algorithm was able to yield 17.63% higher overall accuracy compared to the pure fuzzy approach. Furthermore, it was able to reduce LUT storage space by 78.06% compared to the full-colour-depth LUT.
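The pie-slice classifier with a reduced-depth LUT can be sketched as follows: classify by a hue wedge and minimum saturation, then precompute the decision at a lower bits-per-channel resolution. The slice bounds and the 4-bit depth below are illustrative assumptions, not the thesis's tuned or GA-optimised values:

```python
# Pie-slice colour classifier + reduced-depth LUT sketch (illustrative values).
import colorsys

def in_pie_slice(rgb, hue_lo, hue_hi, sat_min):
    """True if the colour falls in the hue wedge with enough saturation."""
    r, g, b = (v / 255.0 for v in rgb)
    h, s, _ = colorsys.rgb_to_hsv(r, g, b)
    hue = h * 360.0
    return s >= sat_min and hue_lo <= hue <= hue_hi

def build_lut(hue_lo, hue_hi, sat_min, bits=4):
    """Precompute classifications at a reduced colour depth (bits per channel)."""
    levels = 1 << bits
    step = 256 // levels
    lut = {}
    for r in range(levels):
        for g in range(levels):
            for b in range(levels):
                rgb = (r * step, g * step, b * step)
                lut[(r, g, b)] = in_pie_slice(rgb, hue_lo, hue_hi, sat_min)
    return lut

def classify(rgb, lut, bits=4):
    """Quantise the 8-bit colour down to the LUT's depth and look it up."""
    shift = 8 - bits
    return lut[tuple(v >> shift for v in rgb)]
```

At 4 bits per channel the table has 4096 entries instead of 16.7 million, which is the kind of storage reduction the variable-depth approach trades against colour sensitivity.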
APA, Harvard, Vancouver, ISO, and other styles
