To see the other types of publications on this topic, follow the link: Segmentation technology.

Dissertations / Theses on the topic 'Segmentation technology'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the top 50 dissertations / theses for your research on the topic 'Segmentation technology.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Lundström, Claes. "Segmentation of Medical Image Volumes." Thesis, Linköping University, Linköping University, Computer Vision, 1997. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-54357.

Full text
Abstract:

Segmentation is a process that separates objects in an image. In medical images, particularly image volumes, the field of application is wide. For example, 3D visualisations of the anatomy could benefit enormously from segmentation. The aim of this thesis is to construct a segmentation tool.

The project consists of three main parts. First, a survey of the actual need for segmentation in medical image volumes was carried out. Then a unique three-step model for a segmentation tool was implemented, tested and evaluated.

The first step of the segmentation tool is a seed-growing method that uses the intensity and an orientation tensor estimate to decide which voxels are part of the object. The second step uses an active contour, a deformable “balloon”. The contour is shrunk to fit the segmented border from the first step, yielding a surface suitable for visualisation. The last step consists of letting the contour reshape according to the orientation tensor estimate.

The user evaluation establishes the usefulness of the tool. The model is flexible and well adapted to the users’ requests. For unclear objects the segmentation may fail, but the cause is mostly poor image quality. Even though much work remains to be done on the second and third parts of the tool, the results are most promising.
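The seed-growing first step can be sketched as follows: a toy, intensity-only version in Python (the thesis additionally uses an orientation tensor estimate, which this sketch omits):

```python
from collections import deque

def seed_grow(volume, seed, tol):
    """Toy intensity-based seed growing on a 2D "volume" (nested lists).

    Starting from a seed voxel, 4-connected neighbours are added while
    their intensity stays within `tol` of the seed intensity.
    """
    rows, cols = len(volume), len(volume[0])
    ref = volume[seed[0]][seed[1]]
    grown, queue = {seed}, deque([seed])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < rows and 0 <= nc < cols and (nr, nc) not in grown \
                    and abs(volume[nr][nc] - ref) <= tol:
                grown.add((nr, nc))
                queue.append((nr, nc))
    return grown
```

The grown voxel set is what a later surface-fitting step, such as the balloon model above, would refine into a smooth boundary.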

APA, Harvard, Vancouver, ISO, and other styles
2

Farnebäck, Gunnar. "Motion-based segmentation of image sequences." Thesis, Linköping University, Linköping University, Computer Vision, 1996. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-54351.

Full text
Abstract:

This Master's Thesis addresses the problem of segmenting an image sequence with respect to the motion in the sequence. As a basis for the motion estimation, 3D orientation tensors are used. The goal of the segmentation is to partition the images into regions, characterized by having a coherent motion. The motion model is affine with respect to the image coordinates. A method to estimate the parameters of the motion model from the orientation tensors in a region is presented. This method can also be generalized to a large class of motion models.

Two segmentation algorithms are presented together with a postprocessing algorithm. All these algorithms are based on the competitive algorithm, a general method for distributing points between a number of regions, without relying on arbitrary threshold values. The first segmentation algorithm segments each image independently, while the second algorithm recursively takes advantage of the previous segmentation. The postprocessing algorithm stabilizes the segmentations of a whole sequence by imposing continuity constraints.

The algorithms have been implemented and the results of applying them to a test sequence are presented. Interesting properties of the algorithms are that they are robust to the aperture problem and that they do not require a dense velocity field.

It is finally discussed how the algorithms can be developed and improved. It is straightforward to extend the algorithms to base the segmentations on alternative or additional features, under not too restrictive conditions on the features.
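The affine motion model mentioned above has six parameters. As a hedged illustration, they can be recovered from sampled velocity estimates by ordinary least squares; the thesis estimates them from orientation tensors, which this sketch does not attempt:

```python
import numpy as np

def fit_affine_motion(points, velocities):
    """Least-squares fit of a six-parameter affine motion model.

    Each velocity is modelled as v(x, y) = A @ [x, y] + b, i.e.
        vx = a1*x + a2*y + a3
        vy = a4*x + a5*y + a6
    """
    x, y = points[:, 0], points[:, 1]
    # Design matrix for one velocity component: [x, y, 1]
    D = np.column_stack([x, y, np.ones_like(x)])
    params_x, *_ = np.linalg.lstsq(D, velocities[:, 0], rcond=None)
    params_y, *_ = np.linalg.lstsq(D, velocities[:, 1], rcond=None)
    return params_x, params_y  # (a1, a2, a3), (a4, a5, a6)
```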

APA, Harvard, Vancouver, ISO, and other styles
3

Anusha, Anusha. "Word Segmentation for Classification of Text." Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-396969.

Full text
Abstract:
Compounding is a highly productive word-formation process in some languages that is often problematic for natural language processing applications. Word segmentation is the problem of splitting a string of written language into its component words. The purpose of this research is to carry out a comparative study of different word segmentation techniques and to identify the best technique to aid keyword extraction from text. English was chosen as the language. Dictionary-based and machine-learning approaches were used to split the compound words. This research also aims at evaluating the quality of a word segmentation by comparing it with a reference segmentation. Results indicated that dictionary-based word segmentation segmented compound words better than machine-learning segmentation when technical words were involved. Also, improving the quality of the text alone is not enough to improve the results of text classification.
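As an illustration of the dictionary-based approach, a greedy longest-match splitter can be sketched as follows; the dictionary and the single-character fallback are illustrative assumptions, not the exact method evaluated in the thesis:

```python
def max_match(word, dictionary):
    """Greedy longest-match dictionary segmentation of a compound word.

    At each position, take the longest dictionary entry that matches;
    fall back to a single character when nothing matches.
    """
    segments, i = [], 0
    while i < len(word):
        for j in range(len(word), i, -1):  # longest candidate first
            if word[i:j] in dictionary or j == i + 1:
                segments.append(word[i:j])
                i = j
                break
    return segments
```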
APA, Harvard, Vancouver, ISO, and other styles
4

Akinyemi, Akinola Olanrewaju. "Atlas-based segmentation of medical images." Thesis, University of Glasgow, 2011. http://theses.gla.ac.uk/2623/.

Full text
Abstract:
Atlas-based segmentation of medical images is an image analysis task which involves labelling a desired anatomical structure, or set of structures, in images generated by medical imaging modalities. The overall goal of atlas-based segmentation is to assist radiologists in the detection and diagnosis of diseases. By extracting the relevant anatomy from medical images and presenting it in an appropriate view, their work-flow can be optimised. This portfolio-style thesis discusses the research projects carried out in order to evaluate the applicability of atlas-based methods to a variety of medical imaging problems. The thesis describes how atlas-based methods have been applied to heart segmentation, to extract the heart from cardiac CT images for further cardiac analysis; to kidney segmentation, to prepare the kidney for automated perfusion measurements; and to coronary vessel tracking, in order to improve the quality of tracking algorithms. This thesis demonstrates how state-of-the-art atlas-based segmentation techniques can be applied successfully to a range of clinical problems in different imaging modalities. Each application has been tested not only using standard experimentation principles but also by clinically trained personnel to evaluate its efficacy. The success of these methods is such that some of the described applications have since been deployed in commercial products. While exploring these applications, several techniques based on published literature were explored and tailored to suit each individual application. This thesis describes in detail the methods used for each application in turn, recognising the state of the art, and outlines the author's contribution in every application.
APA, Harvard, Vancouver, ISO, and other styles
5

Aziz, Andrew. "Customer Segmentation based on Behavioural Data in E-marketplace." Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-330461.

Full text
Abstract:
In the past years, research in the fields of big data analysis, machine learning and data mining techniques has become more frequent. This thesis describes a customer segmentation approach in a second-hand vintage clothing e-marketplace, Plick. These customer groups are based on user interactions with items in the marketplace, such as views and "likes". A major goal of this thesis was to construct a personal feed for each user where the items are derived from the user groups. The customer segmentation method discussed in this paper is based on the clustering algorithm K-means using cosine similarity as the similarity measure. The input matrix used by the K-means algorithm is a User-Brand ratings matrix where each brand is given a rating by each user. A visualization tool was also constructed in order to get a better picture of the data and the resulting clusters. In order to visualize the highly dimensional User-Brand matrix, Principal Component Analysis is used as a dimensionality reduction algorithm.
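The K-means-with-cosine-similarity step can be sketched as spherical k-means: rows of the User-Brand matrix are L2-normalised so that assigning each user to the most cosine-similar centroid matches standard centroid updates. The matrix and the initialisation below are illustrative, not Plick data:

```python
import numpy as np

def cosine_kmeans(ratings, k, init=None, iters=20):
    """Spherical k-means on a User-Brand ratings matrix.

    Rows are L2-normalised; each user is assigned to the centroid with
    the highest cosine similarity, and centroids are re-normalised means.
    `init` gives the row indices used as initial centroids.
    """
    X = ratings / np.linalg.norm(ratings, axis=1, keepdims=True)
    idx = list(range(k)) if init is None else list(init)
    centers = X[idx].copy()
    for _ in range(iters):
        labels = np.argmax(X @ centers.T, axis=1)  # most similar centroid
        for c in range(k):
            members = X[labels == c]
            if len(members):
                m = members.mean(axis=0)
                centers[c] = m / np.linalg.norm(m)
    return labels
```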
APA, Harvard, Vancouver, ISO, and other styles
6

Samuels, Mark Lee. "Reconsidering the superstore workplace : a Sheffield case study of segmentation and technology." Thesis, Sheffield Hallam University, 2002. http://shura.shu.ac.uk/20321/.

Full text
Abstract:
Retailing is back on the research agenda and the analysis of consumption processes is providing a fertile source of insightful geographical literature. Yet despite this interest, the retail workplace remains on the margins of disciplinary concerns. Given this situation, it is time that the retail workplace was reconsidered. The reconsideration within this thesis concentrates on the superstore workplace and attempts to challenge existing applications of labour market segmentation theory. This challenge is driven by an interest in information and communication technology (ICT) and a realisation that these technologies must be understood with reference to human interaction. The empirical analysis centres on one case study, a food-selling superstore in Sheffield. As an empirical link between theory and qualitative analysis, secondary human resource statistics are analysed to provide a guide to segmentation within the store. Qualitative research techniques are used to build an in-depth understanding of different employees' activities and experiences. The secondary data suggests that segmentation remains an important framework for organisation within the retail superstore. However, qualitative research illustrates how existing theoretical conceptualisations of the segmented superstore might be problematised by a series of power relationships (dictation, delegation and authority) that are, in part, facilitated by the use of ICTs. These power relationships are in turn reinterpreted within individual worker strategies of manipulation and resistance. Here, workers regularly use ICTs in different ways than the remote head office might have originally intended. It is also suggested that the consent to work for many disadvantaged workers has to be understood by reference to a series of social concerns from outside the workplace (childcare, other domestic relationships, financial survival, lifestyle choice, social experience and self-esteem).
These findings suggest a rich vein for additional research and the retail workplace should be pushed to the centre of geographical debate for further analysis.
APA, Harvard, Vancouver, ISO, and other styles
7

Fang, Jian. "Optical Imaging and Computer Vision Technology for Corn Quality Measurement." OpenSIUC, 2011. https://opensiuc.lib.siu.edu/theses/733.

Full text
Abstract:
The official U.S. standards for corn have been available for almost one hundred years. Corn grading system has been gradually updated over the years. In this thesis, we investigated a fast corn grading system, which includes the mechanical part and the computer recognition part. The mechanical system can deliver the corn kernels onto the display plate. For the computer recognition algorithms, we extracted common features from each corn kernel, and classified them to measure the grain quality.
APA, Harvard, Vancouver, ISO, and other styles
8

Grönberg, Axel. "Image Mosaicking Using Vessel Segmentation for Application During Fetoscopic Surgery." Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-438422.

Full text
Abstract:
Twin-to-twin transfusion syndrome is a condition in which there is an imbalance in the shared blood circulation between monochorionic twin fetuses, due to certain inter-twin vascular connections (anastomoses) in the placenta, and it has a very high morbidity and mortality rate for both fetuses. Fetoscopic laser occlusive coagulation (FLOC) surgery is commonly used to treat the condition; it uses a fetoscope to explore the placenta and a laser to occlude the anastomoses causing the imbalance in blood circulation. In order to deal with the navigational difficulties caused by the limited field of view of the fetoscope, this thesis is part of work towards an application whose main purpose is to build a global map of the placenta and to display the position of the fetoscope on that map. Segmentation by neural networks is combined with direct sequential registration techniques and applied to fetoscopic data from FLOC surgeries at Karolinska University Hospital Huddinge, resulting in a proof of concept of this mosaicking pipeline for the creation of a global map of the placenta during such a surgery. It was, however, also found that more work is needed to make the system more reliable and, among other things, less sensitive to poor visual conditions and drift, which can result in low-quality mosaics with artifacts due to misaligned images.
APA, Harvard, Vancouver, ISO, and other styles
9

Holmberg, Joakim. "Targeting the zebrafish eye using deep learning-based image segmentation." Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-428325.

Full text
Abstract:
Researchers studying cardiovascular and metabolic disease in humans commonly use computer vision techniques to segment internal structures of the zebrafish animal model. However, there are no current image segmentation methods that target the eyes of the zebrafish. Segmenting the eyes is essential for accurate measurement of the eyes' size and shape following an experimental intervention. Additionally, successful segmentation of the eyes functions as a good starting point for future segmentation of other internal organs. To establish an effective segmentation method, the deep learning neural network architecture DeepLab was trained using 275 images of the zebrafish embryo. Besides model architecture, the training was refined with proper data pre-processing, including data augmentation to add variety and to artificially increase the training data. Consequently, the results yielded a score of 95.88 percent when applying augmentations, and 95.30 percent without augmentations. Despite this minor improvement in accuracy score when using the augmented training dataset, it also produced visibly better predictions on a new dataset compared to the model trained without augmentations. Therefore, the implemented segmentation model trained with augmentations proved to be more robust, as the augmentations gave the model the ability to produce promising results when segmenting new data.
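The augmentation idea, applying the same geometric transform to an image and its label mask so the pair stays aligned, can be sketched as follows (flips and right-angle rotations only; this is not the thesis's actual augmentation pipeline, which is not specified here):

```python
import numpy as np

def augment(image, mask):
    """Simple flip/rotation augmentation for a segmentation pair.

    Yields geometrically transformed copies of an image and its mask;
    applying the same transform to both keeps the labels aligned.
    """
    pairs = [(image, mask)]
    pairs.append((np.fliplr(image), np.fliplr(mask)))  # mirror
    for k in (1, 2, 3):                                # 90-degree rotations
        pairs.append((np.rot90(image, k), np.rot90(mask, k)))
    return pairs
```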
APA, Harvard, Vancouver, ISO, and other styles
10

Li, Yong. "Topic-based segmentation of web pages." Thesis, University of Macau, 2005. http://umaclib3.umac.mo/record=b1445895.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Roussel, Nicolas. "Denoising of Dual Energy X-ray Absorptiometry Images and Vertebra Segmentation." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-233845.

Full text
Abstract:
Dual Energy X-ray Absorptiometry (DXA) is a medical imaging modality used to quantify bone mineral density and to detect fractures. It is widely used due to its low cost and low radiation dose; however, it produces noisy images that can be difficult to interpret for a human expert or a machine. In this study, we investigate denoising of DXA lateral spine images and automatic vertebra segmentation in the resulting images. For denoising, we design adaptive filters to avoid the frequent appearance of edge artifacts (cross contamination), and validate our results with an observer experiment. Segmentation is performed using deep convolutional neural networks trained on manually segmented DXA images. Using few training images, we focus on the depth of the network and on the amount of training data. At the best depth, we report a 94% mean Dice on test images, with no post-processing. We also investigate the application of a network trained on one of our databases to the other (different resolution). We show that in some cases cross contamination can degrade the segmentation results and that the use of our adaptive filters helps solve this problem. Our results reveal that even with little data and a short training, neural networks produce accurate segmentations. This suggests they could be used for fracture classification. However, the results should be validated on bigger databases with more fracture cases and other pathologies.
Dual Energy X-ray Absorptiometry (DXA) is a medical imaging modality used to quantify bone density and detect fractures. It is widely used thanks to its low cost and low radiation exposure, but produces noisy images that can be difficult to interpret for a human expert or a machine. In this study we investigate denoising of DXA lateral spine images and automatic segmentation of the vertebrae in the resulting images. For denoising we create adaptive filters to prevent frequent edge artifacts (cross contamination), and validate our results with an observer experiment. Segmentation is performed using deep convolutional neural networks trained on manually segmented DXA images. With few training images, we focus on network depth and the amount of training data. At the best depth we report a 94% mean Dice on test images without post-processing. We also investigate applying a network trained on one of our databases to another database (different resolution). We show that in some cases cross contamination can degrade the segmentation result and that the use of our adaptive filters helps solve the problem. Our results show that even with little data and short training, neural networks produce correct segmentations. This suggests that they could be used for fracture classification. However, the results should be validated on larger databases with more fracture cases and other pathologies.
APA, Harvard, Vancouver, ISO, and other styles
12

Enlund, Åström Isabelle. "Attention P-Net for Segmentation of Post-operative Glioblastoma in MRI." Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-397009.

Full text
Abstract:
Segmentation of post-operative glioblastoma is important for follow-up treatment. In this thesis, Fully Convolutional Networks (FCNs) are utilised together with attention modules for segmentation of post-operative glioblastoma in MR images. Attention-based modules help the FCN focus on relevant features to improve segmentation results. Channel and spatial attention combine both the spatial context and the semantic information in MR images. P-Net is used as a backbone for creating an architecture with existing bottleneck attention modules, named attention P-Net. The proposed network and competing techniques were evaluated on an Uppsala University database containing T1-weighted MR images of the brain from 12 subjects. The proposed framework shows substantial improvement over the existing techniques.
APA, Harvard, Vancouver, ISO, and other styles
13

Mitra, Bhargav Kumar. "Scene segmentation using similarity, motion and depth based cues." Thesis, University of Sussex, 2010. http://sro.sussex.ac.uk/id/eprint/2480/.

Full text
Abstract:
Segmentation of complex scenes to aid surveillance is still considered an open research problem. In this thesis a computational model (CM) has been developed to classify a scene into foreground, moving-shadow and background regions. It has been demonstrated how the CM, with the optional use of a channel ratio test, can be applied to demarcate foreground shadow regions in indoor scenes illuminated by a fixed incandescent source of light. A combined approach, involving the CM working in tandem with a traditional motion cue based segmentation method, has also been constructed. In the combined approach, the CM is applied to segregate the foreground shaded regions in a current frame based on a binary mask generated using a standard background subtraction process (BSP). Various popular outlier detection strategies have been investigated to assess their suitability for automatically generating a threshold, which is required to develop a binary mask from a difference frame, the outcome of the BSP. To evaluate the full scope of the pixel labelling capabilities of the CM and to estimate the associated time constraints, the model is deployed for foreground scene segmentation in recorded real-life video streams. The observations made validate the satisfactory performance of the model in most cases. In the second part of the thesis depth-based cues have been exploited to perform the task of foreground scene segmentation. An active structured-light based depth-estimating arrangement has been modelled in the thesis; the choice of modelling an active system over a passive stereovision one has been made to alleviate some of the difficulties associated with the classical correspondence problem. The model developed not only facilitates use of the set-up but also makes possible a method to increase the working volume of the system without explicitly encoding the projected structured pattern.
Finally, it is explained how scene segmentation can be accomplished based solely on the structured-pattern disparity information, without generating explicit depth maps. To de-noise the difference frames generated using the developed method, two median filtering schemes have been implemented. The working of one of the schemes is advocated for practical use and is described in terms of discrete morphological operators, thus facilitating hardware realisation of the method to speed up the de-noising process.
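One outlier-detection strategy of the kind investigated for automatic thresholding can be sketched with a median/median-absolute-deviation rule on the difference frame; the constant k and the fallback for a zero MAD are conventional assumptions, not values taken from the thesis:

```python
def mad_threshold(diff_values, k=3.0):
    """Automatic threshold for a background-subtraction difference frame.

    Pixels whose difference value exceeds median + k * MAD are treated
    as outliers, i.e. foreground. Uses the upper median for even-length
    input; falls back to a unit deviation when the MAD is zero.
    """
    values = sorted(diff_values)
    n = len(values)
    median = values[n // 2]
    mad = sorted(abs(v - median) for v in values)[n // 2]
    return median + k * (mad if mad else 1.0)

def foreground_mask(diff_values, k=3.0):
    """Binary mask: True where the difference value is an outlier."""
    t = mad_threshold(diff_values, k)
    return [v > t for v in diff_values]
```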
APA, Harvard, Vancouver, ISO, and other styles
14

Möller, Sebastian. "Image Segmentation and Target Tracking using Computer Vision." Thesis, Linköpings universitet, Datorseende, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-68061.

Full text
Abstract:
In this master's thesis the possibility of detecting and tracking objects in multispectral infrared video sequences is investigated. The current method, which uses fixed-size rectangles, has significant disadvantages. These disadvantages are addressed by using image segmentation to estimate the shape of the object. The result of the image segmentation is used to determine the infrared contrast of the object. Our results show that some objects give very good segmentation and tracking as well as shape detection. The objects that perform best are the flares and countermeasures, but helicopters seen from the side, with significant movement, are also better detected with our method. The motion of the object is very important, since movement is the main component in successful shape detection; this is because helicopters are much colder than flares and engines. Detecting the presence and position of moving objects is easier and can be done quite successfully even with helicopters, but using structure tensors we can also detect the presence and estimate the position of stationary objects.
In this master's thesis the possibilities of detecting and tracking objects of interest in multispectral infrared video sequences are investigated. The current method, which uses rectangles of fixed size, has its drawbacks. These drawbacks are addressed using image segmentation to estimate the shape of the desired targets. Beyond detection and tracking, we also try to find the shape and contour of objects of interest so that the more exact fit can be used in contrast calculations. This segmented contour replaces the old fixed rectangles previously used to compute the intensity contrast of objects at infrared wavelengths. The results presented show that for some objects, such as countermeasures and flares, it is easier to obtain a good contour and tracking than it is for helicopters, which were another desired target type. The difficulties that arise with helicopters are largely due to them being much cooler, which means that parts of the helicopter can be completely hidden in the noise from the image sensor. To compensate for this, methods are used that assume the object moves considerably in the video, so that motion can be used as a detection parameter. This gives good results for the video sequences where the target moves a lot relative to its size.
APA, Harvard, Vancouver, ISO, and other styles
15

Samuelsson, Emil. "Classification of skin pixels in images : Using feature recognition and threshold segmentation." Thesis, Umeå universitet, Institutionen för datavetenskap, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-155400.

Full text
Abstract:
The purpose of this report is to investigate and answer the research question: how can current skin segmentation thresholding methods be improved in terms of precision, accuracy, and efficiency by using feature recognition, pre- and post-processing? In this work, a novel algorithm is presented for classification of skin pixels in images. Different pre-processing methods were evaluated to improve the overall performance of the algorithm. Mainly, image smoothing and histogram equalization were tested. Using a Gaussian kernel and contrast limited adaptive histogram equalization (CLAHE) was found to give the best result. A face recognition technique based on learned face features was used to identify a skin color range for each image. Threshold segmentation was then used, based on the obtained skin color range, to extract a skin map for each image. The skin maps were improved by testing a morphology method called closing and by using contour detection to eliminate large false skin structures within skin regions. The skin maps were then evaluated by calculating the precision, recall, accuracy, and F-measure using a ground-truth dataset called Pratheepan. This novel approach was compared to previous work in the field and obtained considerably higher results. Thus, the algorithm is an improvement on previous work within the field.
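The threshold-segmentation step can be sketched as a per-channel range test. In the thesis the skin color range is derived per image from detected face features; the bounds used below are purely illustrative:

```python
def skin_mask(pixels, lower, upper):
    """Threshold segmentation of skin pixels given a per-channel range.

    `pixels` is a list of (R, G, B) tuples; `lower`/`upper` delimit the
    skin color range. A pixel is skin iff every channel lies in range.
    """
    return [all(lo <= ch <= hi for ch, lo, hi in zip(p, lower, upper))
            for p in pixels]
```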
APA, Harvard, Vancouver, ISO, and other styles
16

Isaac, Andreas. "Evaluation of word segmentation algorithms applied on handwritten text." Thesis, Uppsala universitet, Avdelningen för visuell information och interaktion, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-414609.

Full text
Abstract:
The aim of this thesis is to build and evaluate how a word segmentation algorithm performs when extracting words from historical handwritten documents. Since historical documents often contain background noise, the aim is also to investigate whether applying a background removal algorithm affects the final result. Three different types of historical handwritten documents are used to compare the output of two different word segmentation algorithms. The results indicate that the background removal algorithm increases the accuracy obtained with the word segmentation algorithm. The newly developed word segmentation algorithm successfully manages to extract a majority of the words, while the pre-existing algorithm has difficulties with some documents. A conclusion made was that the type of document plays the key role in whether a poor result is obtained or not. Hence, different algorithms may be needed rather than using one for all types of documents.
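A common baseline for word segmentation of a handwritten text line, offered here only as a hedged illustration of the task (the thesis does not specify this exact algorithm), is a projection profile: count dark pixels per column and split at sufficiently long empty runs:

```python
def word_spans(ink_profile, gap=2):
    """Extract word spans from a column ink profile of a text line.

    `ink_profile[i]` is the number of dark pixels in column i; runs of
    at least `gap` empty columns separate words. Returns half-open
    (start, end) column spans.
    """
    spans, start, empty = [], None, 0
    for i, v in enumerate(ink_profile):
        if v > 0:
            if start is None:
                start = i
            empty = 0
        elif start is not None:
            empty += 1
            if empty >= gap:
                spans.append((start, i - empty + 1))
                start, empty = None, 0
    if start is not None:
        spans.append((start, len(ink_profile) - empty))
    return spans
```

Background noise raises spurious column counts, which is one way to see why the background removal step above improves segmentation accuracy.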
APA, Harvard, Vancouver, ISO, and other styles
17

Yusupujiang, Zulipiye. "Using Unsupervised Morphological Segmentation to Improve Dependency Parsing for Morphologically Rich Languages." Thesis, Uppsala universitet, Institutionen för lingvistik och filologi, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-354459.

Full text
Abstract:
In this thesis, we mainly investigate the influence of using unsupervised morphological segmentation as features for the dependency parsing of morphologically rich languages such as Finnish, Estonian, Hungarian, Turkish, Uyghur, and Kazakh. Studying the morphology of these languages is of great importance for their dependency parsing, since dependency relations in a sentence in these languages mostly rely on morphemes rather than word order. In order to investigate our research questions, we have conducted a large number of parsing experiments with both MaltParser and UDPipe. We have generated the supervised morphology and the predicted POS tags from UDPipe, obtained the unsupervised morphological segmentation from Morfessor, converted the unsupervised morphological segmentation into features, and added them to the UD treebanks of each language. We have also investigated different ways of converting the unsupervised segmentation into features and studied the result of each method. We report the Labeled Attachment Score (LAS) for all of our experimental results. The main finding of this study is that dependency parsing of some languages can be improved simply by providing unsupervised morphology during parsing if there is no manually annotated or supervised morphology available for such languages. After adding unsupervised morphological information with predicted POS tags, we obtain improvements of 4.9%, 6.0%, 8.7%, 3.3%, 3.7%, and 12.0% on the test sets of Turkish, Uyghur, Kazakh, Finnish, Estonian, and Hungarian respectively with MaltParser, and parsing accuracies improve by 2.7%, 4.1%, 8.2%, 2.4%, 1.6%, and 2.6% on the test sets of Turkish, Uyghur, Kazakh, Finnish, Estonian, and Hungarian respectively with UDPipe, compared with models that do not use any morphological information during parsing.
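One plausible way to convert an unsupervised segmentation into parser features, offered as an illustration since the thesis compares several conversions, is to emit one feature per morph in a CoNLL-U-style FEATS string; the feature names and the Turkish example word are assumptions of this sketch:

```python
def segmentation_features(word, segments):
    """Turn a morphological segmentation into a parser feature string.

    Produces a CoNLL-U-style FEATS string listing each morph and its
    position, e.g. "Morph1=ev|Morph2=ler|Morph3=den".
    """
    assert "".join(segments) == word, "segments must concatenate to the word"
    feats = ["Morph{}={}".format(i + 1, m) for i, m in enumerate(segments)]
    return "|".join(feats)
```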
APA, Harvard, Vancouver, ISO, and other styles
18

Park, YoungAh. "Work and Non-work Boundary Management: Using Communication and Information Technology." Bowling Green State University / OhioLINK, 2009. http://rave.ohiolink.edu/etdc/view?acc_num=bgsu1254771170.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Sandelin, Fredrik. "Semantic and Instance Segmentation of Room Features in Floor Plans using Mask R-CNN." Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-393348.

Full text
Abstract:
Machine learning techniques within computer vision are rapidly improving computers' high-level understanding of images, revealing new opportunities to accomplish tasks that previously required manual intervention from humans. This paper aims to study where the current machine learning state of the art is when it comes to parsing and segmenting bitmap images of floor plans. To assess this, the paper evaluates one of the state-of-the-art models within instance segmentation, Mask R-CNN, on a size-limited and challenging floor plan dataset. The model handles both detecting objects and generating a high-quality segmentation map for each object, allowing for complete image segmentation using only a single network. Additionally, in order to extend the dataset, synthetic data generation was explored, and results indicate that it aids the network with floor plan generalization. The network is evaluated on both semantic and instance segmentation metrics, and results show that it yields good, almost completely segmented floor plans on smaller blueprints with little noise, while yielding decent but not completely segmented floor plans on large blueprints with a large amount of noise. Based on the results, and given that they were achieved on a limited dataset, Mask R-CNN shows potential in both accuracy and robustness for floor plan segmentation, either as a standalone network or as part of a pipeline of several methods and techniques.
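For reference, the semantic-segmentation side of such an evaluation typically rests on overlap metrics; a minimal intersection-over-union sketch for flattened binary masks (the exact metrics used in the thesis are not reproduced here):

```python
def iou(mask_a, mask_b):
    """Intersection-over-union for two binary masks (flat lists of 0/1).

    Returns 1.0 for two empty masks by convention.
    """
    inter = sum(a and b for a, b in zip(mask_a, mask_b))
    union = sum(a or b for a, b in zip(mask_a, mask_b))
    return inter / union if union else 1.0
```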
APA, Harvard, Vancouver, ISO, and other styles
20

Hedblom, Anders. "Blood vessel segmentation for neck and head computed tomography angiography." Thesis, Linköpings universitet, Medie- och Informationsteknik, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-101988.

Full text
Abstract:
This thesis presents tests and discussions evaluating different methods for automatic or semi-automatic blood vessel segmentation on single CT data volumes of the head and neck. The two approaches that come closest to accomplishing this are a bone-subtracting registration process and a more advanced region growing combined with morphology.
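The region-growing component mentioned above can be sketched as a flood fill from a seed voxel; the 4-connectivity and fixed intensity tolerance here are illustrative simplifications, not details taken from the thesis (which additionally combines growing with morphology):

```python
from collections import deque

def region_grow(image, seed, tol):
    """Grow a region from `seed`, accepting 4-connected pixels whose
    intensity differs from the seed intensity by at most `tol`."""
    rows, cols = len(image), len(image[0])
    seed_val = image[seed[0]][seed[1]]
    region = {seed}
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and (nr, nc) not in region
                    and abs(image[nr][nc] - seed_val) <= tol):
                region.add((nr, nc))
                queue.append((nr, nc))
    return region
```

For CT volumes the same loop extends to 6-connected voxels; the breadth-first queue keeps the growth front explicit.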
APA, Harvard, Vancouver, ISO, and other styles
21

Mattsson, Per, and Andreas Eriksson. "Segmentation of Carotid Arteries from 3D and 4D Ultrasound Images." Thesis, Linköping University, Department of Electrical Engineering, 2002. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-1141.

Full text
Abstract:

This thesis presents a 3D semi-automatic segmentation technique for extracting the lumen surface of the Carotid arteries including the bifurcation from 3D and 4D ultrasound examinations.

Ultrasound images are inherently noisy. Therefore, to aid the inspection of the acquired data an adaptive edge preserving filtering technique is used to reduce the general high noise level. The segmentation process starts with edge detection with a recursive and separable 3D Monga-Deriche-Canny operator. To reduce the computation time needed for the segmentation process, a seeded region growing technique is used to make an initial model of the artery. The final segmentation is based on the inflatable balloon model, which deforms the initial model to fit the ultrasound data. The balloon model is implemented with the finite element method.

The segmentation technique produces 3D models that are intended as pre-planning tools for surgeons. The results from a healthy person are satisfactory and the results from a patient with stenosis seem rather promising. A novel 4D model of wall motion of the Carotid vessels has also been obtained. From this model, 3D compliance measures can easily be obtained.

APA, Harvard, Vancouver, ISO, and other styles
22

Spina, Sandro. "Graph-based segmentation and scene understanding for context-free point clouds." Thesis, University of Warwick, 2015. http://wrap.warwick.ac.uk/76651/.

Full text
Abstract:
The acquisition of 3D point clouds representing the surface structure of real-world scenes has become common practice in many areas including architecture, cultural heritage and urban planning. Improvements in sample acquisition rates and precision are contributing to an increase in size and quality of point cloud data. The management of these large volumes of data is quickly becoming a challenge, leading to the design of algorithms intended to analyse and decrease the complexity of this data. Point cloud segmentation algorithms partition point clouds for better management, and scene understanding algorithms identify the components of a scene in the presence of considerable clutter and noise. In many cases, segmentation algorithms operate within the remit of a specific context, wherein their effectiveness is measured. Similarly, scene understanding algorithms depend on specific scene properties and fail to identify objects in a number of situations. This work addresses this lack of generality in current segmentation and scene understanding processes, and proposes methods for point clouds acquired using diverse scanning technologies in a wide spectrum of contexts. The approach to segmentation proposed by this work partitions a point cloud with minimal information, abstracting the data into a set of connected segment primitives to support efficient manipulation. A graph-based query mechanism is used to express further relations between segments and provide the building blocks for scene understanding. The presented method for scene understanding is agnostic of scene specific context and supports both supervised and unsupervised approaches. In the former, a graph-based object descriptor is derived from a training process and used in object identification. The latter approach applies pattern matching to identify regular structures. 
A novel external memory algorithm based on a hybrid spatial subdivision technique is introduced to handle very large point clouds and accelerate the computation of the k-nearest neighbour function. Segmentation has been successfully applied to extract segments representing geographic landmarks and architectural features from a variety of point clouds, whereas scene understanding has been successfully applied to indoor scenes on which other methods fail. The overall results demonstrate that the context-agnostic methods presented in this work can be successfully employed to manage the complexity of ever-growing repositories.
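The k-nearest-neighbour function whose computation the external-memory algorithm accelerates can be stated, in its naive in-memory form, as a linear scan; this is exactly the cost that a spatial subdivision (grid, kd-tree, or the hybrid structure described above) avoids:

```python
import heapq
import math

def k_nearest(points, query, k):
    """Naive k-nearest neighbours in 3-D: a full O(n) scan per query.
    Real point clouds need a spatial index to make this tractable."""
    return heapq.nsmallest(k, points, key=lambda p: math.dist(p, query))
```

`heapq.nsmallest` keeps only k candidates at a time, so memory stays O(k) even for large inputs; the scan itself is the bottleneck the thesis addresses.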
APA, Harvard, Vancouver, ISO, and other styles
23

Shen, Jiannan. "Application of image segmentation in inspection of welding : Practical research in MATLAB." Thesis, Högskolan i Borås, Institutionen Handels- och IT-högskolan, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:hb:diva-16871.

Full text
Abstract:
As one of the main methods in modern steel production, welding plays a very important role in the national economy and has been widely applied in many fields such as aviation, petroleum, chemicals, electricity and railways. The craft of welding can be improved in terms of welding tools, welding technology and welding inspection. So far, however, welding inspection has remained a very complicated problem. It is therefore important to effectively detect internal welding defects in welded structural parts, a problem that is worth studying further. In this paper, the main task is research into the application of image segmentation in welding inspection. Image enhancement and image segmentation techniques are introduced, including image conversion and noise removal as well as thresholding, clustering, edge detection and region extraction. Based on the MATLAB platform, the work focuses on the application of image segmentation in radiographic inspection of steel structures and examines three different segmentation methods: thresholding, clustering and edge detection. Application of image segmentation is more competitive than image enhancement because:
1. Grey-scale based FCM clustering performs well: it groups pixels by grey-value level and can thereby show the position of defects within the grey-value hierarchy.
2. Canny edge detection is also fast and performs well, giving detailed information around edges and defects with smooth lines.
3. Image enhancement can only improve image quality, including clarity and contrast, and gives no further help in detecting welding defects.
This paper arises from the actual needs of industrial work and proves practical to some extent.
Moreover, it points out directions for further improvement, including identification of welding defects based on neural networks and an improved clustering algorithm based on genetic ideas.
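As an illustration of the threshold-based segmentation family the thesis examines (its experiments are in MATLAB; Otsu's method shown here is a standard global-threshold choice, not necessarily the one used):

```python
def otsu_threshold(gray):
    """Return the 8-bit threshold maximising between-class variance
    (classic Otsu global thresholding over a list of grey values)."""
    hist = [0] * 256
    for v in gray:
        hist[v] += 1
    total = len(gray)
    total_sum = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w0 = sum0 = 0
    for t in range(256):
        w0 += hist[t]                 # pixels in the background class
        if w0 == 0:
            continue
        w1 = total - w0               # pixels in the foreground class
        if w1 == 0:
            break
        sum0 += t * hist[t]
        mu0 = sum0 / w0
        mu1 = (total_sum - sum0) / w1
        var = w0 * w1 * (mu0 - mu1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t
```

Pixels above the returned threshold would be labelled as one class (e.g. potential defect regions in a radiograph), the rest as background.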
Program: Magisterutbildning i informatik
APA, Harvard, Vancouver, ISO, and other styles
24

Wang, Wei. "Image Segmentation Using Deep Learning Regulated by Shape Context." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-227261.

Full text
Abstract:
In recent years, image segmentation using deep neural networks has made great progress. However, reaching a good result when training with a small amount of data remains a challenge. To find a good way to improve segmentation accuracy with limited datasets, we implemented a new automatic chest radiograph segmentation experiment, based on preliminary work by Chunliang, using a deep learning neural network combined with shape context information. In this process, the datasets were first put into the original U-net. After this preliminary step, the segmented images were repaired through a new network incorporating shape context information. In this experiment, we created a new network structure by rebuilding the U-net into a 2-input structure and refined the processing pipeline. In the proposed pipeline, the datasets and shape context were trained together through the new network model by iteration. The proposed method was evaluated on 247 posterior-anterior chest radiographs from public datasets, using n-fold cross-validation. The outcome shows that, compared to the original U-net, the proposed pipeline reaches higher accuracy when trained with limited datasets. Here "limited" datasets refer to 1-20 images in the medical imaging field. A better outcome with higher accuracy could be reached if the second structure is further refined and the shape context generator's parameters are fine-tuned in the future.
APA, Harvard, Vancouver, ISO, and other styles
25

Rydell, Joakim. "Perception-based second generation image coding using variable resolution." Thesis, Linköping University, Department of Electrical Engineering, 2003. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-1675.

Full text
Abstract:

In ordinary image coding, the same image quality is obtained in all parts of an image. If it is known that there is only one viewer, and where in the image that viewer is focusing, the quality can be degraded in other parts of the image without incurring any perceptible coding artefacts. This master's thesis presents a coding scheme where an image is segmented into homogeneous regions which are then separately coded, and where knowledge about the user's focus point is used to obtain further data reduction. It is concluded that the coding performance does not quite reach the levels attained when applying focus-based quality degradation to coding schemes not based on segmentation.

APA, Harvard, Vancouver, ISO, and other styles
26

Lareau, David. "Haptic Image Exploration." Thesis, Université d'Ottawa / University of Ottawa, 2012. http://hdl.handle.net/10393/20556.

Full text
Abstract:
The haptic exploration of 2-D images is a challenging problem in computer haptics. Research on the topic has primarily been focused on the exploration of maps and curves. This thesis describes the design and implementation of a system for the haptic exploration of photographs. The system builds on various research directions related to assistive technology, computer haptics, and image segmentation. An object-level segmentation hierarchy is generated from the source photograph to be rendered haptically as a contour image at multiple levels-of-detail. A tool for the authoring of object-level hierarchies was developed, as well as an innovative type of user interaction by region selection for accurate and efficient image segmentation. An objective benchmark measuring how the new method compares with other interactive image segmentation algorithms shows that our region-selection interaction is a viable alternative to marker-based interaction. The hierarchy authoring tool combined with precise algorithms for image segmentation can build contour images of the quality necessary for the images to be understood by touch with our system. The system was evaluated with a user study of 24 sighted participants divided into different groups. The first part of the study had participants explore images using haptics and answer questions about them. The second part of the study asked the participants to identify images visually after haptic exploration. Results show that using a segmentation hierarchy supporting multiple levels-of-detail of the same image is beneficial to haptic exploration. As the system gains maturity, it is our goal to make it available to blind users.
APA, Harvard, Vancouver, ISO, and other styles
27

Kok, Emre Hamit. "Developing An Integrated System For Semi-automated Segmentation Of Remotely Sensed Imagery." Master's thesis, METU, 2005. http://etd.lib.metu.edu.tr/upload/12606107/index.pdf.

Full text
Abstract:
Classification of agricultural fields using remote sensing images is one of the most popular methods used for crop mapping. Most recent classification techniques are based on a per-field approach that assigns a crop label to each field. Commonly, spatial vector data is used for the boundaries of the fields, and the classification is applied within them. However, crop variation within the fields is a very common problem. In this case, the existing field boundaries may be insufficient for performing field-based classification, and image segmentation therefore needs to be employed to detect homogeneous segments within the fields. This study proposes a field-based approach to segment the crop fields in an image within an integrated environment of Geographic Information Systems (GIS) and Remote Sensing. In this method, each field is processed separately and the segments within each field are detected. First, edge detection is applied to the images, and the detected edges are vectorized to generate straight line segments. Next, these line segments are correlated with the existing field boundaries using perceptual grouping techniques to form closed regions in the image. The closed regions represent the segments, each of which contains a distinct crop type. To implement the proposed methodology, a software tool was developed. The implementation was carried out using 10-meter spatial resolution SPOT 5 and 20-meter spatial resolution SPOT 4 satellite images covering a part of the Karacabey Plain, Turkey. Evaluations of the obtained results are presented for different band combinations of the images.
APA, Harvard, Vancouver, ISO, and other styles
28

Bilgin, Arda. "Selection And Fusion Of Multiple Stereo Algorithms For Accurate Disparity Segmentation." Master's thesis, METU, 2008. http://etd.lib.metu.edu.tr/upload/2/12610133/index.pdf.

Full text
Abstract:
Fusion of multiple stereo algorithms is performed in order to obtain accurate disparity segmentation. Reliable disparity map of real-time stereo images is estimated and disparity segmentation is performed for object detection purpose. First, stereo algorithms which have high performance in real-time applications are chosen among the algorithms in the literature and three of them are implemented. Then, the results of these algorithms are fused to gain better performance in disparity estimation. In fusion process, if a pixel has the same disparity value in all algorithms, that disparity value is assigned to the pixel. Other pixels are labelled as unknown disparity. Then, unknown disparity values are estimated by a refinement procedure where neighbourhood disparity information is used. Finally, the resultant disparity map is segmented by using mean shift segmentation. The proposed method is tested in three different stereo data sets and several real stereo pairs. The experimental results indicate an improvement for the stereo analysis performance by the usage of fusion process and refinement procedure. Furthermore, disparity segmentation is realized successfully by using mean shift segmentation for detecting objects at different depth levels.
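The consensus-fusion rule described above — keep a disparity only where all algorithms agree, then estimate the unknowns from neighbourhood disparity information — can be sketched as follows; the single-pass 4-neighbour vote in the refinement is a simplification of the thesis's procedure:

```python
from collections import Counter

def fuse_disparities(maps):
    """Pixel-wise consensus fusion: keep a disparity only where all input
    maps agree; mark the remaining pixels as unknown (None)."""
    fused = []
    for rows in zip(*maps):
        fused.append([vals[0] if all(v == vals[0] for v in vals) else None
                      for vals in zip(*rows)])
    return fused

def refine(disp):
    """Fill unknown pixels with the most common known disparity among
    their 4-neighbours (one pass of a simple neighbourhood vote)."""
    rows, cols = len(disp), len(disp[0])
    out = [row[:] for row in disp]
    for r in range(rows):
        for c in range(cols):
            if disp[r][c] is None:
                neigh = [disp[nr][nc]
                         for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1))
                         if 0 <= nr < rows and 0 <= nc < cols
                         and disp[nr][nc] is not None]
                if neigh:
                    out[r][c] = Counter(neigh).most_common(1)[0][0]
    return out
```

The refined map would then be handed to a segmentation stage (mean shift in the thesis) to separate objects at different depth levels.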
APA, Harvard, Vancouver, ISO, and other styles
29

Befus, Chad R., and University of Lethbridge Faculty of Arts and Science. "Design and evaluation of dynamic feature-based segmentation on music." Thesis, Lethbridge, Alta. : University of Lethbridge, Dept. of Mathematics and Computer Science, c2010, 2010. http://hdl.handle.net/10133/2531.

Full text
Abstract:
Segmentation is an indispensable step in the field of Music Information Retrieval (MIR). Segmentation refers to the splitting of a music piece into significant sections. Classically there has been a great deal of attention focused on various issues of segmentation, such as: perceptual segmentation vs. computational segmentation, segmentation evaluations, segmentation algorithms, etc. In this thesis, we conduct a series of perceptual experiments which challenge several of the traditional assumptions with respect to segmentation. Identifying some deficiencies in the current segmentation evaluation methods, we present a novel standardized evaluation approach which considers segmentation as a supportive step towards feature extraction in the MIR process. Furthermore, we propose a simple but effective segmentation algorithm and evaluate it utilizing our evaluation approach.
viii, 94 leaves : ill. ; 29 cm
APA, Harvard, Vancouver, ISO, and other styles
30

Hovda, Sigve. "New Doppler-Based Imaging Methods in Echocardiography with Applications in Blood/Tissue Segmentation." Doctoral thesis, Norwegian University of Science and Technology, Faculty of Information Technology, Mathematics and Electrical Engineering, 2007. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-1500.

Full text
Abstract:

Part 1: The bandwidth of the ultrasound Doppler signal is proposed as a classification function of blood and tissue signal in transthoracic echocardiography of the left ventricle. The new echocardiographic mode, Bandwidth Imaging, utilizes the difference in motion between tissue and blood. Specifically, Bandwidth Imaging is the absolute value of the normalized autocorrelation function with lag one. Bandwidth Imaging is therefore linearly dependent on the square of the bandwidth estimated from the Doppler spectrum. A 2-tap Finite Impulse Response high-pass filter is used prior to the autocorrelation calculation to account for the high level of DC clutter noise in the apical regions. Reasonable pulse strategies are discussed and several images of Bandwidth Imaging are included. An in vivo experiment is presented, where the apparent error rate of Bandwidth Imaging is compared with the apparent error rate of Second-Harmonic Imaging on 15 healthy men. The apparent error rate is calculated from the signal from all myocardial wall segments defined in [Cer02]. The ground truth for the position of the myocardial wall segments is determined by manual tracing of the endocardium in Second-Harmonic Imaging. A hypothesis test of Bandwidth Imaging having a lower apparent error rate than Second-Harmonic Imaging is proved for a p-value of 0.94 in 3 segments in end diastole and 1 segment in end systole on non-averaged data. When data is averaged by a structural element of 5 radial, 3 lateral and 4 temporal samples, the number of segments increases to 9 in end diastole and to 6 in end systole. These segments are mostly located in the apical and anterior wall regions. Further, a global measure GM is defined as the proportion of misclassified area in the regions close to the endocardium in an image. The hypothesis test of Second-Harmonic Imaging having lower GM than Bandwidth Imaging is proved for a p-value of 0.94 in the four-chamber view in end systole for any type of averaging. On the other hand, the hypothesis test of Bandwidth Imaging having lower GM than Second-Harmonic Imaging is proved for a p-value of 0.94 in the long-axis view in end diastole for any type of averaging. Moreover, if images are averaged by the above structural element, the test indicates that Bandwidth Imaging has a lower apparent error rate than Second-Harmonic Imaging in all views and times (end diastole or end systole), except in the four-chamber view in end systole. This experiment indicates that Bandwidth Imaging can supply additional information for automatic border detection routines on the endocardium.
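The Part 1 measure — the absolute value of the lag-one normalized autocorrelation after a 2-tap high-pass filter — can be sketched for a single sample volume; the first-difference filter, the estimator normalizations, and the function name are illustrative choices, not the thesis's exact implementation:

```python
import cmath

def bandwidth_image_value(iq):
    """|R(1)/R(0)| of a slow-time IQ ensemble from one sample volume:
    high for slowly de-correlating (narrowband) tissue signal, lower
    for broadband blood signal."""
    # 2-tap FIR high-pass (first difference) to suppress DC clutter
    hp = [b - a for a, b in zip(iq, iq[1:])]
    if len(hp) < 2:
        return 0.0
    r0 = sum(abs(x) ** 2 for x in hp) / len(hp)
    r1 = sum(y * x.conjugate() for x, y in zip(hp, hp[1:])) / (len(hp) - 1)
    return abs(r1) / r0 if r0 else 0.0

# A pure tone (perfectly narrowband, tissue-like motion) yields 1.0
tone = [cmath.exp(0.1j * n) for n in range(8)]
```

Thresholding this value per sample volume is then one way to turn the mode into a blood/tissue classifier.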

Part 2: Knowledge Based Imaging is suggested as a method to distinguish blood from tissue signal in transthoracic echocardiography. This method utilizes the maximum likelihood function to classify blood and tissue signal. Knowledge Based Imaging uses the same pulse strategy as Bandwidth Imaging, but is significantly more difficult to implement. Therefore, Knowledge Based Imaging and Bandwidth Imaging are compared with Fundamental Imaging by a computer simulation based on a parametric model of the signal. The apparent error rate is calculated for any reasonable tissue to blood signal ratio, tissue to white noise ratio and clutter to white noise ratio. Fundamental Imaging classifies well when the tissue to blood signal ratio is high and the tissue to white noise ratio is higher than the clutter to white noise ratio. Knowledge Based Imaging also classifies well in this environment. In addition, Knowledge Based Imaging classifies well whenever the blood to white noise ratio is above 30 dB. This is the case even when the clutter to white noise ratio is higher than the tissue to white noise ratio and the tissue to blood signal ratio is zero. Bandwidth Imaging performs similarly to Knowledge Based Imaging, but the blood to white noise ratio has to be 20 dB higher for a reasonable classification. The high-pass filter coefficient used prior to the Bandwidth Imaging calculation is also examined through the simulations. Some images of different parameter settings of Knowledge Based Imaging are visually compared with Second-Harmonic Imaging, Fundamental Imaging and Bandwidth Imaging. Changing the parameters of Knowledge Based Imaging can make the image look similar to both Bandwidth Imaging and Fundamental Imaging.

APA, Harvard, Vancouver, ISO, and other styles
31

Rogers, Wendy Laurel. "A Mahalanobis-distance-based image segmentation error measure with applications in automated microscopy /." Thesis, McGill University, 1985. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=66025.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Holm, Per. "Automatic landmark detection on Trochanter Minor in x-ray images." Thesis, Linköping University, Department of Electrical Engineering, 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-2791.

Full text
Abstract:

During pre-operative planning for hip replacement, the choice of prosthesis can be aided by measurements in x-ray images of the hip. Some measurements can be done automatically, but this requires robust and precise image processing algorithms which can detect anatomical features. The Trochanter minor is an important landmark on the femoral shaft. In this thesis, three different image processing algorithms are explained and tested for automatic landmark detection on the Trochanter minor. The algorithms handled are Active Shape Models, a shortest path algorithm and a segmentation technique based on cumulated cost maps. The results indicate that cumulated cost maps are an effective tool for rough segmentation of the Trochanter minor. A snake algorithm was then applied which could find the edge of the Trochanter minor in all images used in the test. The edge can be used to locate a curvature extremum which can serve as a landmark point.
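The cumulated-cost-map idea can be illustrated with the standard dynamic-programming recurrence over image rows, where each cell accumulates the cheapest of the three cells above it; this is a generic minimum-cost-path formulation, not the thesis's exact algorithm:

```python
def cumulated_cost_map(cost):
    """Dynamic-programming cumulated cost over a 2-D grid: each cell adds
    the cheapest of its (up to) three upper neighbours, so the bottom row
    holds the minimal cost of any top-to-bottom path ending there."""
    acc = [cost[0][:]]
    for row in cost[1:]:
        prev = acc[-1]
        acc.append([row[c] + min(prev[max(c - 1, 0):c + 2])
                    for c in range(len(row))])
    return acc
```

Backtracking from the cheapest cell in the last row recovers the path itself, which is how such maps support edge or landmark delineation.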

APA, Harvard, Vancouver, ISO, and other styles
33

Ouji, Asma. "Segmentation et classification dans les images de documents numérisés." Phd thesis, INSA de Lyon, 2012. http://tel.archives-ouvertes.fr/tel-00749933.

Full text
Abstract:
The work in this thesis was carried out in the context of analysing and processing images of printed documents in order to automate the creation of press reviews. The images output by the scanner are processed without any a priori information or human intervention. To characterise them, we present an analysis system for composite colour documents that performs a segmentation into colorimetrically homogeneous zones and adapts the text-extraction algorithms to the local characteristics of each zone. The colorimetric and textual information provided by this system feeds a physical segmentation method for digitised press pages. The blocks resulting from this decomposition are classified, which makes it possible, among other things, to detect advertising zones. Continuing and extending the classification work carried out in the first part, we present a new generic classification and ranking engine that is fast and easy to use. This approach stands apart from the vast majority of existing methods, which rely on a priori knowledge about the data and depend on abstract parameters that are difficult for the user to determine. From colorimetric characterisation to article tracking via advertisement detection, all the approaches presented were combined to build an application for content-based classification of digitised press documents.
APA, Harvard, Vancouver, ISO, and other styles
34

Savino, Alessandra L. "Insuring the Success of Microfinance: The Application of Cluster Analysis to Conduct Customer Segmentation on Microcredit Borrowers." Scholarship @ Claremont, 2016. http://scholarship.claremont.edu/cmc_theses/1282.

Full text
Abstract:
Microfinance aims to develop a financial ecosystem that serves the various financial needs of the poor, in hopes of providing them with the tools to sustainably elevate their economic and social well-being. This paper observes the evolution of financial inclusion over the past 40 years. Although considerable strides have been made to increase the impact of microfinance services, inherent challenges continue to plague the success of the industry. These fundamental deficiencies in microfinance institutions (MFIs) include the inability to scale, operate profitably and contribute to their clients' economic and social betterment. This paper observes two fundamental changes that need to be made in order to ensure the longevity and success of the industry. First, the industry needs to better integrate the use of innovative technology, which will allow organizations to be increasingly dynamic and targeted in their implementation. Businesses are quickly evolving to be data-centric to increase their profitability and customer base; if MFIs were able to better understand their clients, they would be able to develop product offerings, delivery mechanisms and outreach efforts that are specifically focused on the needs of their target markets. The second fundamental change essential to success is that microfinance services need to be more fully integrated into the formal financial sector, and governments need to create an environment that encourages businesses and financial institutions to develop products to serve the poor. Cluster analysis aims to identify natural shapes within high-dimensional data and can be applied to numerous fields. As businesses have become more adept at keeping track of their customer data, a common application has been to conduct customer segmentation to better understand and serve their clients.
This paper conducts clustering analysis on borrower data from The Lending Club, an online market place for micro-credit in order to better understand the various customer segments.
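The kind of cluster analysis applied to the borrower data can be illustrated with a bare-bones Lloyd's k-means on a single scalar feature (the 1-D simplification, initialisation scheme and variable names are ours, not the thesis's method):

```python
def kmeans_1d(values, k, iters=20):
    """Plain Lloyd's k-means on scalar features (e.g. loan amounts).
    Assumes len(values) >= k. Returns final centroids and clusters."""
    # Spread the initial centroids across the sorted value range
    centroids = sorted(values)[::max(len(values) // k, 1)][:k]
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        # Assignment step: each value joins its nearest centroid
        for v in values:
            clusters[min(range(k), key=lambda i: abs(v - centroids[i]))].append(v)
        # Update step: move each centroid to its cluster mean
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters
```

Real customer segmentation runs the same two steps on multi-dimensional feature vectors, usually after standardising each feature.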
APA, Harvard, Vancouver, ISO, and other styles
35

Sun, Q. "Strategic market planning in China : a means-end chain approach to market segmentation within the Beijin mobile phone market." Thesis, University of Salford, 2007. http://usir.salford.ac.uk/14902/.

Full text
Abstract:
With a dramatic economic growth rate of 10% per year, China, as one of the Big Emerging Markets, has drawn increasing attention from both academia and industry. Its market potential and growth rate are believed to be the top attraction for global investment. In many sectors, the increasing number of options available to consumers has led to the emergence of a consumer society in China and has further fed the development of variance in consumer behaviour. This has imposed imperatives of consumer research in China, especially market segmentation research, on both foreign multinational companies and indigenous manufacturers, in order i) to identify the unique needs of consumers, ii) to provide more desirable product/service packages, and iii) to communicate brand value via more appropriate messages to targeted consumers.
APA, Harvard, Vancouver, ISO, and other styles
36

Ha, Simon. "Construction industry market segmentation: Foresight of needs and priorities of the urban mining segment." Thesis, Blekinge Tekniska Högskola, Institutionen för maskinteknik, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-1017.

Full text
Abstract:
Context: Current unsustainable practices have resulted in the depletion of natural resources and a prevailing material scarcity. Urban mining has emerged in this context and suggests the "mining" of cities or other sources in urban areas to retrieve valuable resources. This raises the question of what urban mining as a market segment of the construction industry looks like today and in the future. Objective: The thesis sets out to study what firms in the urban mining market segment desire in terms of needs and priorities; furthermore, what could be prioritized in the future (2030), what future scenarios could be expected, and what implications these could have for organizations within the segment and for the construction industry. Method: A foresight methodology was applied as a framework for the research design. Interviews with representatives from 10 firms, including observations of their operations, resulted in a number of mutual needs shared across the urban mining segment. These were prioritized in relative importance based on a questionnaire of 67 respondents representing 44 different firms in Sweden. A combination of these studies and a review of technology trends further enabled the extrapolation of future scenarios. Results: The findings show that firms within the urban mining market segment today prioritize and emphasize needs related to optimization, cost control, safety, and environmental and social care. Needs related to safety and environmental and social care are indicated to remain top priorities as a result of future market circumstances. A holistic and lifecycle approach in urban mining practices was deemed of low priority today but was indicated to grow significantly in relative importance in the future. Conclusion: Technology, urbanization and globalization indicate stricter and more competitive market circumstances in the future, especially related to safety, lifecycle consideration, and environmental and social care.
The research suggests that firms concerned and those operating within the urban mining segment may need to undergo transformational changes in their organization to meet what the market segment expects in the future. Moreover, the findings opens up the possibility for actors and stakeholders concerned with the construction industry to proactively go into a desired future by knowing how the future market could unfold.
Stanford University, ME310: Urban Mining
APA, Harvard, Vancouver, ISO, and other styles
37

Narayan, Chaya. "Polarimetric Stokes Imaging for the Detection of Tumor Margins and Segmentation." University of Akron / OhioLINK, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=akron1386785379.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Jaeger, Garland. "WOMEN AND THEIR “FOOD TIME” AN INVESTIGATION INTO FOOD PURCHASES, PREPARATION, AND CONSUMPTION ATMOSPHERE USING SMARTPHONE SURVEY TECHNOLOGY." DigitalCommons@CalPoly, 2013. https://digitalcommons.calpoly.edu/theses/968.

Full text
Abstract:
Women’s food purchasing and eating habits have been studied in detail, but are still not entirely understood. Prior research has sought to segment the female food shopper market, but it has typically used only demographic characteristics. In this study, fifty females were recruited in San Luis Obispo, CA from March 2012 to May 2012 to keep an electronic food-time diary for one week. By collecting information through surveys distributed using a smartphone application, SurveySwipe, the study investigated the amount of time expended for each meal, as well as the manner in which the meal was prepared or purchased, and the context surrounding the eating situation, over a period of seven days. A segmentation of these female food consumers was then formed in order to demonstrate that by using attitudinal and behavioral data, a unique segmentation scheme may be achieved, different from what would have resulted from using only demographic information. For the data analysis, four principal components analyses were conducted, followed by cluster analyses and then ANOVA and Chi-Square tests. Study participants were segmented into four distinct sets of clusters, or consumer groups. Of the four sets of clusters formed, one was created using solely demographic variables, whereas the other three used “food time” variables comprised of behavioral and attitudinal information. It may be inferred from the results that the behavior of the participants within each cluster was similar regarding a particular variable being tested, while it differed from the behavior of participants in other clusters (regarding the same variable being tested). Specifically, an abundance of key, significant differences was found with the “food time” variables. The study supports the use of variables related to “food time” allocation and the context of the eating situation as they relate to the purchase, preparation, and consumption of food, instead of only demographic attributes.
The results will be useful for food marketers and product developers seeking to understand how food fits into the lives of female consumers with diverse roles and behaviors, in addition to being valuable for segmenting a select market or targeting a particular customer type.
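The cluster analyses described above can be illustrated with a plain k-means (Lloyd's algorithm) sketch; the study itself clustered principal component scores with statistical software, so the data, initialization, and distance choices here are illustrative assumptions:

```python
import numpy as np

def kmeans(X, k, iters=20):
    """Plain k-means (Lloyd's algorithm). Initializes centers from the
    first k points for determinism -- a simplification, not what the
    thesis's statistics package does."""
    centers = X[:k].astype(float).copy()
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        # assign each point to its nearest center (squared Euclidean distance)
        dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = dists.argmin(axis=1)
        # move each center to the mean of its assigned points
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers
```

On well-separated groups of respondents this recovers the grouping regardless of which variables (demographic or "food time") form the feature vectors.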
APA, Harvard, Vancouver, ISO, and other styles
39

González, García Jaime. "Proposal For a Vision-Based Cell Morphology Analysis System." Thesis, Linköping University, Linköping University, Department of Electrical Engineering, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-15872.

Full text
Abstract:

One of the fields where image processing finds application, but which remains an unexplored territory, is the analysis of cell morphology. This master thesis proposes a system to carry out this research and sets the necessary technical basis to make it feasible, ranging from the processing of time-lapse sequences using image segmentation to the representation, description and classification of cells in terms of morphology.

Due to the high variability of cell morphological characteristics, several segmentation methods have been implemented to face each of the problems encountered: edge detection, region growing and marked watershed were found to be successful processing algorithms. This variability inherent to cells, and the fact that the human eye has a natural disposition to solve segmentation problems, finally led to the development of a user-friendly interactive application, the Time Lapse Sequence Processor (TLSP). Although it was initially considered a mere interface for performing cell segmentation, the TLSP concept has evolved into the construction of a complete multifunction tool for cell morphology analysis: segmentation, morphological data extraction, analysis and management, cell tracking and recognition, etc. In its latest version, TLSP v0.2 Alpha contains several segmentation tools, an improved user interface, and data extraction and management capabilities.

Finally, a wide set of recommendations and improvements is discussed, pointing the way for future development in this area.
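Edge detection, the first of the segmentation methods listed, can be sketched as a Sobel gradient magnitude; the thesis does not specify which edge operator TLSP uses, so this is only an assumed example:

```python
import numpy as np

def sobel_edges(img):
    """Sobel gradient magnitude: horizontal and vertical 3x3 derivative
    kernels applied via array slicing, with edge-replicate padding."""
    f = np.pad(img.astype(float), 1, mode='edge')
    # gx: (right column) - (left column), center row weighted 2x
    gx = (f[:-2, 2:] + 2 * f[1:-1, 2:] + f[2:, 2:]
          - f[:-2, :-2] - 2 * f[1:-1, :-2] - f[2:, :-2])
    # gy: (bottom row) - (top row), center column weighted 2x
    gy = (f[2:, :-2] + 2 * f[2:, 1:-1] + f[2:, 2:]
          - f[:-2, :-2] - 2 * f[:-2, 1:-1] - f[:-2, 2:])
    return np.hypot(gx, gy)
```

Thresholding the returned magnitude yields a binary edge map that downstream steps (linking, contour tracing) can consume.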

APA, Harvard, Vancouver, ISO, and other styles
40

Berjass, Hisham. "Hardware Implementation Of An Object Contour Detector Using Morphological Operators." Thesis, Linköpings universitet, Institutionen för systemteknik, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-66353.

Full text
Abstract:
The purpose of this study was the hardware implementation of real-time moving-object contour extraction. Segmentation of image frames to isolate moving objects, followed by contour extraction using digital morphology, was carried out in this work. Segmentation using a temporal difference with median thresholding approach was implemented; experimental methods were used to determine the suitable morphological operators, along with their structuring element dimensions, to provide the optimum contour extraction. The detector, with an image resolution of 1280 x 1024 pixels and a frame rate of 60 Hz, was successfully implemented. The results indicate the effect of proper use of morphological operators for post-processing and contour extraction on the overall efficiency of the system. An alternative segmentation method based on the Stauffer & Grimson algorithm was investigated and proposed, which promises better system performance at the expense of image resolution and frame rate.
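The pipeline above — frame differencing, a median-based threshold, then a morphological operation to pull out the contour — might look roughly like this in NumPy. The structuring element size and the exact threshold rule are assumptions; the thesis determined them experimentally:

```python
import numpy as np

def temporal_difference_mask(prev_frame, curr_frame):
    """Segment moving pixels: absolute frame difference, thresholded at
    the median of the nonzero differences (one plausible reading of
    'median thresholding')."""
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    nonzero = diff[diff > 0]
    thresh = np.median(nonzero) if nonzero.size else 1
    return (diff >= thresh).astype(np.uint8)

def binary_dilate(mask):
    """Dilation with a 3x3 square structuring element (assumed size),
    implemented as a maximum over shifted copies."""
    padded = np.pad(mask, 1)
    out = np.zeros_like(mask)
    h, w = mask.shape
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            out = np.maximum(out, padded[dy:dy + h, dx:dx + w])
    return out

def morphological_contour(mask):
    """External morphological gradient: dilation minus the mask leaves a
    one-pixel contour around each object."""
    return binary_dilate(mask) - mask
```

In the hardware design each of these stages maps naturally to a streaming pixel pipeline, which is what makes the 60 Hz frame rate feasible.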
APA, Harvard, Vancouver, ISO, and other styles
41

Bjurström, Håkan, and Jon Svensson. "Assessment of Grapevine Vigour Using Image Processing." Thesis, Linköping University, Department of Electrical Engineering, 2002. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-1342.

Full text
Abstract:

This Master’s thesis studies the possibility of using image processing as a tool to facilitate vine management, in particular shoot counting and assessment of the grapevine canopy. Both are areas where manual inspection is done today. The thesis presents methods of capturing images and segmenting different parts of a vine. It also presents and evaluates different approaches on how shoot counting can be done. Within canopy assessment, the emphasis is on methods to estimate canopy density. Other possible assessment areas are also discussed, such as canopy colour and measurement of canopy gaps and fruit exposure. An example of a vine assessment system is given.

APA, Harvard, Vancouver, ISO, and other styles
42

Törnblom, Nicklas. "Uppskattning av Ytkurvatur och CFD-simuleringar i Mänskliga Bukaortor." Thesis, Linköping University, Department of Mechanical Engineering, 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-2906.

Full text
Abstract:

By applying a segmentation procedure to two different sets of computed tomography scans, two geometrical models of the abdominal aorta, each containing one inlet and two outlets, have been constructed. One of these depicts a healthy blood vessel, while the other displays one afflicted with an Abdominal Aortic Aneurysm.

After inputting these geometries into the computational fluid dynamics software FLUENT, six simulations of laminar, stationary flow of a fluid assumed to be Newtonian were performed. The mass flow rate across the model outlet boundaries was varied between the simulations to produce a basis for a parameter analysis study.

The segmentation data was also used as input to a surface description procedure which produced not only the surface itself, but also the first and second directional derivatives at every one of its defining spatial data points. These sets of derivatives were subsequently applied in an additional procedure that calculated values of Gaussian curvature.

A parameter variance analysis was carried out to evaluate the performance of the surface generation procedure. An array of resultant surfaces and surface directional derivatives were obtained. Values of Gaussian curvature were calculated in the defining spatial data points of a few selected surfaces.

The curvature values of a selected data set were visualized through a contour plot as well as through a surface map. Comparisons between the curvature surface map and one wall shear stress surface map were made.
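For a surface expressed as a height field z = f(x, y), Gaussian curvature follows directly from the first and second derivatives that the surface procedure computes. The thesis works with a parametric surface, so this Monge-patch form is an illustrative assumption:

```python
import numpy as np

def gaussian_curvature(fx, fy, fxx, fxy, fyy):
    """Gaussian curvature of a height field z = f(x, y):
        K = (fxx * fyy - fxy**2) / (1 + fx**2 + fy**2)**2
    All arguments are the partial derivatives of f at the point of
    interest (scalars or arrays)."""
    return (fxx * fyy - fxy ** 2) / (1.0 + fx ** 2 + fy ** 2) ** 2
```

A sphere of radius R has K = 1/R² everywhere, and a plane has K = 0, which makes the formula easy to sanity-check against the derivative arrays.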

APA, Harvard, Vancouver, ISO, and other styles
43

Seraji, Mojgan. "Morphosyntactic Corpora and Tools for Persian." Doctoral thesis, Uppsala universitet, Institutionen för lingvistik och filologi, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-248780.

Full text
Abstract:
This thesis presents open source resources in the form of annotated corpora and modules for automatic morphosyntactic processing and analysis of Persian texts. More specifically, the resources consist of an improved part-of-speech tagged corpus and a dependency treebank, as well as tools for text normalization, sentence segmentation, tokenization, part-of-speech tagging, and dependency parsing for Persian. In developing these resources and tools, two key requirements are observed: compatibility and reuse. The compatibility requirement encompasses two parts. First, the tools in the pipeline should be compatible with each other in such a way that the output of one tool is compatible with the input requirements of the next. Second, the tools should be compatible with the annotated corpora and deliver the same analysis that is found in these. The reuse requirement means that all the components in the pipeline are developed by reusing resources, standard methods, and open source state-of-the-art tools. This is necessary to make the project feasible. Given these requirements, the thesis investigates two main research questions. The first is how can we develop morphologically and syntactically annotated corpora and tools while satisfying the requirements of compatibility and reuse? The approach taken is to accept the tokenization variations in the corpora to achieve robustness. The tokenization variations in Persian texts are related to the orthographic variations of writing fixed expressions, as well as various types of affixes and clitics. Since these variations are inherent properties of Persian texts, it is important that the tools in the pipeline can handle them. Therefore, they should not be trained on idealized data. The second question concerns how accurately we can perform morphological and syntactic analysis for Persian by adapting and applying existing tools to the annotated corpora. 
The experimental evaluation of the tools shows that the sentence segmenter and tokenizer achieve an F-score close to 100%, the tagger has an accuracy of nearly 97.5%, and the parser achieves a best labeled accuracy of over 82% (with unlabeled accuracy close to 87%).
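As an illustration of the first pipeline stage, a naive rule-based sentence segmenter might split on terminal punctuation, including the Persian question mark '؟'. The thesis instead adapts existing open-source tools (which also handle the tokenization variations discussed above), so this is purely a sketch:

```python
import re

def sentence_segment(text):
    """Split text into sentences at whitespace that follows a terminal
    punctuation mark. The character class includes the Persian question
    mark; real Persian segmentation needs far more than this."""
    parts = re.split(r'(?<=[.!?؟])\s+', text.strip())
    return [p for p in parts if p]
```

Because the split uses a lookbehind, the punctuation stays attached to its sentence, matching what a downstream tokenizer expects.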
APA, Harvard, Vancouver, ISO, and other styles
44

Alakuijala, J. (Jyrki). "Algorithms for modeling anatomic and target volumes in image-guided neurosurgery and radiotherapy." Doctoral thesis, University of Oulu, 2001. http://urn.fi/urn:isbn:9514265742.

Full text
Abstract:
The use of image-guidance in surgery and radiotherapy has significantly improved patient outcome in neurosurgery and radiotherapy treatments. This work developed volume definition and verification techniques for image-guided applications, using a number of algorithms ranging from image processing to visualization. Stereoscopic visualization, a volumetric tumor model overlaid on an ultrasound image, and visualization of the treatment geometry were experimented with on a neurosurgical workstation. Visualization and volume definition tools were developed for a radiotherapy treatment planning system. The magnetic resonance inhomogeneity correction developed in this work, possibly the first published data-driven method with wide applicability, automatically mitigates the RF field inhomogeneity artefact present in magnetic resonance images. Correcting the RF inhomogeneity improves the accuracy of the generated volumetric models. Various techniques to improve region growing are also presented. The simplex search method and combinatory similarity terms were used to improve the similarity function with a low additional computational cost and a high yield in region correctness. Moreover, the effects of different priority queue implementations were studied. A fast algorithm for calculating high-quality digitally reconstructed radiographs has been developed and shown to better meet typical radiotherapy needs than the two alternative algorithms. A novel visualization method, beam's light view, is presented. It uses texture mapping for projecting the fluence of a radiation field on an arbitrary surface. This work suggests several improved algorithms for image processing, segmentation, and visualization used in image-guided treatment systems. The presented algorithms increase the accuracy of image-guidance, which can further improve the applicability and efficiency of image-guided treatments.
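The region-growing improvements mentioned — a similarity function paired with a priority queue — can be sketched as best-first growth from a seed. The similarity term here (distance to the running region mean, with a fixed tolerance) is a stand-in for the thesis's combinatory similarity terms:

```python
import heapq
import numpy as np

def region_grow(image, seed, tolerance):
    """Best-first region growing: the frontier pixel most similar to the
    running region mean is expanded first (heapq as the priority queue).
    The acceptance rule |pixel - mean| <= tolerance is an assumption."""
    h, w = image.shape
    visited = np.zeros((h, w), dtype=bool)
    region = []
    mean = float(image[seed])
    heap = [(0.0, seed)]
    while heap:
        diff, (y, x) = heapq.heappop(heap)
        if visited[y, x] or diff > tolerance:
            continue
        visited[y, x] = True
        region.append((y, x))
        # incremental update of the region mean
        mean += (float(image[y, x]) - mean) / len(region)
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and not visited[ny, nx]:
                heapq.heappush(heap, (abs(float(image[ny, nx]) - mean), (ny, nx)))
    return region
```

Swapping the heap implementation (binary heap, bucket queue, etc.) changes only the expansion order and cost, which is exactly the trade-off the thesis studies.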
APA, Harvard, Vancouver, ISO, and other styles
45

Ferreira, Pedro Miguel Martins. "Contributions to the segmentation of dermoscopic images." Dissertação, 2012. http://hdl.handle.net/10216/68423.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Hsu, Wei-Bang, and 徐偉邦. "Geometric Primitives Parameters Extraction for Precision Parts by Image Segmentation Technology." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/48474351476474120408.

Full text
Abstract:
Master's thesis
Yuan Ze University
Department of Mechanical Engineering
94
Due to the prevalence of miniature electronic products, demand for the precision parts used in these products has increased dramatically over the last decade. Their small dimensions, together with the requirement for total rather than partial product inspection, have made manual inspection unfeasible. Meanwhile, requirements on automatic optical inspection (AOI) technology to detect product defects and ensure dimensional accuracy have become more stringent. Although AOI technology has been applied to the inspection of industrial products for decades, inspection tasks typically begin with manually locating the position before the vision software gauges, for example, the radii of holes in a mechanical component; this manual first step limits the performance of AOI equipment. This paper proposes a geometric parameter extraction method using image segmentation to improve an existing method that searches for geometrical parameters via image size reduction. Image segmentation quickly locates the positions of all objects in an image, and corner detection segments each object into basic geometrical elements such as lines, arcs and circles. The corresponding geometrical parameter extraction method can then be chosen for each element, making the search for geometrical parameters both more precise and faster. The inspection process first detects edges, then uses edge linking and labeling to classify the objects present. Next, the characteristics of the SUSAN edge response are used to locate corners, and the objects from the first classification are reclassified. The geometric shapes in the original image can thus be segmented into basic shapes, and an inertia value is calculated for every segment to distinguish circles from lines. Finally, the corresponding extraction method is applied to each segmented group.
Hence, the method avoids wasted time and reduces disturbances, enhancing the precision of geometrical parameter inspection.
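Once an object has been split into lines, arcs, and circles, a circle's parameters can be recovered from its edge pixels with an algebraic least-squares (Kåsa) fit — one common choice; the thesis does not name its exact estimator:

```python
import numpy as np

def fit_circle(xs, ys):
    """Kasa least-squares circle fit: solve x^2 + y^2 + D*x + E*y + F = 0
    for D, E, F in the least-squares sense, then convert to the center
    (cx, cy) and radius r."""
    xs = np.asarray(xs, dtype=float)
    ys = np.asarray(ys, dtype=float)
    A = np.column_stack([xs, ys, np.ones_like(xs)])
    b = -(xs ** 2 + ys ** 2)
    (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
    cx, cy = -D / 2.0, -E / 2.0
    r = np.sqrt(cx ** 2 + cy ** 2 - F)
    return cx, cy, r
```

The fit is linear, so it is fast enough for total (every-part) inspection, which is the constraint the abstract emphasizes.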
APA, Harvard, Vancouver, ISO, and other styles
47

Shih, Ming-Yu, and 史明玉. "Market Segmentation and Differential Marketing Analysis Based on Data Mining Technology." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/22264511075871296376.

Full text
Abstract:
Master's thesis
Shih Hsin University
Graduate Institute of Information Management (including the in-service master's program)
95
Customer relationship management (CRM) is key to improving the competitiveness of enterprises. In the Internet era, one of the most important CRM strategies, Differential Marketing, has become a very important way for enterprises to gain more profit, shifting the focus from goods-centric marketing to customer-oriented marketing. Providing different services to different customers with the help of Data Mining technology makes it possible to partition customers, find their potential demands, and work out a Differential Marketing plan. In this research, an online shopping store in Taiwan is taken as an example for implementing a Differential Marketing strategy. The Customer Value Matrix of the RFM model is used for segmenting the best customers, and a Self-Organizing Map and Association Rules are applied to provide a model for data analysis. The best customers are segmented into 12 clusters according to customer value. Among them, Cluster 10, which has the highest loyalty, is analyzed; its behavioral characteristics show a preference for inexpensive goods, combinations of 20 categories of products are found, and several marketing suggestions are proposed. The resulting data analysis framework thus reveals customer preferences, product combinations, and suggested prices for the products sold, enabling Differential Marketing and greater profits.
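The RFM model referenced above aggregates each customer's transaction history into Recency, Frequency, and Monetary values. A minimal sketch follows; the record layout is an assumption, and the thesis builds its Customer Value Matrix on top of values like these:

```python
from datetime import date

def rfm_table(transactions, today):
    """Aggregate (customer_id, purchase_date, amount) records into
    (recency_days, frequency, monetary) per customer."""
    stats = {}
    for cust, day, amount in transactions:
        last, freq, total = stats.get(cust, (day, 0, 0.0))
        stats[cust] = (max(last, day), freq + 1, total + amount)
    return {c: ((today - last).days, f, m)
            for c, (last, f, m) in stats.items()}
```

Scoring each dimension (e.g., by rank tercile) and cross-tabulating the scores then yields the kind of customer-value segments the abstract describes.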
APA, Harvard, Vancouver, ISO, and other styles
48

Zeng, Wei-Ming, and 曾偉銘. "Automatic Vessels Detection and Segmentation Based on Convolutional Neural Networks Technology." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/m88c2e.

Full text
Abstract:
Master's thesis
National Taichung University of Science and Technology
Master's Program, Department of Computer Science and Information Engineering
106
The main purpose of this study is to use convolutional neural networks (CNNs) for automatic vessel detection and segmentation in images. First, vessel objects are detected in the image, and then each detected object is segmented by vessel type. This paper uses the YOLO convolutional neural network architecture to detect and locate vessels in the image, and then the FCN and AirNet convolutional neural network architectures as object segmentation models. Each located vessel is segmented into a vessel object, and the FCN and AirNet segmentation results are fused and optimized. Finally, the segmentation result is used to find the main direction of the vessel. To verify the effectiveness of the approach, this study carried out vessel detection and segmentation experiments. The experimental images were selected from videos on YouTube. The image data set is divided into three parts: the first part, for training vessel detection, has 144 images; the second part, for training vessel object segmentation, has 1156 images; and the third part, the test data set for vessel detection and vessel object segmentation, has 549 test images. Experiments show that when more than 80% of a vessel's area is detected, the detection rate is 78.97%, and that vessel object segmentation achieves a mean pixel accuracy (mPA) of up to 92%. Finally, the computed main axis direction of the vessel can be used to estimate its heading and to track suspicious vessels. The detection and segmentation results show that this method is effective, and in the future it could be applied to vessel positioning and vessel recognition.
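The "main direction" of a segmented vessel can be estimated as the dominant eigenvector of its pixel-coordinate covariance (PCA). The thesis does not state its exact computation, so treat this as an assumed approach:

```python
import numpy as np

def principal_axis(mask):
    """Return a unit vector (dx, dy) along the main axis of a binary
    mask, computed via PCA of the foreground pixel coordinates."""
    ys, xs = np.nonzero(mask)
    pts = np.column_stack([xs, ys]).astype(float)
    pts -= pts.mean(axis=0)            # center the coordinates
    cov = pts.T @ pts / len(pts)       # 2x2 covariance matrix
    vals, vecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    return vecs[:, np.argmax(vals)]    # eigenvector of the largest one
```

The sign of the returned vector is arbitrary, so the heading is known only up to a 180° ambiguity; resolving it would need temporal information such as the vessel's motion between frames.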
APA, Harvard, Vancouver, ISO, and other styles
49

Ribeiro, Luís Filipe da Silva. "Segmentation of Aberrant Crypt Foci using Computational Vision." Dissertação, 2013. https://repositorio-aberto.up.pt/handle/10216/69359.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Lin, Zhan-Wei, and 林展緯. "Fully Focused Microscopic Image Based on Defocused Image segmentation and Integration Technology." Thesis, 2005. http://ndltd.ncl.edu.tw/handle/92267652867360641079.

Full text
Abstract:
Master's thesis
Chung Yuan Christian University
Graduate Institute of Electrical Engineering
93
In photography, depth of field has always been an issue for photographers: it often makes it impossible to achieve a completely focused image in a single shot. This occurs in both regular and microscopic photography. In regular photography, the depth of field can be extended using long-depth-of-field lenses. In microscopy, however, the high magnification limits the depth of field to a few micrometers (μm), so only one clear slice image can be obtained per shot. This paper presents a defocus reconstruction algorithm that, under the limited depth of field of microscopic photography, selects the individually focused segments of a series of images and reconstructs them into one fully focused image. In the past, the application of defocus reconstruction algorithms has been restricted to images of exactly the same scene. This algorithm extends the approach with mutual-information image registration to correct image displacement, for use when image shifts arise from specimen movement or from refocusing a stereo microscope. Image registration corrects the displacement error, after which the defocus reconstruction process resolves both the displacement and the defocus; the algorithm can also be applied to other photographic areas. This research adopts an image sharpness value to evaluate the reconstruction results, and the clearest images all result from the depth-of-field reconstruction process. The reconstruction algorithm is therefore an effective method of image correction in microscopic photography.
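The sharpness-based selection described above can be sketched as a per-pixel focus measure — here a squared Laplacian, since the thesis's exact "sharp value" is unspecified — with the sharpest slice chosen at each pixel:

```python
import numpy as np

def sharpness(img):
    """Per-pixel focus measure: squared response of a 4-neighbour
    Laplacian. Edges wrap around via np.roll, acceptable for a sketch."""
    f = img.astype(float)
    lap = (np.roll(f, 1, axis=0) + np.roll(f, -1, axis=0) +
           np.roll(f, 1, axis=1) + np.roll(f, -1, axis=1) - 4.0 * f)
    return lap ** 2

def focus_stack(slices):
    """Fuse a through-focus stack: at each pixel keep the value from the
    slice with the highest focus measure."""
    stack = np.stack(slices)
    sharp = np.stack([sharpness(s) for s in slices])
    best = sharp.argmax(axis=0)
    return np.take_along_axis(stack, best[None], axis=0)[0]
```

Registration (e.g., the mutual-information step the thesis describes) must align the slices before this fusion, otherwise the per-pixel selection mixes misaligned content.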
APA, Harvard, Vancouver, ISO, and other styles
