Dissertations / Theses on the topic 'Hough circle detection transform'

Consult the top 50 dissertations / theses for your research on the topic 'Hough circle detection transform.'

1

Lu, Dingran. "Multi-circle Detections for an Automatic Medical Diagnosis System." DigitalCommons@CalPoly, 2012. https://digitalcommons.calpoly.edu/theses/735.

Abstract:
Real-time multi-circle detection has been a challenging problem in the field of biomedical image processing, due to the variable sizes and non-ideal shapes of cells in microscopic images. In this study, two new multi-circle detection algorithms are developed to facilitate an automatic bladder cancer diagnosis system: one is a modified circular Hough Transform algorithm integrated with edge gradient information; and the other one is a stochastic search approach based on real valued artificial immune systems. Computer simulation results show both algorithms outperform traditional methods such as the Hough Transform and the geometric feature based method, in terms of both precision and speed.
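The gradient-integrated variant above is specific to this thesis, but the classical circular Hough voting it modifies can be sketched in a few lines. Below is a minimal NumPy illustration (the synthetic edge map, accumulator sizes, and angle sampling are assumptions for demonstration, not the author's implementation):

```python
import numpy as np

def circle_hough(edge_points, shape, radii, n_angles=360):
    """Classical circle Hough voting: for each candidate radius, every
    edge pixel votes for all centres lying on a circle of that radius
    around it; the accumulator peak gives the best (cx, cy, r)."""
    h, w = shape
    acc = np.zeros((len(radii), h, w), dtype=np.int32)
    angles = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
    for ri, r in enumerate(radii):
        dx = np.round(r * np.cos(angles)).astype(int)
        dy = np.round(r * np.sin(angles)).astype(int)
        for x, y in edge_points:
            cx, cy = x - dx, y - dy  # candidate centres for this point
            ok = (cx >= 0) & (cx < w) & (cy >= 0) & (cy < h)
            np.add.at(acc[ri], (cy[ok], cx[ok]), 1)
    ri, by, bx = np.unravel_index(acc.argmax(), acc.shape)
    return bx, by, radii[ri]

# Synthetic edge map: 60 points on a circle of radius 20 centred at (40, 35)
ts = np.linspace(0.0, 2.0 * np.pi, 60, endpoint=False)
pts = [(int(round(40 + 20 * np.cos(t))), int(round(35 + 20 * np.sin(t))))
       for t in ts]
cx, cy, r = circle_hough(pts, (80, 80), radii=[15, 20, 25])
```

The gradient modification the abstract describes would restrict each edge pixel's votes to centres along its local gradient direction, shrinking the accumulator work per pixel; a similar idea underlies OpenCV's `cv2.HOUGH_GRADIENT` mode.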
2

Sutherland, Fritz. "Driver traffic violation detection and driver risk calculation through real-time image processing." Diss., University of Pretoria, 2005. http://hdl.handle.net/2263/66246.

Abstract:
Road safety is a serious problem in many countries and affects the lives of many people. Improving road safety starts with the drivers, and the best way to make them change their habits is to offer incentives for better, safer driving styles. This project aims to make that possible by offering a means to calculate a quantified indicator of how safe a driver's habits are. This is done by developing an on-board, visual road-sign recognition system that can be coupled with a vehicle tracking system to determine how often a driver violates the rules of the road. The system detects stop signs, red traffic lights and speed limit signs, and outputs this data in a format that can be read by a vehicle tracking system, where it can be combined with speed information and sent to a central database where the driver safety rating can be calculated. Input to the system comes from a simple, standard dashboard mounted camera within the vehicle, which generates a continuous stream of images of the scene directly in front of the vehicle. The images are subjected to a number of cascaded detection sub-systems to determine if any of the target objects (road signs) appear within that video frame. The detection system software had to be optimized for minimum false positive detections, since those will unfairly punish the driver, and it also had to be optimized for speed to run on small hardware that can be installed in the vehicle. The first stage of the cascaded system consists of an image detector that detects circles within the image, since traffic lights and speed signs are circular and a stop sign can be approximated by a circle when the image is blurred or the resolution is lowered. The second stage is a neural network that is trained to recognize the target road sign in order to determine which road sign was found, or to eliminate other circular objects found in the image frame. 
The output of the neural network is then sent through an iterative filter with a majority voted output to eliminate detection 'jitter' and the occasional incorrect classifier output. Object tracking is applied to the 'good' detection outputs and used as an additional input for the detection phase on the next frame. In this way the continuity and robustness of the image detector are improved, since the object tracker indicates to it where the target object is most likely to appear in the next frame, based on the track it has been following through previous frames. In the final stage the detection system output is written to the chosen pins of the hardware output port, from where the detection output can be indicated to the user and also used as an input to the vehicle tracking system. To find the best detection approach, some methods found in literature were studied and the most likely candidates compared. The scale invariant feature transform (SIFT) and speeded up robust features (SURF) algorithms are too slow compared to the cascaded approach to be used for real-time detection on an in-vehicle hardware platform. In the cascaded approach used, different detection stage algorithms are tested and compared. The Hough circle transform is measured against blob detection on stop signs and speed limit signs. On traffic light state detection two approaches are tested and compared, one based on colour information and the other on direct neural network classification. To run the software in the user's vehicle, an appropriate hardware platform is chosen. A number of promising hardware platforms were studied and their specifications compared before the best candidate was selected and purchased for the project. 
The developed software was tested on the selected hardware in a vehicle during real public road driving for extended periods and under various conditions.
Dissertation (MEng)--University of Pretoria, 2017. Electrical, Electronic and Computer Engineering. MEng. Unrestricted.
3

Musilová, Kateřina. "Pupilometrie aplikovaná během měření defokusační křivky." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2019. http://www.nusl.cz/ntk/nusl-401002.

Abstract:
The aim of this work is to design an algorithm that detects the pupil in video. The theoretical knowledge necessary for proper pupil detection is also described in this master's thesis. Detection is performed on 24 videos that are converted to single images. The complete result is the dependence of the pupil diameter on the dioptre used. The overall success rate of the algorithm is 88.13%, giving an overall error of 11.87%. For 17 out of 24 patients, it is confirmed that the greater the dioptre, the larger the pupil.
4

Zutautas, Vaidutis. "Charcoal Kiln Detection from LiDAR-derived Digital Elevation Models Combining Morphometric Classification and Image Processing Techniques." Thesis, Högskolan i Gävle, Samhällsbyggnad, GIS, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:hig:diva-24374.

Abstract:
This paper describes a unique method for the semi-automatic detection of historic charcoal production sites in LiDAR-derived digital elevation models. Intensified iron production in the early 17th century remarkably influenced how land in Sweden was managed. Today, the abundance of charcoal kilns embedded in the landscape survives as cultural heritage monuments that testify to the scale at which forest management for charcoal production contributed to the rising iron manufacturing industry. An arbitrarily selected study area (54 km²) south-west of Gävle city served as an ideal testing ground, known to contain both already registered and unsurveyed charcoal kiln sites. The proposed approach subjects combined morphometric classification methods to analytical image processing: an image representing refined terrain morphology is segmented, and a Hough circle transform is then applied to detect the circular shapes that represent charcoal kilns. Sites identified manually and with the proposed method were verified only within an additionally established, smaller validation area (6 km²). The accuracy of the resulting outcome was measured by calculating the harmonic mean of precision and recall (F1-score). Along with indicating previously undiscovered site locations, the proposed method scored relatively highly in recognising already registered sites after post-processing filtering. In spite of the continual fine-tuning required, the described method can considerably facilitate the mapping and overall management of cultural resources.
5

Oldham, Kevin M. "Table tennis event detection and classification." Thesis, Loughborough University, 2015. https://dspace.lboro.ac.uk/2134/19626.

Abstract:
It is well understood that multiple video cameras and computer vision (CV) technology can be used in sport for match officiating, statistics and player performance analysis. A review of the literature reveals a number of existing solutions, both commercial and theoretical, within this domain. However, these solutions are expensive and often complex in their installation. The hypothesis for this research states that by considering only changes in ball motion, automatic event classification is achievable with low-cost monocular video recording devices, without the need for 3-dimensional (3D) positional ball data and representation. The focus of this research is a rigorous empirical study of low cost single consumer-grade video camera solutions applied to table tennis, confirming that monocular CV based detected ball location data contains sufficient information to enable key match-play events to be recognised and measured. In total a library of 276 event-based video sequences, using a range of recording hardware, were produced for this research. The research has four key considerations: i) an investigation into an effective recording environment with minimum configuration and calibration, ii) the selection and optimisation of a CV algorithm to detect the ball from the resulting single source video data, iii) validation of the accuracy of the 2-dimensional (2D) CV data for motion change detection, and iv) the data requirements and processing techniques necessary to automatically detect changes in ball motion and match those to match-play events. Throughout the thesis, table tennis has been chosen as the example sport for observational and experimental analysis since it offers a number of specific CV challenges due to the relatively high ball speed (in excess of 100kph) and small ball size (40mm in diameter). Furthermore, the inherent rules of table tennis show potential for a monocular based event classification vision system. 
As the initial stage, a proposed optimum location and configuration of the single camera is defined. Next, the selection of a CV algorithm is critical in obtaining usable ball motion data. It is shown in this research that segmentation processes vary in their ball detection capabilities and location outputs, which ultimately affects the ability of automated event detection and decision making solutions. Therefore, a comparison of CV algorithms is necessary to establish confidence in the accuracy of the derived location of the ball. As part of the research, a CV software environment has been developed to allow robust, repeatable and direct comparisons between different CV algorithms. An event based method of evaluating the success of a CV algorithm is proposed. Comparison of CV algorithms is made against the novel Efficacy Metric Set (EMS), producing a measurable Relative Efficacy Index (REI). Within the context of this low cost, single camera ball trajectory and event investigation, experimental results provided show that the Horn-Schunck Optical Flow algorithm, with a REI of 163.5, is the most successful method when compared to a discrete selection of CV detection and extraction techniques gathered from the literature review. Furthermore, evidence based data from the REI also suggests switching to the Canny edge detector (a REI of 186.4) for segmentation of the ball when in close proximity to the net. In addition to and in support of the data generated from the CV software environment, a novel method is presented for producing simultaneous data from 3D marker based recordings, reduced to 2D and compared directly to the CV output to establish comparative time-resolved data for the ball location. It is proposed here that a continuous scale factor, based on the known dimensions of the ball, is incorporated at every frame. Using this method, comparison results show a mean accuracy of 3.01mm when applied to a selection of nineteen video sequences and events. 
This tolerance is within 10% of the diameter of the ball and accounted for by the limits of image resolution. Further experimental results demonstrate the ability to identify a number of match-play events from a monocular image sequence using a combination of the suggested optimum algorithm and ball motion analysis methods. The results show a promising application of 2D based CV processing to match-play event classification with an overall success rate of 95.9%. The majority of failures occur when the ball, during returns and services, is partially occluded by either the player or racket, due to the inherent problem of using a monocular recording device. Finally, the thesis proposes further research and extensions for developing and implementing monocular based CV processing of motion based event analysis and classification in a wider range of applications.
6

Segalini, Lorenzo. "Implementazione in Java dell'algoritmo "Circle Hough Transform"." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2021.

Abstract:
This thesis focuses on the study of the technique known as the "Circle Hough Transform". Its main objective is to demonstrate, through an implementation in the Java language, the usefulness and validity of the algorithm, showing its effectiveness and the general workings of each of its sections.
7

Galbavý, Juraj. "Systém vyhodnocování pro stopový detektor v pevné fázi." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2017. http://www.nusl.cz/ntk/nusl-318170.

Abstract:
The aim of this thesis is to design an algorithm for automatic track counting in images of an etched track detector made of CR-39 polymer. Tracks are produced by alpha particles. The chemically etched detector is imaged using a microscope, resulting in 64 images of segments on the surface of the detector. Circle-shaped tracks in the images have to be detected and counted. This thesis evaluates the use of the circle Hough transform for circle detection. The final software should automate detector track counting and should also account for defects in the image and contamination of the detector surface. The software produces a measurement report with the total track count in each segment.
8

Costa, Luciano da Fontoura. "Effective detection of line segments with Hough transform." Thesis, King's College London (University of London), 1992. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.320507.

9

Chaudhary, Priyanka. "SPHEROID DETECTION IN 2D IMAGES USING CIRCULAR HOUGH TRANSFORM." UKnowledge, 2010. http://uknowledge.uky.edu/gradschool_theses/9.

Abstract:
Three-dimensional endothelial cell sprouting assay (3D-ECSA) exhibits differentiation of endothelial cells into sprouting structures inside a 3D matrix of collagen I. It is a screening tool to study endothelial cell behavior and identify angiogenesis inhibitors. The shape and size of an EC spheroid (an aggregation of ~750 cells) is important with respect to its growth performance in the presence of angiogenic stimulators. Apparently, tubules formed on malformed spheroids lack homogeneity in terms of density and length. This requires segregating well-formed spheroids from malformed ones to obtain better performance metrics. We aim to develop and validate an automated imaging software analysis tool, as part of a High-content High-throughput screening (HC-HTS) assay platform, to exploit 3D-ECSA as a differential HTS assay. We present a solution using the Circular Hough Transform to detect a nearly perfect spheroid by its circular shape in a 2D image. This successfully enables us to differentiate and separate good spheroids from malformed ones using an automated test bench.
10

Princen, John. "Hough transform methods for curve detection and parameter estimation." Thesis, University of Surrey, 1990. http://epubs.surrey.ac.uk/817/.

11

Ramesh, Naren. "A Hardware Implementation of Hough Transform Based on Parabolic Duality." University of Cincinnati / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1396530960.

12

Jiménez, Tauste Albert, and Niklas Rydberg. "Area of Interest Identification Using Circle Hough Transform and Outlier Removal for ELISpot and FluoroSpot Images." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-254256.

Abstract:
The aim of this project is to design an algorithm that identifies the Area of Interest (AOI) in ELISpot and FluoroSpot images. ELISpot and FluoroSpot are two varieties of a biochemical test used to analyze immune responses by quantifying the amount of cytokine secreted by cells. ELISpot and FluoroSpot images show a well that contains the cytokine-secreting cells, which appear as scattered spots. Prior to counting the number of spots, it is necessary to detect the area in which to count them, i.e. the area delimited by the contour of the well. We propose to use the Circle Hough Transform together with filtering and the Laplacian of Gaussian edge detector in order to accurately detect such an area. Furthermore, we develop an outlier removal method that helps increase the robustness of the proposed detection method. Finally, we compare our algorithm with another algorithm already in use: a Swedish biotech company called Mabtech has implemented an AOI identifier in the same field. Our proposed algorithm proves to be more robust and provides consistent results for all the images in the dataset.
13

Salmanpour, Rahmdel Payam. "A parallel windowing approach to the Hough transform for line segment detection." Thesis, Middlesex University, 2013. http://eprints.mdx.ac.uk/12629/.

Abstract:
In the wide range of image processing and computer vision problems, line segment detection has always been among the most critical headlines. Detection of primitives such as linear features and straight edges has diverse applications in many image understanding and perception tasks. The research presented in this dissertation is a contribution to the detection of straight-line segments by identifying the location of their endpoints within a two-dimensional digital image. The proposed method is based on a unique domain-crossing approach that takes both image and parameter domain information into consideration. First, the straight-line parameters, i.e. location and orientation, are identified using an advanced Fourier-based Hough transform. As well as producing more accurate and robust detection of straight lines, this method has been proven to be more efficient in terms of computational time than the standard Hough transform. Second, for each straight line a window-of-interest is designed in the image domain and the disturbance caused by the other neighbouring segments is removed to capture the Hough transform butterfly of the target segment. In this way, a separate butterfly is constructed for each straight line. The boundaries of the butterfly wings are further smoothed and approximated by a curve fitting approach. Finally, segment endpoints are identified using butterfly boundary points and the Hough transform peak. Experimental results on synthetic and real images have shown that the proposed method enjoys a superior performance compared with existing similar representative works.
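As a point of reference for the abstract above, the standard (ρ, θ) Hough transform that the Fourier-based method is compared against can be sketched in a few lines (an illustrative NumPy sketch with assumed bin resolutions, not the dissertation's code):

```python
import numpy as np

def line_hough(points, rho_res=1.0, n_theta=180):
    """Standard Hough voting: each point votes, for every sampled angle
    theta, for the line rho = x*cos(theta) + y*sin(theta) through it;
    the accumulator peak is the dominant line."""
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    max_rho = max(np.hypot(x, y) for x, y in points)
    acc = np.zeros((int(2 * max_rho / rho_res) + 2, n_theta), dtype=np.int32)
    for x, y in points:
        rhos = x * np.cos(thetas) + y * np.sin(thetas)
        idx = np.round((rhos + max_rho) / rho_res).astype(int)  # shift rho >= 0
        np.add.at(acc, (idx, np.arange(n_theta)), 1)
    ri, ti = np.unravel_index(acc.argmax(), acc.shape)
    return ri * rho_res - max_rho, thetas[ti]

# Collinear points on the horizontal line y = 10 (theta = 90 deg, rho = 10)
rho, theta = line_hough([(x, 10) for x in range(0, 50, 2)])
```

Each collinear set of points produces a peak in the accumulator; the "butterfly" the dissertation analyses is the characteristic spread of votes around exactly such a peak.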
14

Thakkar, Chintan. "Ventricle slice detection in MRI images using Hough Transform and Object Matching techniques." [Tampa, Fla] : University of South Florida, 2006. http://purl.fcla.edu/usf/dc/et/SFE0001815.

15

Donnelley, Martin, and martin donnelley@gmail com. "Computer Aided Long-Bone Segmentation and Fracture Detection." Flinders University. Engineering, 2008. http://catalogue.flinders.edu.au./local/adt/public/adt-SFU20080115.222927.

Abstract:
Medical imaging has advanced at a tremendous rate since x-rays were discovered in 1895. Today, x-ray machines produce extremely high-quality images for radiologists to interpret. However, the methods of interpretation have only recently begun to be augmented by advances in computer technology. Computer aided diagnosis (CAD) systems that guide healthcare professionals to making the correct diagnosis are slowly becoming more prevalent throughout the medical field. Bone fractures are a relatively common occurrence. In most developed countries the number of fractures associated with age-related bone loss is increasing rapidly. Regardless of the treating physician's level of experience, accurate detection and evaluation of musculoskeletal trauma is often problematic. Each year, the presence of many fractures is missed during x-ray diagnosis. For a trauma patient, a mis-diagnosis can lead to ineffective patient management, increased dissatisfaction, and expensive litigation. As a result, detection of long-bone fractures is an important orthopaedic and radiologic problem, and it is proposed that a novel CAD system could help lower the miss rate. This thesis examines the development of such a system, for the detection of long-bone fractures. A number of image processing software algorithms useful for automating the fracture detection process have been created. The first algorithm is a non-linear scale-space smoothing technique that allows edge information to be extracted from the x-ray image. The degree of smoothing is controlled by the scale parameter, and allows the amount of image detail that should be retained to be adjusted for each stage of the analysis. The result is demonstrated to be superior to the Canny edge detection algorithm. The second utilises the edge information to determine a set of parameters that approximate the shaft of the long-bone. This is achieved using a modified Hough Transform, and specially designed peak and line endpoint detectors. 
The third stage uses the shaft approximation data to locate the bone centre-lines and then perform diaphysis segmentation to separate the diaphysis from the epiphyses. Two segmentation algorithms are presented and one is shown to not only produce better results, but also be suitable for application to all long-bone images. The final stage applies a gradient based fracture detection algorithm to the segmented regions. This algorithm utilises a tool called the gradient composite measure to identify abnormal regions, including fractures, within the image. These regions are then identified and highlighted if they are deemed to be part of a fracture. A database of fracture images from trauma patients was collected from the emergency department at the Flinders Medical Centre. From this complete set of images, a development set and test set were created. Experiments on the test set show that diaphysis segmentation and fracture detection are both performed with an accuracy of 83%. Therefore these tools can consistently identify the boundaries between the bone segments, and then accurately highlight midshaft long-bone fractures within the marked diaphysis. Two of the algorithms---the non-linear smoothing and Hough Transform---are relatively slow to compute. Methods of decreasing the diagnosis time were investigated, and a set of parallelised algorithms were designed. These algorithms significantly reduced the total calculation time, making use of the algorithm much more feasible. The thesis concludes with an outline of future research and proposed techniques that---along with the methods and results presented---will improve CAD systems for fracture detection, resulting in more accurate diagnosis of fractures, and a reduction of the fracture miss rate.
16

Limberger, Frederico Artur. "Real-time detection of planar regions in unorganized point clouds." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2014. http://hdl.handle.net/10183/97001.

Abstract:
Automatic detection of planar regions in point clouds is an important step for many graphics, image processing, and computer vision applications. While laser scanners and digital photography have allowed us to capture increasingly larger datasets, previous techniques are computationally expensive, being unable to achieve real-time performance for datasets containing tens of thousands of points, even when detection is performed in a non-deterministic way. We present a deterministic technique for plane detection in unorganized point clouds whose cost is O(n log n) in the number of input samples. It is based on an efficient Hough-transform voting scheme and works by clustering approximately co-planar points and by casting votes for these clusters on a spherical accumulator using a trivariate Gaussian kernel. A comparison with competing techniques shows that our approach is considerably faster and scales significantly better than previous ones, being the first practical solution for deterministic plane detection in large unorganized point clouds.
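The accumulator idea scales naturally from 2D curves to planes. A naive (unclustered) plane Hough transform over a discretised (θ, φ, ρ) space can be sketched as below; this per-point voting is the slow baseline that the thesis improves on with clustered Gaussian-kernel voting, and all grid sizes here are illustrative assumptions:

```python
import numpy as np

def plane_hough(points, n_theta=30, n_phi=60, rho_res=0.1):
    """Naive plane Hough: each 3D point votes, for every candidate
    normal direction (theta, phi), for the plane offset rho = p . n.
    The accumulator peak gives the dominant plane (normal, rho)."""
    thetas = np.linspace(0.0, np.pi, n_theta)                 # polar angle
    phis = np.linspace(0.0, 2.0 * np.pi, n_phi, endpoint=False)
    tt, pp = np.meshgrid(thetas, phis, indexing="ij")
    normals = np.stack([np.sin(tt) * np.cos(pp),              # unit normals,
                        np.sin(tt) * np.sin(pp),              # shape
                        np.cos(tt)], axis=-1)                 # (n_theta, n_phi, 3)
    rhos = points @ normals.reshape(-1, 3).T                  # (n_points, n_bins)
    max_rho = np.abs(rhos).max()
    n_rho = int(2 * max_rho / rho_res) + 2
    acc = np.zeros((n_theta * n_phi, n_rho), dtype=np.int32)
    idx = np.round((rhos + max_rho) / rho_res).astype(int)
    for b in range(acc.shape[0]):                             # vote per direction bin
        np.add.at(acc[b], idx[:, b], 1)
    b, r = np.unravel_index(acc.argmax(), acc.shape)
    ti, pi_ = divmod(b, n_phi)
    return normals[ti, pi_], r * rho_res - max_rho

# Synthetic cloud: 200 points on the plane z = 2
rng = np.random.default_rng(0)
cloud = np.column_stack([rng.uniform(0, 4, (200, 2)), np.full((200, 1), 2.0)])
normal, rho = plane_hough(cloud)
```

This naive version visits every (θ, φ) bin for every point, which is exactly what makes it too slow for large clouds; the thesis's contribution is to vote once per cluster of approximately co-planar points, smoothing those votes on the spherical accumulator with a trivariate Gaussian kernel.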
17

Damaryam, Gideon Kanji. "Vision systems for a mobile robot based on line detection using the Hough Transform and artificial neural networks." Thesis, Robert Gordon University, 2008. http://hdl.handle.net/10059/450.

Abstract:
This project contributes to the problem of mobile robot self-navigation within a rectilinear framework based on visual data. It proposes a number of vision systems based on detection of straight lines in images captured by a robot using the Hough transform and artificial neural networks as core algorithms. The Hough transform is a robust method for detection of basic features (Boyce et al. 1987). However, it is so computationally demanding that it is not commonly used in real-time applications or applications which utilise anything but small images (Song and Lyu 2005). Dempsey and McVey (1992) have suggested that this problem might be resolved if the Hough transform were implemented with artificial neural networks. This project investigates the feasibility of systems using these core algorithms, and systems that are hybrids of them. Prior to application of the core algorithms to a captured image, various stages of pre-processing are carried out, including resizing for optimum results, edge detection, and edge thinning using an adaptation, proposed by this work, of the thinning method of Park (2000). An analysis of the costs and benefits of thinning as part of pre-processing has also been performed. The Hough transform based system, which has been largely successful, has involved a number of new approaches. These include a peak detection scheme; post-processing schemes which find valid sub-lines of lines found by the peak detection process, and establish which high-level features these sub-lines represent; and an appropriate navigation scheme. Two artificial neural network systems were designed, based on line detection and sub-line detection respectively. The first was able to detect long lines, but not shorter (even though navigationally important) lines, and so was aborted. The second system has two major stages. Networks of stage 1, developed to detect sub-lines in sub-images derived by breaking down the original images, did so passably well. 
A network in stage 2 designed to use the results of stage 1 to guide the robot’s motion did not do so well for most test images. The networks of stage 1, however, have been helpful with development of a hybrid vision system. Suggestions have been made on how this work can be furthered.
18

Oliveira, Marlene Isabel Falmana de. "Reconstrução 3D a partir de imagens stereo." Master's thesis, Universidade de Évora, 2016. http://hdl.handle.net/10174/18426.

Abstract:
A reconstrução 3D é uma das várias áreas que estão inseridas no campo da Visão Computacional. A reconstrução 3D com recurso a imagens stereo consiste numa técnica ótica. Esta técnica usa como input duas ou mais imagens de uma cena ou de um objeto e permite obter um modelo 3D desta cena ou objeto. Nesta dissertação é apresentada uma metodologia de reconstrução 3D que usa como input um par de imagens stereo, obtidas com recurso a duas webcams. As câmaras usadas para captar estas imagens são calibradas antes de ser iniciada a reconstrução 3D. Com recurso a algoritmos especializados são detetadas as linhas presentes nas imagens captadas. Assim, nesta dissertação apresenta-se também um algoritmo de deteção de linhas baseado na Transformada de Hough. Quando o processo de deteção de linhas termina, são identificadas correspondências entre estas linhas. Os algoritmos criados para este efeito são também apresentados nesta dissertação. Finalmente, é reconstruído o modelo 3D do objeto presente no par de imagens stereo; Abstract: 3D Reconstruction From Stereo Images 3D reconstruction is one of several areas that are included in the Computer Vision field. 3D reconstruction from stereo images is an optical technique. This technique uses two or more images of a scene or object as input and outputs a 3D model of this scece or object. This dissertation introduces a methodology that allows for 3D reconstruction with a pair of stereo images, obtained with two webcams. These cameras are calibrated before the 3D reconstruction starts. The lines present in the pair of stereo images are detected with specialized algorithms. Thus, in this dissertation it is also presented a line detection algorithm based on the Hough Transform. Once the line detection process is completed, the correspondences between the lines detected in the stereo pair are found. The algorithms created to identify these correspondences are also presented in this dissertation. 
Finally, the 3D model of the object shown in the stereo images is produced.
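The line-detection step described in this abstract can be illustrated with a minimal ρ–θ Hough accumulator in pure Python (a toy sketch, not the thesis implementation; the function name and parameters are invented for illustration):

```python
import math

def hough_lines(points, width, height, n_theta=180):
    """Minimal rho-theta Hough accumulator: every edge point votes for
    all (rho, theta) lines passing through it; the strongest bin wins."""
    diag = int(math.hypot(width, height))
    acc = [[0] * n_theta for _ in range(2 * diag + 1)]  # rho in [-diag, diag]
    for x, y in points:
        for t in range(n_theta):
            theta = t * math.pi / n_theta
            rho = int(round(x * math.cos(theta) + y * math.sin(theta)))
            acc[rho + diag][t] += 1
    # Return (votes, rho, theta) of the strongest accumulator bin.
    return max((acc[r][t], r - diag, t * math.pi / n_theta)
               for r in range(2 * diag + 1) for t in range(n_theta))
```

For the vertical edge x = 5, the winning bin is ρ = 5 with θ ≈ 0, as expected from ρ = x·cos θ + y·sin θ.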
APA, Harvard, Vancouver, ISO, and other styles
19

Alfonsi, Fabrizio <1990>. "Study and Optimization of Particle Track Detection via Hough Transform Hardware Implementation for the ATLAS Phase-II Trigger Upgrade." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amsdottorato.unibo.it/9774/1/Alfonsi_Fabrizio_Tesi.pdf.

Full text
Abstract:
At CERN in Geneva, the Large Hadron Collider (LHC) will undergo several deep upgrades in the coming years. The instantaneous and integrated luminosity will be increased up to 5–7·10³⁴ cm⁻²s⁻¹ and 3000 fb⁻¹, respectively. Alongside the collider, the experiments exploiting the LHC will undergo upgrades crucial to fulfilling their high-energy-physics goals. The ATLAS upgrades are divided into phases, namely Phase-I and Phase-II. Part of the ATLAS upgrade concerns the Trigger and Data Acquisition (TDAQ) systems; in particular, a major technological update of the ATLAS trigger is planned for Phase-II. My contribution to the Phase-I and Phase-II plans has focused on the electronics update of the TDAQ system. For the Phase-I upgrade I worked on the commissioning of the new FELIX readout cards, FLX-712, which will be mounted in part of the TDAQ system. These FPGA-based cards provide a bandwidth of up to 480 Gb/s and exploit PCI Express Generation 3 technology. My work focused on preparing and following up part of the quality-control tests of the cards. The ATLAS Phase-II trigger aims to increase its output data stream to Tier 0 by one order of magnitude. For the Phase-II upgrade I developed an implementation of a tracking algorithm to fulfill the new trigger requirements. This algorithm, known as the Hough Transform, is used to reconstruct particle trajectories and has already been shown to be suited to the ATLAS specifications. In this thesis I present the study, the simulations and the hardware implementation of a preliminary version of the Hough Transform algorithm on a Xilinx UltraScale+ FPGA device.
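As a toy illustration of Hough-based track finding (not the FPGA implementation described in the thesis, which works in a momentum/angle parameter space), straight tracks radiating from the collision point can be found by letting each hit vote for a track-angle bin:

```python
import math
from collections import Counter

def find_tracks(hits, n_bins=64, min_votes=3):
    """Toy Hough track finder: straight tracks through the origin vote
    into phi bins; bins above threshold are track candidates."""
    acc = Counter()
    for x, y in hits:
        phi = math.atan2(y, x)  # track-angle hypothesis for this hit
        acc[int((phi + math.pi) / (2 * math.pi) * n_bins) % n_bins] += 1
    return sorted(b for b, v in acc.items() if v >= min_votes)
```

Bins collecting at least `min_votes` hits are reported as track candidates; isolated noise hits never cross the threshold.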
APA, Harvard, Vancouver, ISO, and other styles
20

Zongur, Ugur. "Detection Of Airport Runways In Optical Satellite Images." Master's thesis, METU, 2009. http://etd.lib.metu.edu.tr/upload/12610757/index.pdf.

Full text
Abstract:
Advances in hardware and pattern recognition techniques, along with the widespread utilization of remote sensing satellites, have urged the development of automatic target detection systems. Automatic detection of airports is particularly essential, due to the strategic importance of these targets. In this thesis, a detection method is proposed for airport runways, which is the most distinguishing element of an airport. This method, which operates on large optical satellite images, is composed of a segmentation process based on textural properties, and a runway shape detection stage. In the segmentation process, several local textural features are extracted including not only low level features such as mean, standard deviation of image intensity and gradient, but also Zernike Moments, Circular-Mellin Features, Haralick Features, as well as features involving Gabor Filters, Wavelets and Fourier Power Spectrum Analysis. Since the subset of the mentioned features, which have a role in the discrimination of airport runways from other structures and landforms, cannot be predicted, Adaboost learning algorithm is employed for both classification and determining the feature subset, due to its feature selector nature. By means of the features chosen in this way, a coarse representation of possible runway locations is obtained, as a result of the segmentation operation. Subsequently, the runway shape detection stage, based on a novel form of Hough Transform, is performed over the possible runway locations, in order to obtain final runway positions. The proposed algorithm is examined with experimental work using a comprehensive data set consisting of large and high resolution satellite images and successful results are achieved.
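The simplest of the listed texture features, the local mean and standard deviation of intensity, can be sketched as a sliding-window computation (illustrative only; the function name and default window size are assumptions):

```python
import math

def local_texture_stats(img, win=3):
    """Per-pixel mean and standard deviation of intensity over a
    win x win neighborhood (border pixels are left at zero)."""
    h, w = len(img), len(img[0])
    r = win // 2
    mean = [[0.0] * w for _ in range(h)]
    std = [[0.0] * w for _ in range(h)]
    for y in range(r, h - r):
        for x in range(r, w - r):
            vals = [img[y + dy][x + dx]
                    for dy in range(-r, r + 1) for dx in range(-r, r + 1)]
            m = sum(vals) / len(vals)
            mean[y][x] = m
            std[y][x] = math.sqrt(sum((v - m) ** 2 for v in vals) / len(vals))
    return mean, std
```

A flat region yields zero standard deviation, while a textured region (e.g. a checkerboard) yields a high one — exactly the kind of contrast a classifier such as AdaBoost can exploit.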
APA, Harvard, Vancouver, ISO, and other styles
21

Westin, Carl-Fredrik. "Feature extraction based on a tensor image description." Licentiate thesis, Linköping University, Linköping University, Computer Vision, 1991. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-54888.

Full text
Abstract:
Feature extraction from a tensor-based local image representation introduced by Knutsson in [37] is discussed. The tensor representation keeps statements of structure, certainty of statement and energy separate. Further processing that obtains new features while keeping these three entities separate is achieved through a new concept, tensor field filtering. Tensor filters for smoothing and for the extraction of circular symmetries are presented and discussed in particular. These methods are used for corner detection and for the extraction of more global features such as lines in images. A novel method for grouping local orientation estimates into global line parameters is introduced. The method is based on a new parameter space, the Möbius Strip parameter space, which has similarities to the Hough transform. A local centroid clustering algorithm is used for classification in this space. The procedure automatically divides curves into line segments of appropriate lengths depending on the curvature. A linked-list structure is built for storing data in an efficient way. (Invalid number / other version: the ISBN in publication no. 290 is 91-7870-815-X.)
APA, Harvard, Vancouver, ISO, and other styles
22

Čírtek, Jiří. "Sledování malých změn objektů." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2008. http://www.nusl.cz/ntk/nusl-217199.

Full text
Abstract:
This diploma thesis examines the problem of locating edges with accuracy finer than one pixel (subpixel accuracy). As part of this assignment, a program was created that generates three different object shapes. By varying the program's parameters, the location of each object's centre of gravity is measured with subpixel accuracy. The obtained deviations of the centre of gravity are plotted in graphs.
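The measurement described, locating a centre of gravity with subpixel accuracy, reduces to an intensity-weighted mean of pixel coordinates; a minimal sketch (the grayscale image is assumed to be a list of rows):

```python
def subpixel_centroid(img):
    """Centre of gravity of an intensity image with subpixel accuracy:
    the intensity-weighted mean of the pixel coordinates."""
    total = sx = sy = 0.0
    for y, row in enumerate(img):
        for x, v in enumerate(row):
            total += v
            sx += x * v
            sy += y * v
    return sx / total, sy / total
```

Because the result is a weighted average, it can land between pixel centres, which is precisely what subpixel accuracy means.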
APA, Harvard, Vancouver, ISO, and other styles
23

Tran, Antoine. "Object representation in local feature spaces : application to real-time tracking and detection." Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLY010/document.

Full text
Abstract:
La représentation visuelle est un problème fondamental en vision par ordinateur. Le but est de réduire l'information au strict nécessaire pour une tâche désirée. Plusieurs types de représentation existent, comme les caractéristiques de couleur (histogrammes, attributs de couleurs...), de forme (dérivées, points d'intérêt...) ou d'autres, comme les bancs de filtres.Les caractéristiques bas-niveau (locales) sont rapides à calculer. Elles ont un pouvoir de représentation limité, mais leur généricité présente un intérêt pour des systèmes autonomes et multi-tâches, puisque les caractéristiques haut-niveau découlent d'elles.Le but de cette thèse est de construire puis d'étudier l'impact de représentations fondées seulement sur des caractéristiques locales de bas-niveau (couleurs, dérivées spatiales) pour deux tâches : la poursuite d'objets génériques, nécessitant des caractéristiques robustes aux variations d'aspect de l'objet et du contexte au cours du temps; la détection d'objets, où la représentation doit décrire une classe d'objets en tenant compte des variations intra-classe. Plutôt que de construire des descripteurs d'objets globaux dédiés, nous nous appuyons entièrement sur les caractéristiques locales et sur des mécanismes statistiques flexibles visant à estimer leur distribution (histogrammes) et leurs co-occurrences (Transformée de Hough Généralisée). La Transformée de Hough Généralisée (THG), créée pour la détection de formes quelconques, consiste à créer une structure de données représentant un objet, une classe... Cette structure, d'abord indexée par l'orientation du gradient, a été étendue à d'autres caractéristiques. Travaillant sur des caractéristiques locales, nous voulons rester proche de la THG originale.En poursuite d'objets, après avoir présenté nos premiers travaux, combinant la THG avec un filtre particulaire (utilisant un histogramme de couleurs), nous présentons un algorithme plus léger et rapide (100fps), plus précis et robuste. 
Nous présentons une évaluation qualitative et étudions l'impact des caractéristiques utilisées (espace de couleur, formulation des dérivées partielles...). En détection, nous avons utilisé l'algorithme de Gall appelé forêts de Hough. Notre but est de réduire l'espace de caractéristiques utilisé par Gall, en supprimant celles de type HOG, pour ne garder que les dérivées partielles et les caractéristiques de couleur. Pour compenser cette réduction, nous avons amélioré deux étapes de l'entraînement : le support des descripteurs locaux (patchs) est partiellement produit selon une mesure géométrique, et l'entraînement des nœuds se fait en générant une carte de probabilité spécifique prenant en compte les patchs utilisés pour cette étape. Avec l'espace de caractéristiques réduit, le détecteur n'est pas plus précis. Avec les mêmes caractéristiques que Gall, sur une même durée d'entraînement, nos travaux ont permis d'avoir des résultats identiques, mais avec une variance plus faible et donc une meilleure répétabilité.
Visual representation is a fundamental problem in computer vision. The aim is to reduce the information to the strict necessary for a given task. Many types of representation exist, such as color features (histograms, color attributes...), shape features (derivatives, keypoints...) or filter banks. Low-level (and local) features are fast to compute. Their power of representation is limited, but their genericity is of interest for autonomous or multi-task systems, as higher-level features derive from them. We aim to build, then study the impact of, feature spaces based only on low-level local features (color and derivatives) for two tasks: generic object tracking, requiring features robust to changes in the object's and the environment's appearance over time; and object detection, for which the representation should describe an object class and cope with intra-class variations. Rather than using global object descriptors, we rely entirely on local features and on flexible statistical mechanisms to estimate their distribution (histograms) and their co-occurrences (Generalized Hough Transform). The Generalized Hough Transform (GHT), created for the detection of arbitrary shapes, consists in building a codebook, originally indexed by gradient orientation and later extended to other features, modeling an object or a class. As we work on local features, we aim to remain close to the original GHT. In tracking, after presenting preliminary work combining the GHT with a particle filter (using color histograms), we present a lighter and faster (100 fps) tracker that is more accurate and robust. We present a qualitative evaluation and study the impact of the features used (color space, spatial derivative formulation). In detection, we used Gall's Hough Forest. We aim to reduce Gall's feature space and discard the HOG features, keeping only the derivative and color ones. To compensate for this reduction, we enhanced two training steps: the support of the local descriptors (patches) is partially chosen using a geometric measure, and node training uses a specific probability map based on the patches available at that step. With the reduced feature space, the detector is less accurate than with Gall's feature space, but for the same training time our work leads to identical results with lower variance and hence better repeatability.
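The codebook ("R-table") idea behind the GHT can be sketched in a few lines: store offsets from boundary points to a reference point, indexed by local edge orientation, then let scene points vote for the reference location. Here the orientation is taken from consecutive ordered boundary points, a toy stand-in for the gradient orientation used in the original GHT:

```python
import math
from collections import defaultdict

def _orient(p, q):
    # Local edge orientation, coarsely quantized to serve as a table key.
    return round(math.atan2(q[1] - p[1], q[0] - p[0]), 2)

def build_r_table(boundary, ref):
    """R-table: offsets from each boundary point to the reference point,
    indexed by the orientation towards the next boundary point."""
    table = defaultdict(list)
    for i, p in enumerate(boundary):
        q = boundary[(i + 1) % len(boundary)]
        table[_orient(p, q)].append((ref[0] - p[0], ref[1] - p[1]))
    return table

def ght_detect(scene, table):
    """Each scene point casts votes for candidate reference locations;
    the strongest accumulator cell is returned as (location, votes)."""
    acc = defaultdict(int)
    for i, p in enumerate(scene):
        q = scene[(i + 1) % len(scene)]
        for dx, dy in table[_orient(p, q)]:
            acc[(p[0] + dx, p[1] + dy)] += 1
    return max(acc.items(), key=lambda kv: kv[1])
```

A translated copy of the model shape makes every boundary point vote for the translated reference point, which therefore dominates the accumulator.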
APA, Harvard, Vancouver, ISO, and other styles
25

Yalcin, Abdurrahman. "Effect Of Shadow In Building Detection And Building Boundary Extraction." Master's thesis, METU, 2008. http://etd.lib.metu.edu.tr/upload/3/12610179/index.pdf.

Full text
Abstract:
Rectangular-shaped building detection from high-resolution aerial/satellite images is proposed via two different methods, in both of which shadow information plays the main role. One of the algorithms is based on the Hough transform, the other on the mean shift segmentation algorithm. The satellite/aerial images are first converted to the YIQ color space to be used in shadow segmentation. Hue and intensity values are used in computing the ratio image, which is used to segment shadowed regions. For shadow segmentation, Otsu's method is applied to the histogram of the ratio image. The segmented shadow image is used as the input for both building detection algorithms. In the proposed methods, shadowed regions are assumed to be building shadows, so non-shadowed regions such as roads, cars, trees etc. are discarded before processing the image. In the Hough-transform-based building detection algorithm, shadowed regions are first segmented one by one and filtered for noise removal and edge sharpening. The edges in the filtered image are then detected using the Canny edge detection algorithm, and line segments are extracted. Finally, the extracted line segments are used to construct rectangular-shaped buildings. In the mean-shift-based building detection algorithm, the image is first segmented using the mean shift segmentation algorithm. Using the shadow image and the segmented image, building rooftops are sought along shadow boundaries. The results of the two algorithms are compared. In the last step, a shadow removal algorithm is implemented to observe the effect of shadow regions on both building detection algorithms; both are applied to the shadow-removed image and the results are compared.
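Otsu's method, used here to threshold the ratio-image histogram, picks the threshold that maximizes the between-class variance of the background/foreground split; a minimal histogram-based sketch (independent of the thesis code):

```python
def otsu_threshold(hist):
    """Otsu's method on a raw histogram: return the threshold t that
    maximizes the between-class variance of the split {<=t, >t}."""
    total = sum(hist)
    sum_all = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w_b = sum_b = 0
    for t in range(len(hist)):
        w_b += hist[t]              # background weight
        if w_b == 0:
            continue
        w_f = total - w_b           # foreground weight
        if w_f == 0:
            break
        sum_b += t * hist[t]
        m_b = sum_b / w_b           # background mean
        m_f = (sum_all - sum_b) / w_f  # foreground mean
        var = w_b * w_f * (m_b - m_f) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t
```

On a bimodal histogram the returned threshold falls between the two modes, separating shadow from non-shadow pixels in the ratio image.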
APA, Harvard, Vancouver, ISO, and other styles
26

Bilen, Burak. "Model Based Building Extraction From High Resolution Aerial Images." Master's thesis, METU, 2004. http://etd.lib.metu.edu.tr/upload/3/12604984/index.pdf.

Full text
Abstract:
A method for detecting buildings from high-resolution aerial images is proposed. The aim is to extract the buildings using the Hough transform and model-based perceptual grouping techniques. The edges detected in the image are the basic structures used in the building detection procedure. The method proposed in this thesis makes use of basic image processing techniques. Noise removal and image sharpening are used to enhance the input image. Then, the edges are extracted using the Canny edge detection algorithm. The edges obtained are composed of discrete points, which are vectorized in order to generate straight line segments. This is performed with the Hough transform and perceptual grouping techniques. The straight line segments become the basic structures of the buildings. Finally, the straight line segments are grouped according to predefined model(s) using the model-based perceptual grouping technique. The groups of straight line segments are candidates for 2D structures that may be buildings, shadows or other man-made objects. The proposed method was implemented as a program written in the C programming language and applied to several study areas. The results achieved are encouraging. The number of extracted buildings increases if the buildings share nearly the same orientation and the Canny edge detector detects most of the building edges. If the buildings have different orientations, some of them may not be extracted with the proposed method. In addition to building orientation, the building size and the parameters used in the Hough transform and perceptual grouping stages also affect the success of the proposed method.
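One simple flavor of the perceptual grouping applied to Hough line segments is collinearity-and-proximity merging; a toy sketch (the thresholds and the greedy pairing strategy are illustrative assumptions, not the thesis's grouping rules):

```python
import math

def _angle(seg):
    (x1, y1), (x2, y2) = seg
    return math.atan2(y2 - y1, x2 - x1) % math.pi  # undirected angle

def group_collinear(segments, max_angle=0.1, max_gap=3.0):
    """Toy perceptual grouping: greedily merge segments that are nearly
    parallel and whose endpoints nearly touch."""
    merged, used = [], set()
    for i, a in enumerate(segments):
        if i in used:
            continue
        cur = a
        for j in range(i + 1, len(segments)):
            if j in used:
                continue
            b = segments[j]
            close = math.dist(cur[1], b[0]) <= max_gap
            if close and abs(_angle(cur) - _angle(b)) <= max_angle:
                cur = (cur[0], b[1])  # extend the current segment
                used.add(j)
        merged.append(cur)
    return merged
```

Two collinear fragments of a building edge are fused into one long segment, while an unrelated perpendicular segment is left alone.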
APA, Harvard, Vancouver, ISO, and other styles
27

Yusro, Muhammad. "Development of new algorithm for improving accuracy of pole detection to the supporting system of mobility aid for visually impaired person." Thesis, Université Clermont Auvergne‎ (2017-2020), 2017. http://www.theses.fr/2017CLFAC107/document.

Full text
Abstract:
Ces travaux de recherche visaient à développer un système d'aide à la mobilité pour les personnes ayant une déficience visuelle (VIP ‘Visually Impaired Person’) appelé ‘Smart Environment Explorer Stick (SEES)’. Le but particulier de cette recherche était de développer de nouveaux algorithmes pour améliorer la précision de la détection de la présence de poteaux de la canne SEE-stick en utilisant la méthode de calcul de distance et la recherche de paires de lignes verticales basées sur l'optimisation de la technique de détection de contour de Canny. Désormais, l'algorithme de détection des poteaux est appelé l’algorithme YuRHoS. Le SEES développé comme système de support d'aide à la mobilité VIP a été intégré avec succès à plusieurs dispositifs tels que le serveur distant dénommé iSEE, le serveur local embarqué dénommé SEE-phone et la canne intelligente dénommée SEE-stick. Les performances de SEE-stick ont été améliorées grâce à l'algorithme YuRHoS qui permet de discriminer avec précision les objets (obstacles) en forme de poteau parmi les objets détectés. La comparaison des résultats de détection des poteaux avec ceux des autres algorithmes a conclu que l'algorithme YuRHoS était plus efficace et précis. Le lieu et la couleur des poteaux de test d’évaluation étaient deux des facteurs les plus importants qui influaient sur la capacité du SEE-stick à détecter leur présence. Le niveau de précision de SEE-stick est optimal lorsque le test d’évaluation est effectué à l'extérieur et que les poteaux sont de couleur argentée. Les statistiques montrent que la performance de l'algorithme YuRHoS à l'intérieur était 0,085 fois moins bonne qu'à l'extérieur. 
De plus, la détection de la présence de poteaux de couleur argentée est 11 fois meilleure que celle de poteaux de couleur noire.
This research aimed to develop a mobility-aid technology system for Visually Impaired Persons (VIP), called the Smart Environment Explorer Stick (SEES). The particular purpose of this research was to develop a new algorithm that improves the accuracy of the SEE-stick's pole detection, using a distance-calculation method and a vertical-line-pair search based on optimized Canny edge detection and the Hough transform. This pole detection algorithm is henceforth named the YuRHoS algorithm. The developed SEES, as a supporting system for VIP mobility, successfully integrated several devices, such as the global remote server (iSEE), the embedded local server (SEE-phone) and the smart stick (SEE-stick). The performance of the SEE-stick could be improved through the YuRHoS algorithm, which increased its accuracy in detecting poles. A comparison of pole detection results against other algorithms concluded that the YuRHoS algorithm detects poles more accurately. The two most significant factors affecting the SEE-stick's ability to detect poles were the test location and the pole color. The SEE-stick's accuracy was optimal when the test was performed outdoors on silver-colored poles. Statistical results showed that the YuRHoS algorithm's indoor performance was 0.085 times its outdoor performance, while detection of silver-colored poles was up to 11 times better than detection of black-colored poles.
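The vertical-line-pair search at the heart of the YuRHoS algorithm can be caricatured as: keep near-vertical segments, then pair those separated by a plausible pole width (all thresholds below are invented for illustration):

```python
import math

def find_pole_pairs(segments, max_tilt_deg=10, min_sep=2, max_sep=30):
    """Toy vertical-line-pair search: a pole candidate is a pair of
    near-vertical segments separated by a plausible width in pixels."""
    verticals = []
    for (x1, y1), (x2, y2) in segments:
        tilt = abs(math.degrees(math.atan2(x2 - x1, y2 - y1)))
        if min(tilt, 180 - tilt) <= max_tilt_deg:   # near-vertical edge
            verticals.append((x1 + x2) / 2)         # mean x of the edge
    verticals.sort()
    return [(a, b) for i, a in enumerate(verticals)
            for b in verticals[i + 1:] if min_sep <= b - a <= max_sep]
```

Two near-vertical edges a pole's width apart produce one candidate pair, while a horizontal edge is filtered out by the tilt test.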
APA, Harvard, Vancouver, ISO, and other styles
28

Guducu, Hasan Volkan. "Building Detection From Satellite Images Using Shadow And Color Information." Master's thesis, METU, 2008. http://etd.lib.metu.edu.tr/upload/12609920/index.pdf.

Full text
Abstract:
A method for detecting buildings from satellite/aerial images is proposed in this study. The aim is to extract rectilinear buildings in a hypothesize-first, verify-next manner. Hypothesis generation is accomplished using edge detection and line generation stages. Hypothesis verification draws on information obtained both from color segmentation of the HSV representation of the image and from the output of the shadow detection stage. The satellite/aerial image is first filtered to sharpen the edges. Then, edges are extracted using the Canny edge detection algorithm. These edges are the input for the Hough transform stage, which produces line segments from them. The extracted line segments are then used to generate building hypotheses. Verification of these hypotheses makes use of the outputs of the HSV color segmentation and shadow detection stages. In this study, color segmentation is performed on the HSV representation of the satellite/aerial image, which is less sensitive to illumination. Shadow detection relies on the fact that shadow areas have a higher saturation component and a lower value component in HSV color space; accordingly, a mask is applied to the HSV representation of the image to produce the shadow pixels. The proposed method is implemented as software written in MATLAB. The approach was tested in several different areas and the results are encouraging.
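The shadow rule stated in the abstract, high saturation and low value in HSV space, translates directly into a mask (the thresholds are illustrative, not the thesis values):

```python
def shadow_mask(hsv_img, s_min=0.4, v_max=0.3):
    """Shadow mask from an HSV image, following the rule that shadow
    pixels have high saturation and low value. hsv_img is a list of
    rows of (h, s, v) tuples in [0, 1]."""
    return [[1 if s >= s_min and v <= v_max else 0
             for (h, s, v) in row] for row in hsv_img]
```

A saturated dark pixel is flagged as shadow; a bright pixel or a washed-out one is not.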
APA, Harvard, Vancouver, ISO, and other styles
29

Alturki, Abdulrahman S. "Principal Point Determination for Camera Calibration." University of Dayton / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1500326474390507.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Huaynalaya, Edwin Delgado. "Detecção de ovos de S. mansoni a partir da detecção de seus contornos." Universidade de São Paulo, 2012. http://www.teses.usp.br/teses/disponiveis/45/45134/tde-04062012-103118/.

Full text
Abstract:
Schistosoma mansoni é o parasita causador da esquistossomose mansônica que, de acordo com o Ministério da Saúde do Brasil, afeta atualmente vários milhões de pessoas no país. Uma das formas de diagnóstico da esquistossomose é a detecção de ovos do parasita através da análise de lâminas microscópicas com material fecal. Esta tarefa é extremamente cansativa, principalmente nos casos de baixa endemicidade, pois a quantidade de ovos é muito pequena. Nesses casos, uma abordagem computacional para auxílio na detecção de ovos facilitaria o trabalho de diagnóstico. Os ovos têm formato ovalado, possuem uma membrana translúcida, apresentam uma espícula e sua cor é ligeiramente amarelada. Porém nem todas essas características são observadas em todos os ovos e algumas delas são visíveis apenas com uma ampliação adequada. Além disso, o aspecto visual do material fecal varia muito de indivíduo para indivíduo em termos de cor e presença de diversos artefatos (tais como partículas que não são desintegradas pelo sistema digestivo), tornando difícil a tarefa de detecção dos ovos. Neste trabalho investigamos, em particular, o problema de detecção das linhas que contornam a borda de vários dos ovos. Propomos um método composto por duas fases. A primeira fase consiste na detecção de estruturas do tipo linha usando operadores morfológicos. A detecção de linhas é dividida em três etapas principais: (i) realce de linhas, (ii) detecção de linhas, e (iii) refinamento do resultado para eliminar segmentos de linhas que não são de interesse. O resultado dessa fase é um conjunto de segmentos de linhas. A segunda fase consiste na detecção de subconjuntos de segmentos de linha dispostos em formato elíptico, usando um algoritmo baseado na transformada Hough. As elipses detectadas são fortes candidatas a contorno de ovos de S. mansoni. 
Resultados experimentais mostram que a abordagem proposta pode ser útil para compor um sistema de auxílio à detecção dos ovos.
Schistosoma mansoni is one of the parasites which causes schistosomiasis. According to the Brazilian Ministry of Health, several million people in the country are currently affected by schistosomiasis. One way of diagnosing it is by egg identification in stool. This task is extremely time-consuming and tiring, especially in cases of low endemicity, when only a few eggs are present. In such cases, a computational approach to help the detection of eggs would greatly facilitate the diagnostic task. Schistosome eggs present oval shape, have a translucent membrane and a spike, and their color is slightly yellowish. However, not all these features are observed in every egg and some of them are visible only with an adequate microscopic magnification. Furthermore, the visual aspect of the fecal material varies widely from person to person in terms of color and presence of different artifacts (such as particles which are not disintegrated by the digestive system), making it difficult to detect the eggs. In this work we investigate the problem of detecting lines which delimit the contour of the eggs. We propose a method comprising two steps. The first phase consists in detecting line-like structures using morphological operators. This line detection phase is divided into three steps: (i) line enhancement, (ii) line detection, and (iii) result refinement in order to eliminate line segments that are not of interest. The output of this phase is a set of line segments. The second phase consists in detecting subsets of line segments arranged in an elliptical shape, using an algorithm based on the Hough transform. Detected ellipses are strong candidates to contour of S. mansoni eggs. Experimental results show that the proposed approach has potential to be effectively used as a component in a computer system to help egg detection.
APA, Harvard, Vancouver, ISO, and other styles
31

Panice, Natália Ribeiro. "Método de detecção automática de eixos de caminhões baseado em imagens." Universidade de São Paulo, 2018. http://www.teses.usp.br/teses/disponiveis/18/18144/tde-11122018-213600/.

Full text
Abstract:
This research aims to develop an automatic truck axle detection system based on images. Two automatic systems are presented: one for extracting truck images from road traffic videos recorded at six sites on a single highway in the state of São Paulo, and one for detecting the axles of the trucks in those images. Both systems are based on Image Processing and Computer Vision techniques and were developed in Python using the OpenCV and SciKit libraries. The automatic saving of truck images was needed to build the image base used by the axle detection method, which comprises truck image segmentation, the detection itself and axle classification. For segmentation, adaptive thresholding followed by mathematical morphology was used in one variant and the LBP texture descriptor in another; for detection, the Hough Transform was used. In the performance analysis of these methods, the image save rate was 69.2% for all trucks fully framed in the video. Regarding detection, segmentation with adaptive thresholding and mathematical morphology detected 57.7% of all truck axles with 65.6% false detections; LBP yielded, for the same images, 68.3% and 84.2%, respectively. The excess of false detections was a weakness of the results and can be attributed to problems of the outdoor environment, generally intrinsic to road traffic scenes; two factors that interfered significantly were lighting changes and the movement of tree leaves and branches in the wind. 
Disregarding this inconvenience, derived from the factors just mentioned, the detection rates of the two segmentation approaches would rise to 90.4% and 93.5%, respectively, and the false detections would change to 66.5% and 54.7%. Both proposed systems can therefore be considered promising for the stated objective.
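The circular Hough voting that such an axle detector relies on can be illustrated with a minimal, self-contained sketch (pure Python on synthetic edge points; the angular step, radius range and circle parameters below are illustrative assumptions, not taken from the thesis):

```python
import math
from collections import Counter

def hough_circles(points, r_min, r_max):
    """Classical circular Hough voting: every edge point (x, y) votes for
    all centres (a, b) = (x - r*cos t, y - r*sin t) it could lie on."""
    acc = Counter()
    for x, y in points:
        for r in range(r_min, r_max + 1):
            for deg in range(0, 360, 5):
                t = math.radians(deg)
                a = round(x - r * math.cos(t))
                b = round(y - r * math.sin(t))
                acc[(a, b, r)] += 1
    return acc.most_common(1)[0]  # ((cx, cy, r), votes)

# synthetic edge points on a circle centred at (10, 12) with radius 5
pts = [(10 + 5 * math.cos(math.radians(d)), 12 + 5 * math.sin(math.radians(d)))
       for d in range(0, 360, 10)]
best, votes = hough_circles(pts, 3, 8)
print(best)  # → (10, 12, 5)
```

The strongest accumulator bin recovers the centre and radius of the synthetic circle; a real detector would feed in edge pixels from Canny or a gradient operator instead.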
APA, Harvard, Vancouver, ISO, and other styles
32

Aleixo, Patrícia Nunes. "Object detection and recognition for robotic applications." Master's thesis, Universidade de Aveiro, 2014. http://hdl.handle.net/10773/13811.

Full text
Abstract:
Master's degree in Electronics and Telecommunications Engineering. Computer vision is highly relevant to the development of robotic applications: in several applications, robots need vision to detect objects, a challenging and sometimes difficult task. This thesis focuses on the study and development of algorithms for the detection and identification of objects in digital images, to be applied in robots used in practical cases. 
Three problems are addressed: detection and identification of decorative stones for the textile industry; detection of the ball in robotic soccer; and detection of objects by a service robot operating in a domestic environment. For each case, different methods are studied and applied, such as Template Matching, the Hough transform and visual descriptors (such as SIFT and SURF). The OpenCV library was chosen in order to use its data structures for image manipulation, as well as the other structures for all the information generated by the developed vision systems. Whenever possible, existing implementations of the described methods were used, and new approaches were developed, both in the pre-processing algorithms and through modification of the source code of some of the functions used. The pre-processing algorithms included the Canny edge detector, contour detection and colour information extraction, among others. For all three problems, experimental results are presented and discussed in order to evaluate the best method to apply in each case. The best method for each application is already integrated, or in the process of being integrated, into the described robots.
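Of the methods named above, Template Matching is the simplest to sketch. The following is a hedged illustration of normalized cross-correlation matching in NumPy (a naive reimplementation for clarity, not the OpenCV `matchTemplate` call a thesis like this would typically use; the image and planted template are synthetic):

```python
import numpy as np

def match_template(image, templ):
    """Slide templ over image and return the top-left corner (x, y) of
    the window with the highest normalized cross-correlation score."""
    H, W = image.shape
    h, w = templ.shape
    t = templ - templ.mean()
    tn = np.sqrt((t ** 2).sum())
    best_score, best_pos = -2.0, None
    for y in range(H - h + 1):
        for x in range(W - w + 1):
            win = image[y:y + h, x:x + w]
            wz = win - win.mean()
            denom = np.sqrt((wz ** 2).sum()) * tn
            score = float((wz * t).sum() / denom) if denom > 0 else 0.0
            if score > best_score:
                best_score, best_pos = score, (x, y)
    return best_pos, best_score

rng = np.random.default_rng(0)
img = rng.random((40, 40))
tpl = img[12:20, 25:33].copy()   # plant the template at x=25, y=12
pos, score = match_template(img, tpl)
print(pos, round(score, 3))  # → (25, 12) 1.0
```

An exact copy of the template correlates perfectly (score 1.0) at its planted location, which is why NCC is robust to uniform brightness and contrast changes.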
APA, Harvard, Vancouver, ISO, and other styles
33

Cedernaes, Erasmus. "Runway detection in LWIR video : Real time image processing and presentation of sensor data." Thesis, Uppsala universitet, Avdelningen för visuell information och interaktion, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-300690.

Full text
Abstract:
Runway detection in long wavelength infrared (LWIR) video could potentially increase the number of successful landings by increasing the situational awareness of pilots and verifying a correct approach. A method for detecting runways in LWIR video was therefore proposed and evaluated for robustness, speed and suitability for FPGA acceleration. The proposed algorithm improves the detection probability by making assumptions about the runway's appearance during approach, as well as by using a modified Hough line transform and a symmetric search for peaks in the accumulator returned by the Hough line transform. A video chain was implemented on a Xilinx ZC702 development card with input and output via HDMI through an expansion card. The video frames were buffered to RAM, and the detection algorithm ran on the CPU, which, however, did not meet the real-time requirement. Strategies were proposed that would improve the processing speed, either through hardware acceleration or algorithmic changes.
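The line-voting step underlying a Hough line transform, as used above, can be sketched as follows (pure Python, synthetic collinear points; the bin resolution is an illustrative assumption):

```python
import math
from collections import Counter

def hough_lines(points, theta_steps=180):
    """Standard line Hough voting: each point (x, y) votes for every
    (rho, theta) pair with rho = x*cos(theta) + y*sin(theta)."""
    acc = Counter()
    for x, y in points:
        for i in range(theta_steps):
            theta = math.pi * i / theta_steps
            acc[(round(x * math.cos(theta) + y * math.sin(theta)), i)] += 1
    return acc

# collinear points on y = x + 2; the normal form of that line has
# theta = 135 degrees and rho = 2/sqrt(2) ≈ 1.41 (bin 1)
pts = [(x, x + 2) for x in range(20)]
(rho, i), votes = hough_lines(pts).most_common(1)[0]
print(rho, i, votes)
```

All twenty points fall into one accumulator bin, which is the peak a detector (here, the thesis's symmetric peak search) would look for.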
APA, Harvard, Vancouver, ISO, and other styles
34

Gabriel, Eric [Verfasser]. "Automatic Multi-Scale and Multi-Object Pedestrian and Car Detection in Digital Images Based on the Discriminative Generalized Hough Transform and Deep Convolutional Neural Networks / Eric Gabriel." Kiel : Universitätsbibliothek Kiel, 2019. http://d-nb.info/1187732826/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Vaquette, Geoffrey. "Reconnaissance robuste d'activités humaines par vision." Thesis, Sorbonne université, 2018. http://www.theses.fr/2018SORUS090.

Full text
Abstract:
This thesis addresses the supervised segmentation of a video stream into fragments corresponding to activities of daily living, within the application context of smart homes. Distinguishing between gestures, actions and activities, it focuses on activities with a high semantic level, such as "Cooking" or "Having a meal", as opposed to actions such as "Cutting food". To this end, it builds on the DOHT (Deeply Optimized Hough Transform) algorithm, a state-of-the-art method based on a voting paradigm (the Hough transform). First, the DOHT algorithm is adapted to fuse information from different sensors at three different levels of the algorithm; the effect of these three fusion levels is analysed and their effectiveness demonstrated by an evaluation on a dataset of daily-living actions. 
Next, a review of existing datasets is presented. Noting the lack of videos suitable for the segmentation and classification (detection) of activities with a high semantic level, a new dataset is introduced: recorded in a realistic environment, under conditions as close as possible to the target application, it contains long, unsegmented videos suited to a detection context. Finally, a hierarchical approach built from DOHT algorithms is proposed to recognise high-level activities; this two-level approach decomposes the problem into unsupervised action detection followed by detection of the desired activities.
APA, Harvard, Vancouver, ISO, and other styles
36

Chroboczek, Martin. "Detekce objektů pomocí Houghovy transformace." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2014. http://www.nusl.cz/ntk/nusl-236127.

Full text
Abstract:
This diploma thesis deals with object detection using the mathematical technique known as the Hough transform. The Hough transform is treated in general terms, from its simplest use for detecting elementary, analytically describable shapes such as lines, circles and ellipses, to its sophisticated use for detecting complex objects that are practically impossible to describe analytically, such as cars or pedestrians, which are detected on the basis of photographic records of these objects. The document thus maps the definition and use of the individual Hough transform sub-techniques, together with their basic classification into probabilistic and non-probabilistic methods. The work then culminates in a description of the state-of-the-art technique called Class-Specific Hough Forests for Object Detection, introducing its definition, the training procedure on a provided dataset and the detection of test patterns. In conclusion, a generally trainable object detector using this technique is designed and implemented, and its quality is evaluated experimentally.
APA, Harvard, Vancouver, ISO, and other styles
37

Nalavolu, Praveen Reddy. "PERFORMANCE ANALYSIS OF SRCP IMAGE BASED SOUND SOURCE DETECTION ALGORITHMS." UKnowledge, 2010. http://uknowledge.uky.edu/gradschool_theses/50.

Full text
Abstract:
Steered Response Power based algorithms are widely used for finding sound source locations using microphone array systems. SRCP-PHAT is one such algorithm that performs robustly under noisy and reverberant conditions; it creates a likelihood function over the field of view. This thesis applies image processing methods to SRCP-PHAT images, exploiting the difference in power levels and pixel patterns to discriminate between sound source and background pixels. Hough Transform based ellipse detection is used to identify sound source locations by finding the centers of the elliptical edge pixel regions typical of source patterns. Monte Carlo simulations of an eight-microphone perimeter array with single and multiple sound sources are used to simulate the test environment, and the area under the receiver operating characteristic (ROC) curve is used to analyze algorithm performance. Performance was compared to a simpler algorithm involving Canny edge detection and image averaging, and to an algorithm based simply on the magnitude of local maxima in the SRCP image. The analysis shows that the Canny edge detection based method performed better in the presence of coherent noise sources.
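The simplest comparison algorithm described above, based on the magnitude of local maxima in the SRCP image, can be approximated by a peak-picking sketch (NumPy, with a synthetic two-source power map standing in for a real SRCP-PHAT output; the map size, peak positions and threshold are assumptions for illustration):

```python
import numpy as np

def local_maxima(power, threshold):
    """Return (row, col) of cells that exceed the threshold and dominate
    their 8-neighbourhood -- candidate source locations in an SRCP map."""
    peaks = []
    H, W = power.shape
    for r in range(1, H - 1):
        for c in range(1, W - 1):
            v = power[r, c]
            if v > threshold and v == power[r - 1:r + 2, c - 1:c + 2].max():
                peaks.append((r, c))
    return peaks

# toy SRCP-like map: two Gaussian "sources" at (row 30, col 12) and (10, 40)
y, x = np.mgrid[0:50, 0:50]
power = (np.exp(-((x - 12) ** 2 + (y - 30) ** 2) / 20.0)
         + 0.8 * np.exp(-((x - 40) ** 2 + (y - 10) ** 2) / 20.0))
peaks = local_maxima(power, 0.5)
print(peaks)  # → [(10, 40), (30, 12)]
```

Both planted sources are recovered; the thesis's point is that edge- and shape-based methods can outperform this plain peak magnitude test under coherent noise.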
APA, Harvard, Vancouver, ISO, and other styles
38

Fernandes, Leandro Augusto Frata. "On the generalization of subspace detection in unordered multidimensional data." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2010. http://hdl.handle.net/10183/30610.

Full text
Abstract:
This dissertation presents a generalized closed-form framework for detecting data alignments in large unordered noisy multidimensional datasets. In this approach, the intended type of data alignment may be a geometric shape (e.g., straight line, plane, circle, sphere, conic section, among others) or any other structure with arbitrary dimensionality that can be characterized by a linear subspace. The detection is performed using a three-step process. In the initialization, a p (n − p)-dimensional parameter space is defined in such a way that each point in this space represents an instance of the intended alignment, described by a p-dimensional subspace in some n-dimensional domain. In turn, an accumulator array is created as the discrete representation of the parameter space. In the second step, each input entry (also a subspace in the n-dimensional domain) is mapped to the parameter space as the set of points representing the intended p-dimensional subspaces that contain or are contained by the entry. As the input entries are mapped, the bins of the accumulator related to such a mapping are incremented by the importance value of the entry. 
The final step retrieves the p-dimensional subspaces that best fit the input data as the local maxima in the accumulator array. The proposed parameterization is independent of the geometric properties of the alignments to be detected, and the mapping procedure is independent of the type of input data and automatically adapts to entries of arbitrary dimensionality. This allows the proposed approach to be applied, without changes, in a broad range of applications as a pattern detection tool. Given its general nature, optimizations developed for the framework immediately benefit all detection cases. A software implementation of the proposed technique is demonstrated, showing that it can be applied in simple detection cases as well as in the concurrent detection of multiple kinds of alignments with different geometric interpretations, in datasets containing multiple types of data. The dissertation also presents an extension of the general detection scheme to data with Gaussian-distributed uncertainty; the proposed extension produces smoother distributions of values in the accumulator array and makes the framework more robust to the detection of spurious subspaces.
APA, Harvard, Vancouver, ISO, and other styles
39

Přidal, Oldřich. "Stanovení podobnosti objektů." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2011. http://www.nusl.cz/ntk/nusl-219208.

Full text
Abstract:
The aim of this thesis was to create a program for object finding, object segmentation and object-similarity detection in images; the objects are represented by cars. The theoretical part of the thesis describes image acquisition, image pre-processing, geometrical transforms and the Hough transform, as well as basic morphological operations, corner detection algorithms and methods of object-similarity detection. The practical part covers the realization of the individual stages, from image acquisition through the analysis of the main program and auxiliary functions to the evaluation of the similarity results. The main program is divided into four parts: the first pre-processes the image, the second applies the geometrical transforms, the third detects object similarity and the last presents the results. The algorithm is implemented in C++ using the OpenCV library.
APA, Harvard, Vancouver, ISO, and other styles
40

Juránková, Markéta. "Parametrizace bodů a čar pomocí paralelních souřadnic pro Houghovu transformaci." Doctoral thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2014. http://www.nusl.cz/ntk/nusl-261242.

Full text
Abstract:
This doctoral thesis focuses on the use of parallel coordinates for the parameterization of lines and points. In a parallel coordinate system, the coordinate axes are mutually parallel. A point in two-dimensional space is represented in parallel coordinates as a line, and a line as a point. This can be exploited for the Hough transform, a method in which points of interest vote in a parameter space for a given hypothesis. Parameterization using parallel coordinates requires only the rasterization of line segments and is therefore very fast and accurate. In the thesis, this parameterization is demonstrated on the detection of matrix codes and vanishing points.
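The point-line duality behind this parameterization can be checked numerically. In parallel coordinates with the two axes a distance d apart, the point (x, y) maps to the line joining (0, x) and (d, y); for points on y = m*x + b (m ≠ 1), all such lines meet at (d/(1−m), b/(1−m)), which is the single bin a PCLines-style accumulator collects votes in (a minimal sketch with illustrative values, not code from the thesis):

```python
# Parallel-coordinate image of a 2-D point (x, y): the line from
# (0, x) on the first axis to (d, y) on the second axis.
d = 1.0
m, b = 0.5, 2.0  # the 2-D line y = m*x + b whose dual point we expect

def pc_line_at(x, y, s):
    """Height of the parallel-coordinate image of point (x, y) at abscissa s."""
    return x + (y - x) * s / d

P = (d / (1 - m), b / (1 - m))   # predicted common intersection
for x in [0.0, 1.0, 2.0, 5.0]:
    y = m * x + b
    # every point of the 2-D line maps to a line through P
    assert abs(pc_line_at(x, y, P[0]) - P[1]) < 1e-9
print(P)  # → (2.0, 4.0)
```

Because the images of collinear points are concurrent lines, detecting a 2-D line reduces to rasterizing line segments into an accumulator and finding the peak, which is the efficiency argument the abstract makes.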
APA, Harvard, Vancouver, ISO, and other styles
41

Hříbek, Petr. "Detekce elipsy v obraze." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2008. http://www.nusl.cz/ntk/nusl-235427.

Full text
Abstract:
The thesis introduces methods used for ellipse detection. Each method is described theoretically in its own subsection, including the Hough transform, the randomized Hough transform, RANSAC, genetic algorithms and various improvements and optimizations. The thesis then describes modifications of the existing procedures that achieve better results. The penultimate chapter presents tests of the speed, quality and accuracy of the implemented algorithms, and the thesis closes with the conclusions of the testing and a discussion of the results.
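Of the surveyed methods, RANSAC is the easiest to sketch. Below is a hedged pure-Python illustration of the RANSAC idea applied to circle fitting (circles rather than full ellipses, so that the minimal sample is three points; the data, iteration count and inlier tolerance are synthetic assumptions, not values from the thesis):

```python
import math, random

def circle_from_3(p1, p2, p3):
    """Circumcircle (cx, cy, r) of three points, or None if collinear."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    d = 2 * (x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2))
    if abs(d) < 1e-12:
        return None
    ux = ((x1**2 + y1**2) * (y2 - y3) + (x2**2 + y2**2) * (y3 - y1)
          + (x3**2 + y3**2) * (y1 - y2)) / d
    uy = ((x1**2 + y1**2) * (x3 - x2) + (x2**2 + y2**2) * (x1 - x3)
          + (x3**2 + y3**2) * (x2 - x1)) / d
    return ux, uy, math.hypot(x1 - ux, y1 - uy)

def ransac_circle(points, iters=200, tol=0.3, seed=1):
    """Keep the 3-point circle hypothesis supported by the most inliers."""
    rng = random.Random(seed)
    best, best_inl = None, 0
    for _ in range(iters):
        model = circle_from_3(*rng.sample(points, 3))
        if model is None:
            continue
        cx, cy, r = model
        inl = sum(abs(math.hypot(x - cx, y - cy) - r) < tol for x, y in points)
        if inl > best_inl:
            best, best_inl = model, inl
    return best, best_inl

# 30 exact points on a circle (centre (4, -1), radius 3) plus 10 outliers
pts = [(4 + 3 * math.cos(0.2 * i), -1 + 3 * math.sin(0.2 * i)) for i in range(30)]
noise = random.Random(2)
pts += [(4 + noise.uniform(-1.5, 1.5), -1 + noise.uniform(-1.5, 1.5))
        for _ in range(10)]
(cx, cy, r), inliers = ransac_circle(pts)
print(round(cx, 3), round(cy, 3), round(r, 3), inliers)  # → 4.0 -1.0 3.0 30
```

The consensus step rejects the 10 interior outliers, recovering the circle exactly; an ellipse variant would sample five points and fit a conic instead.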
APA, Harvard, Vancouver, ISO, and other styles
42

Labaj, Tomáš. "Detekce křivek v obraze." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2009. http://www.nusl.cz/ntk/nusl-236653.

Full text
Abstract:
This thesis deals with curve detection in images. First, the current methods used in this area of image processing are summarized and described. The main topic of the thesis is a comparison of methods for parametric curve detection, such as the Hough transform and RANSAC-based methods. These methods are compared according to several criteria that are the most important for precise edge detection.
APA, Harvard, Vancouver, ISO, and other styles
43

Doležal, Petr. "Automatická kontrola správnosti sestavení výrobku." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2010. http://www.nusl.cz/ntk/nusl-218289.

Full text
Abstract:
This diploma thesis evaluates methods for verifying key characteristics of a product using digital image processing techniques. First, the motivation for the work is described, followed by a list of the methods used, such as the Hough Circle Transform and the flood fill (seed fill) algorithm. In addition, a new approach for compensating a non-uniformly illuminated scene, based on surface modelling with Bézier surfaces, was developed. The algorithm was implemented in the C++ programming language, and some parts were also simulated in the MATLAB environment. The algorithm was evaluated by the percentage of the required parameters that were correctly recognized; the efficiency of the implementation was also important to the author.
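The flood fill (seed fill) algorithm mentioned above can be sketched as a breadth-first traversal over a pixel grid (a generic pure-Python version for illustration, not the thesis's C++ implementation):

```python
from collections import deque

def flood_fill(grid, seed, new_val):
    """4-connected flood fill: recolour the component containing `seed`."""
    rows, cols = len(grid), len(grid[0])
    sr, sc = seed
    old = grid[sr][sc]
    if old == new_val:
        return grid
    q = deque([seed])
    grid[sr][sc] = new_val
    while q:
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == old:
                grid[nr][nc] = new_val
                q.append((nr, nc))
    return grid

img = [[0, 0, 1, 0],
       [0, 1, 1, 0],
       [1, 1, 0, 0],
       [0, 0, 0, 1]]
flood_fill(img, (1, 1), 2)
print(img)  # the connected blob of 1s around (1, 1) becomes 2s
```

In an inspection pipeline like the one described, such a fill isolates one connected region (e.g., a drilled hole found by the circle transform) so its properties can be measured.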
APA, Harvard, Vancouver, ISO, and other styles
44

Dobrovolný, Martin. "Detekce a rozpoznání maticového kódu v reálném čase." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2012. http://www.nusl.cz/ntk/nusl-236564.

Full text
Abstract:
This work deals with detecting and recognizing matrix codes, and experiments with the use of the PCLines algorithm, which employs the Hough transform and parallel coordinates for fast line detection. The suggested algorithm applies PCLines twice to detect sets of parallel lines, and the distortion of the image under projection is resolved using the cross-ratio equation. Optimizations for real-time operation were made and an experimental implementation was created. The test results show that PCLines is a viable way to detect matrix codes.
APA, Harvard, Vancouver, ISO, and other styles
45

Kałuża, Marian. "Detekce, lokalizace a dekódování QR kódu v obraze." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2012. http://www.nusl.cz/ntk/nusl-236620.

Full text
Abstract:
This thesis describes a QR Code detector based on detecting edge lines using the Hough Transform and a point-to-line parametrization. In general, the presented method can also be used for detecting other types of matrix codes. The approach searches the accumulated Hough space for evidence of the matrix code as one compound object, detecting parallel lines by finding groups of concurrent lines' images in the transformed space. This contrasts with the common approach, which first detects the lines and calculates the vanishing point afterwards, omitting much of the useful information stored in the Hough space that can aid the detection of vanishing points. The purpose of this thesis is the research of methods that can detect the QR Code despite rotation, perspective distortion or uneven illumination. The experimental evaluation shows that the presented algorithm is stable, accurate and tolerant to high degrees of perspective deformation.
APA, Harvard, Vancouver, ISO, and other styles
46

Modahl, Ylva, and Caroline Skoglund. "Lokalisering av brunnar i ELISpot." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-254257.

Full text
Abstract:
Health is a fundamental human right. To increase global health, research in the medical sector is of great importance. Decreasing the time consumption of biomedical testing could accelerate the research and development of new drugs and vaccines. This could be achieved by automating biomedical analysis using computerized methods. In order to perform analysis on pictures of biomedical tests, it is important to identify the area of interest (AOI) of the test. For example, cells and bacteria are commonly grown in petri dishes; in this case the AOI is the bottom area of the dish, since this is where the object of analysis is located. This study was performed with the aim of comparing a few computerized methods for identifying the AOI in pictures of biomedical tests. In the study, biomedical images from a testing method called ELISpot have been used. ELISpot uses plates with up to 96 circular wells, and pictures of the separate wells were used in order to find the AOI corresponding to the bottom area of each well. The focus has been on comparing the performance of three edge detection methods, specifically their ability to accurately detect the edges of the well. Furthermore, a method for identifying a circle based on the detected edges was used to specify the AOI. The study shows that methods using second-order derivatives for edge detection give the best results with regard to robustness.
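The second-order-derivative edge detection that the study found most robust can be illustrated in one dimension: smooth a step edge with a Gaussian, take the discrete second derivative, and locate its zero-crossing, which is the core idea behind Laplacian-of-Gaussian detectors (a synthetic sketch, not code from the thesis):

```python
import math

# a 1-D step edge at index 10, the simplest possible test signal
signal = [0.0] * 10 + [1.0] * 10

# 5-tap Gaussian smoothing kernel (sigma = 1), normalized to sum to 1
kernel = [math.exp(-k * k / 2.0) for k in (-2, -1, 0, 1, 2)]
s = sum(kernel)
kernel = [v / s for v in kernel]

# convolve with edge replication at the borders
smoothed = [sum(kernel[j] * signal[min(max(i + j - 2, 0), len(signal) - 1)]
                for j in range(5)) for i in range(len(signal))]

# discrete second derivative; its sign change marks the edge
second = [smoothed[i - 1] - 2 * smoothed[i] + smoothed[i + 1]
          for i in range(1, len(signal) - 1)]
edges = [i + 2 for i in range(len(second) - 1)
         if second[i] * second[i + 1] < -1e-6]  # genuine zero-crossings only
print(edges)  # → [10]
```

The zero-crossing lands exactly at the step; in 2-D, the same principle applied to a circular well edge yields the closed contour that the circle-fitting step then uses.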
APA, Harvard, Vancouver, ISO, and other styles
47

Kříž, Petr. "Aplikace pro analýzu pohybů tenisového hráče." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2016. http://www.nusl.cz/ntk/nusl-241050.

Full text
Abstract:
This thesis deals with the segmentation of a tennis player's body parts for analysis of the player's motion. The testing code was written in C++ using the OpenCV library. Image processing techniques such as thresholding, background subtraction and finding the largest contours were implemented, and a linear triangulation technique was used to calculate the 3D coordinates of the segmented points.
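Background subtraction followed by extraction of the largest connected region, as mentioned above, can be sketched in pure Python (the synthetic frames, threshold and 4-connectivity are illustrative assumptions; the thesis itself does these steps with OpenCV in C++):

```python
from collections import deque

def largest_blob(mask):
    """Size and seed pixel of the largest 4-connected foreground component."""
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    best = (0, None)
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                size, q = 0, deque([(r, c)])
                seen[r][c] = True
                while q:
                    cr, cc = q.popleft()
                    size += 1
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        nr, nc = cr + dr, cc + dc
                        if (0 <= nr < rows and 0 <= nc < cols
                                and mask[nr][nc] and not seen[nr][nc]):
                            seen[nr][nc] = True
                            q.append((nr, nc))
                if size > best[0]:
                    best = (size, (r, c))
    return best

background = [[10] * 6 for _ in range(5)]
frame = [row[:] for row in background]
for r in range(1, 4):               # a 3x3 "player" blob
    for c in range(2, 5):
        frame[r][c] = 200
frame[0][0] = 40                    # a small noise pixel
mask = [[abs(f - b) > 25 for f, b in zip(fr, br)]
        for fr, br in zip(frame, background)]
size, seed = largest_blob(mask)
print(size, seed)  # → 9 (1, 2)
```

Thresholding the frame-background difference gives a foreground mask; keeping only the largest component suppresses the isolated noise pixel while retaining the player region.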
APA, Harvard, Vancouver, ISO, and other styles
48

Reis, Filho Ivan José dos. "Captura e reconstrução da marcha humana utilizando marcadores passivos." Universidade Federal de São Carlos, 2016. https://repositorio.ufscar.br/handle/ufscar/8067.

Full text
Abstract:
The computational analysis of human movement can aid in the treatment and rehabilitation of people with some kind of motor disability. Physiotherapeutic studies, called kinematic evaluation, aim to direct treatments to correct body dysfunctions caused by genetic malformation, degenerative diseases or posture problems. A computer system can be used to precisely describe movements, anatomical positions and angular terms during gait; to that end, the movements need to be captured and reconstructed in three-dimensional space before being evaluated. This work consists of the study, development and evaluation of a system for capturing human movement based on recorded videos, using low-cost computational resources and adapted to the reality of gait analysis. Automating the computational capture of these movements, besides aiding the neurological or orthopaedic motor rehabilitation of people with such disabilities, will broaden its application in the community. 
In this context, a survey of the main computational motion capture techniques was conducted, focusing on optical systems with passive markers, and four main stages were defined: camera calibration, camera synchronization, marker detection and three-dimensional reconstruction. Computational experiments were carried out to support the practical and theoretical understanding of the work, and gait recordings of children were used as a case study for the implementation of the calibration, marker detection and three-dimensional reconstruction stages.
APA, Harvard, Vancouver, ISO, and other styles
49

Jošth, Radovan. "Využití GPU pro algoritmy grafiky a zpracování obrazu." Doctoral thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2015. http://www.nusl.cz/ntk/nusl-261274.

Full text
Abstract:
This thesis describes several selected algorithms that were primarily developed for CPUs but, given the high demand for their improvement, were adapted to run on GPGPUs (general-purpose graphics processing units); modifying these algorithms was the goal of our research, which was carried out using the CUDA interface. The thesis is organized around the three groups of algorithms investigated: real-time object detection, spectral image analysis and real-time line detection. For real-time object detection we chose the LRD and LRP features; spectral image analysis was investigated using the PCA and NTF algorithms; and for real-time line detection we used two different modified accumulation schemes for the Hough transform. Before the chapters devoted to the individual algorithms and the subject of study, the introductory chapters explain the motivation for the research and give a brief overview of GPU and GPGPU architecture. The final chapters summarize the author's own contribution, its focus, the results achieved and the approach chosen to achieve them; the results include several developed products.
APA, Harvard, Vancouver, ISO, and other styles
50

Alves, Thiago Waszak. "Sistema de detecção em tempo real de faixas de sinalização de trânsito para veículos inteligentes utilizando processamento de imagem." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2017. http://hdl.handle.net/10183/157872.

Full text
Abstract:
Mobility is a hallmark of our civilization. Both freight and passenger transport share a huge infrastructure of connecting links operated with the support of a sophisticated logistics system. As an optimized symbiosis of mechanical and electrical modules, vehicles evolve continuously with the integration of technological advances and are engineered to offer the best in comfort, safety, speed and economy. Regulations organize the flow of road transport and its interactions, stipulating rules to avoid conflicts. But driving can become stressful under different conditions, leaving human drivers prone to misjudgments and creating accident conditions. Efforts to reduce traffic accidents, which may cause injuries and even deaths, range from re-education campaigns to new technologies. These topics have increasingly attracted the attention of researchers and industry to image-based Intelligent Transportation Systems that aim to prevent accidents and assist the driver in interpreting urban signage. This work presents a study on techniques for real-time detection of traffic lane markings in urban and intercity environments, with the goal of highlighting the lane markings for the driver or for an autonomous vehicle, providing greater control of the traffic area assigned to the vehicle and issuing alerts for possible risk situations.
The main contribution of this work is to optimize how the image processing techniques are used to perform lane extraction, in order to reduce the computational cost of the system. To achieve this optimization, small search areas of fixed size and dynamic positioning were defined. These search areas isolate the regions of the image where the lane markings are contained, reducing by up to 75% the total area to which the lane-extraction techniques are applied. The experimental results showed that the algorithm is robust under several variations of ambient lighting, shadows and pavements of different colors, both in urban environments and on highways and motorways. The results show an average correct detection rate of 98.1%, with an average operating time of 13.3 ms.
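The idea of fixed-size, dynamically positioned search areas can be sketched as follows: each window is re-centred on the lane position found in the previous frame, so the expensive extraction runs only inside it. The image and window dimensions below are made-up values chosen so that two windows cover 25% of the frame, matching the up-to-75% reduction the abstract reports; none of the names come from the thesis.

```python
# Illustrative sketch of fixed-size, dynamically positioned search
# windows for lane extraction. Sizes are toy values, not from the thesis.
IMG_W, IMG_H = 640, 480
WIN_W, WIN_H = 160, 240          # fixed-size search window

def search_window(prev_x, prev_y):
    """Clamp a WIN_W x WIN_H window centred on the last detection."""
    x0 = min(max(prev_x - WIN_W // 2, 0), IMG_W - WIN_W)
    y0 = min(max(prev_y - WIN_H // 2, 0), IMG_H - WIN_H)
    return x0, y0, WIN_W, WIN_H

# Two windows (left and right lane marking) cover a quarter of the frame.
left = search_window(150, 400)
right = search_window(500, 400)
covered = 2 * WIN_W * WIN_H
reduction = 1 - covered / (IMG_W * IMG_H)
print(f"{reduction:.0%} of the frame skipped")  # -> 75% of the frame skipped
```

Because the windows track the previous detection, a lane that drifts across the image stays inside its window without ever enlarging the processed area.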
APA, Harvard, Vancouver, ISO, and other styles
