
Dissertations / Theses on the topic 'Super computers'



Consult the top 50 dissertations / theses for your research on the topic 'Super computers.'




1

Majeed, Taban Fouad. "Segmentation, super-resolution and fusion for digital mammogram classification." Thesis, University of Buckingham, 2016. http://bear.buckingham.ac.uk/162/.

Full text
Abstract:
Mammography is one of the most common and effective techniques used by radiologists for the early detection of breast cancer. Recently, computer-aided detection/diagnosis (CAD) has become a major research topic in medical imaging and has been widely applied in clinical situations. According to statistics, early detection of cancer can reduce mortality rates by 30% to 70%; detection and diagnosis at an early stage are therefore very important. CAD systems are designed primarily to assist radiologists in detecting and classifying abnormalities in medical scan images, but the main challenge hindering their wider deployment is the difficulty of achieving accuracy rates that improve radiologists’ performance. The detection and diagnosis of breast cancer face two main issues: the accuracy of the CAD system, and the radiologists’ performance in reading and diagnosing mammograms. This thesis focuses on the accuracy of CAD systems. In particular, we investigated two main stages of CAD systems: pre-processing (enhancement and segmentation), and feature extraction and classification. Through this investigation, we make five main contributions to the field of automatic mammogram analysis. In automated mammogram analysis, image segmentation techniques are employed for breast boundary or region-of-interest (ROI) extraction. In most Medio-Lateral Oblique (MLO) views of mammograms, the pectoral muscle represents a predominant density region, and it is important to detect and segment out this muscle region during pre-processing because it could bias the detection of breast cancer. An important reason for breast border extraction is that it limits the search zone for abnormalities to the region of the breast, without undue influence from the background of the mammogram. Therefore, we propose a new scheme for breast border extraction, artifact removal, and removal of the annotations found in the background of mammograms.
This was achieved using a local adaptive threshold that creates a binary mask for the images, followed by morphological operations. Furthermore, an adaptive algorithm is proposed to detect and remove the pectoral muscle automatically. Feature extraction is another important step of any image-based pattern classification system; the performance of the resulting classifier depends very much on how well the extracted features represent the object of interest. We investigated a range of texture feature sets such as the Local Binary Pattern Histogram (LBPH), the Histogram of Oriented Gradients (HOG) descriptor, and the Gray Level Co-occurrence Matrix (GLCM). We propose the use of multi-scale features based on wavelets and local binary patterns for mammogram classification: we extract histograms of LBP codes from the original image as well as from the wavelet sub-bands, and combine the extracted features into a single feature set. Experimental results show that combining LBPH features obtained from the original image with LBPH features obtained from the wavelet domain increases the classification accuracy (sensitivity and specificity) compared with LBPH features extracted from the original image alone. The feature vector can be large for some feature extraction schemes and may contain redundant features that hurt classification accuracy. Therefore, feature vector size reduction is needed to achieve higher accuracy as well as efficiency (processing and storage). We reduced the size of the features by applying principal component analysis (PCA) to the feature set, choosing only a small number of eigen-components to represent the features. Experimental results showed improved mammogram classification accuracy with this small feature set compared with the original feature vector.
We then investigated and propose the use of feature and decision fusion in mammogram classification. In feature-level fusion, two or more feature sets extracted from the same mammogram are concatenated into a single larger fused feature vector to represent the mammogram. In decision-level fusion, by contrast, the results of individual classifiers based on distinct features extracted from the same mammogram are combined into a single decision; here the final decision is made by majority voting among the results of the individual classifiers. Finally, we investigated the use of super-resolution as a pre-processing step to enhance the mammograms prior to extracting features. From the preliminary experimental results we conclude that using enhanced mammograms has a positive effect on the performance of the system. Overall, our combination of proposals outperforms several existing schemes published in the literature.
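The multi-scale LBP feature idea summarised in this abstract can be sketched in a few lines of Python. This is a minimal illustration, not the thesis's implementation: the "wavelet sub-band" here is approximated by a crude 2x2 Haar-style averaging, and a full pipeline would also consider rotation-invariant LBP codes.

```python
import numpy as np

def lbp_histogram(img, bins=256):
    # 8-neighbour local binary pattern codes for the interior pixels,
    # returned as a normalised histogram
    c = img[1:-1, 1:-1]
    nbrs = [img[:-2, :-2], img[:-2, 1:-1], img[:-2, 2:], img[1:-1, 2:],
            img[2:, 2:], img[2:, 1:-1], img[2:, :-2], img[1:-1, :-2]]
    codes = np.zeros(c.shape, dtype=np.int32)
    for bit, n in enumerate(nbrs):
        codes |= (n >= c).astype(np.int32) << bit
    h = np.bincount(codes.ravel(), minlength=bins).astype(float)
    return h / h.sum()

def multiscale_lbp_features(img):
    # concatenate LBP histograms from the original image and a crude
    # Haar-style approximation sub-band (stand-in for a wavelet transform)
    approx = (img[::2, ::2] + img[::2, 1::2]
              + img[1::2, ::2] + img[1::2, 1::2]) / 4.0
    return np.concatenate([lbp_histogram(img), lbp_histogram(approx)])
```

The concatenated vector is what would then be reduced with PCA and fed to a classifier.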
APA, Harvard, Vancouver, ISO, and other styles
2

Al-Hassan, Nadia. "Mathematically inspired approaches to face recognition in uncontrolled conditions : super resolution and compressive sensing." Thesis, University of Buckingham, 2014. http://bear.buckingham.ac.uk/6/.

Full text
Abstract:
Face recognition under uncontrolled conditions using surveillance cameras is becoming essential for establishing the identity of a person at a distance from the camera and for providing safety and security against terrorist attack, robbery and crime. The performance of face recognition on low-resolution, degraded images of low quality, relative to images of high quality and good resolution/size, is therefore considered one of the most challenging tasks and constitutes the focus of this thesis. The work in this thesis is designed to investigate these issues further, with the following as our main aim: “To investigate face identification from a distance and under uncontrolled conditions by primarily addressing the problem of low-resolution images using existing/modified mathematically inspired super resolution schemes that are based on the emerging new paradigm of compressive sensing and non-adaptive dictionaries based super resolution.” We shall first investigate and develop the compressive sensing (CS) based sparse representation of a sample image to reconstruct a high-resolution image for face recognition, taking different approaches to constructing CS-compliant dictionaries such as the Gaussian Random Matrix and the Toeplitz Circular Random Matrix. In particular, our focus is on constructing CS non-adaptive dictionaries (independent of face image information), in contrast with existing image-learnt dictionaries, which satisfy some form of the Restricted Isometry Property (RIP) sufficient to comply with the CS theorem regarding the recovery of sparsely represented images. We shall demonstrate that these CS dictionary techniques for resolution enhancement make it possible to develop scalable face recognition schemes under uncontrolled conditions and at a distance.
Secondly, we shall compare the strength of the sufficient CS property across the various types of dictionaries and demonstrate that the image-learnt dictionary falls far short of satisfying the RIP for compressive sensing. Thirdly, we propose dictionaries based on the high-frequency coefficients of the training set and investigate the impact of using such dictionaries on the space of feature vectors of the low-resolution image for face recognition when applied in the wavelet domain. Finally, we test the performance of the developed schemes on CCTV images with an unknown model of degradation, and show that these schemes significantly outperform existing techniques developed for such a challenging task. However, the performance is still not comparable to what can be achieved in a controlled environment, and hence we identify remaining challenges to be investigated in the future.
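The non-adaptive dictionary idea in this abstract can be illustrated with a small sketch: a Gaussian random matrix with unit-norm columns (such matrices satisfy a form of the RIP with high probability) and a greedy sparse recovery step. Orthogonal Matching Pursuit is used here as a generic stand-in for the recovery algorithm; the thesis's actual reconstruction pipeline is more elaborate.

```python
import numpy as np

def gaussian_dictionary(m, n, rng):
    # non-adaptive CS dictionary: i.i.d. Gaussian entries, unit-norm columns,
    # built with no reference to face image content
    D = rng.standard_normal((m, n))
    return D / np.linalg.norm(D, axis=0)

def omp(D, y, k):
    # Orthogonal Matching Pursuit: greedily select k atoms, re-fitting the
    # coefficients by least squares after each selection
    residual, support = y.astype(float).copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x
```

With enough measurements relative to the sparsity level, the sparse code is recovered exactly from the random (non-adaptive) dictionary.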
3

Luengo, Imanol. "Hierarchical super-regions and their applications to biological volume segmentation." Thesis, University of Nottingham, 2018. http://eprints.nottingham.ac.uk/48719/.

Full text
Abstract:
Advances in biological imaging technology have made it possible to image sub-cellular samples at unprecedented resolution. Using tomographic reconstruction, biological researchers can now obtain volumetric reconstructions of whole cells in a near-native state with cryo-Soft X-ray Tomography, or of even smaller sub-cellular regions with cryo-Electron Tomography. These technologies allow visualisation, exploration and analysis of very exciting biological samples, but they do not come without challenges. Poor signal-to-noise ratio, low contrast, and other sample-preparation and reconstruction artefacts make these 3D datasets a great challenge for the image processing and computer vision community. With no annotations previously available, owing to the biological sensitivity of the datasets (which keeps them from being released publicly), and with scarce prior research in the field, (semi-)automatic segmentation of these datasets tends to fail. In order to bring the state of the art in computer vision closer to the biological community and overcome the difficulties mentioned above, we build towards a semi-automatic segmentation framework. To do so, we first introduce superpixels: groups of adjacent pixels that share similar characteristics, reducing a whole image to a few superpixels that still preserve its important information. Superpixels have been used in the recent literature to speed up object detection, tracking and scene-parsing systems, since the reduced representation of the image with a few regions allows faster processing in the algorithms subsequently applied over them. Two novel superpixel algorithms will be presented, introducing with them what we call a Super-Region Hierarchy, which is composed of similar regions agglomerated hierarchically.
We will show that exploiting this hierarchy in both directions (bottom-up and top-down) helps improve the quality of the superpixels and generalises them to images of large dimensionality. Superpixels are then extended to 3D (as supervoxels), resulting in variations of the two new algorithms ready to be applied to large biological volumes. We will show that representing biological volumes with supervoxels not only dramatically reduces the computational complexity of the analysis (billions of voxels can be accurately represented with a few thousand supervoxels), but also improves the accuracy of the analysis itself, since grouping voxel features within supervoxels reduces the influence of the noisy local neighbourhoods in these datasets. These regions are only as powerful as the features that represent them, and thus an in-depth discussion of biological features and grouping methods leads the way to our first interactive segmentation model, which gathers contextual information from super-regions and hierarchical segmentation layers to allow segmentation of large regions of the volume with little user input (in the form of annotations or scribbles). To further improve the interactive segmentation model, a novel algorithm is presented to extract the most representative (or relevant) sub-volumes from a 3D dataset, since the lack of training data is one of the deciding factors in the failure of automatic approaches. We will show that by serving small sub-volumes to the user to be segmented, and applying Active Learning to select the next best sub-volume, the number of user interactions needed to completely segment a 3D volume is dramatically reduced. A novel classifier based on Random Forests is presented to better benefit from these regions of known shape. To finish, SuRVoS is introduced: a novel, fully functional and publicly available workbench based on the work presented here.
It is a software tool that combines most of these ideas, problem formulations and algorithms into a single user interface, allowing a user to interactively segment arbitrary volumetric datasets in a very intuitive and easy-to-use manner. We have thus covered the topics from data representation to segmentation of biological volumes, and provide a software tool that will hopefully help close the gap between biological imaging and computer vision, allowing annotations (or ground truth, as it is known in computer vision) to be generated much more quickly, with the aim of gathering a large biological segmentation database to be used in future large-scale, fully automatic projects.
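The voxel-to-supervoxel reduction described in this abstract can be caricatured with a regular-grid grouping. Real supervoxels (e.g. SLIC-style) adapt to image boundaries; this fixed-grid version is only meant to show how averaging features within regions shrinks the problem and suppresses voxel-level noise.

```python
import numpy as np

def grid_supervoxels(vol, s):
    # partition the volume into s x s x s blocks (a deliberately crude
    # stand-in for supervoxels) and represent each block by its mean
    # intensity, averaging away local voxel noise
    z, y, x = (d // s for d in vol.shape)
    blocks = vol[:z * s, :y * s, :x * s].reshape(z, s, y, s, x, s)
    return blocks.mean(axis=(1, 3, 5))
```

With block size 10, a billion-voxel volume is reduced to a million region features; boundary-adaptive supervoxels achieve a similar reduction while following the structures of interest.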
4

Robinson, Matthew Brandon Cleaver Gerald B. "Towards a systematic investigation of weakly coupled free fermionic heterotic string gauge group statistics." Waco, Tex. : Baylor University, 2009. http://hdl.handle.net/2104/5358.

Full text
5

Ragagnin, Antonio [Verfasser], and Klaus [Akademischer Betreuer] Dolag. "From the mass-concentration relation of haloes to GPUs and into the web : a guide on fully utilizing super computers for the largest, cosmological hydrodynamic simulations / Antonio Ragagnin ; Betreuer: Klaus Dolag." München : Universitätsbibliothek der Ludwig-Maximilians-Universität, 2018. http://d-nb.info/1176971727/34.

Full text
6

Achurra, Jeannette M. Arosemena. "Multi-image animation : "Super Hero" /." Online version of thesis, 1989. http://hdl.handle.net/1850/11484.

Full text
7

Azar, Pablo Daniel. "Super-efficient rational proofs." Thesis, Massachusetts Institute of Technology, 2014. http://hdl.handle.net/1721.1/93052.

Full text
Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2014.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 47-49).
Information asymmetry is a central problem in both computer science and economics. In many fundamental problems, an uninformed principal wants to obtain some knowledge from an untrusted expert. This models several real-world situations, such as a manager's relation with her employees, or the delegation of computational tasks to workers over the internet. Because the expert is untrusted, the principal needs some guarantee that the provided knowledge is correct. In computer science, this guarantee is usually provided via a proof, which the principal can verify. Thus, a dishonest expert will always get caught and penalized. In many economic settings, the guarantee that the knowledge is correct is usually provided via incentives. That is, a game is played between expert and principal such that the expert maximizes her utility by being honest. A rational proof is an interactive proof where the prover, Merlin, is neither honest nor malicious, but rational. That is, Merlin acts in order to maximize his own utility. I previously introduced and studied Rational Proofs when the verifier, Arthur, is a probabilistic polynomial-time machine [3]. In this thesis, I characterize super-efficient rational proofs, that is, rational proofs where Arthur runs in logarithmic time. These new rational proofs are very practical. Not only are they much faster than their classical analogues, but they also provide very tangible incentives for the expert to be honest. Arthur only needs a polynomial-size budget, yet he can penalize Merlin by a large quantity if he deviates from the truth.
by Pablo Daniel Azar.
Ph. D.
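The incentive mechanism behind rational proofs can be illustrated with a proper scoring rule such as the Brier score: if Merlin is paid 1 - (report - outcome)^2, his expected reward is uniquely maximised by reporting the true probability. This toy sketch shows only the underlying economic idea, not the protocols from the thesis.

```python
def brier_reward(report, outcome):
    # proper scoring rule: pay 1 - (report - outcome)^2, with outcome in {0, 1}
    return 1.0 - (report - outcome) ** 2

def expected_reward(report, p):
    # Merlin's expected reward when the true probability of outcome = 1 is p;
    # as a strictly concave function of `report`, it peaks exactly at report = p
    return p * brier_reward(report, 1) + (1 - p) * brier_reward(report, 0)
```

Because lying moves the report away from p and strictly lowers the expected payment, a rational (utility-maximising) Merlin reports honestly.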
8

Jain, Vinit. "Deep Learning based Video Super-Resolution in Computer Generated Graphics." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-292687.

Full text
Abstract:
Super-Resolution is a widely studied problem in the field of computer vision, where the purpose is to increase the resolution of, or super-resolve, image data. In Video Super-Resolution, maintaining temporal coherence for consecutive video frames requires fusing information from multiple frames to super-resolve one frame. Current deep learning methods perform video super-resolution, yet most of them focus on working with natural datasets. In this thesis, we use a recurrent back-projection network for working with a dataset of computer-generated graphics, with example applications including upsampling low-resolution cinematics for the gaming industry. The dataset comes from a variety of gaming content, rendered in (3840 x 2160) resolution. The objective of the network is to produce the upscaled version of the low-resolution frame by learning from an input combination of a low-resolution frame, a sequence of neighboring frames, and the optical flow between each neighboring frame and the reference frame. Under the baseline setup, we train the model to perform 2x upsampling from (1920 x 1080) to (3840 x 2160) resolution. In comparison against the bicubic interpolation method, our model achieved better results by a margin of 2dB for Peak Signal-to-Noise Ratio (PSNR), 0.015 for Structural Similarity Index Measure (SSIM), and 9.3 for the Video Multi-method Assessment Fusion (VMAF) metric. In addition, we demonstrate the susceptibility of neural network performance to changes in image compression quality, and the inability of distortion metrics to capture perceptual details accurately.
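The PSNR margin reported above is computed from the mean squared error between a reference frame and a reconstruction. A minimal sketch, assuming 8-bit frames with peak value 255:

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    # Peak Signal-to-Noise Ratio in dB between a reference frame and a
    # reconstructed (e.g. super-resolved or bicubic-upsampled) frame
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```

A "2 dB better than bicubic" result then simply means `psnr(ref, sr_frame) - psnr(ref, bicubic_frame)` averages about 2 over the test set.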
9

Laws, Dannielle Kaye. "Gaming in Conversation: The Impact of Video Games in Second Language Communication." University of Toledo / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=toledo1461800075.

Full text
10

Walsh, David Oliver 1966. "New methods for super-resolution." Thesis, The University of Arizona, 1993. http://hdl.handle.net/10150/291988.

Full text
Abstract:
This thesis presents a new, non-iterative method for super-resolution which we call the direct method. By exploiting the inherent structure of the discrete signal processing environment, the direct method reduces the discrete super-resolution problem to solving a linear set of equations. The direct method is shown to be closely related to the Gerchberg algorithm for super-resolution. A mathematical justification for early termination of the Gerchberg algorithm is presented and the design of optimal termination schemes is discussed. Another new super-resolution method, which we call the SVD method, is presented. The SVD method is based on the direct method and employs SVD techniques to minimize errors in the solution due to noise and aliasing errors on the known frequency samples. The new SVD method is shown to provide results nearly identical to the optimal solution given by the Gerchberg algorithm, with huge savings in time and computational work.
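The Gerchberg algorithm referenced in this abstract alternates between two constraint projections: restoring the known low-frequency samples and enforcing the known spatial support. A one-dimensional sketch follows (the thesis's direct and SVD methods replace this iteration with a linear solve):

```python
import numpy as np

def gerchberg(measured_spec, band_mask, support_mask, iters=300):
    # Gerchberg (Papoulis-Gerchberg) extrapolation: alternate between
    # restoring the measured low-frequency samples and enforcing the
    # known spatial support of the signal
    x = np.zeros(band_mask.size, dtype=complex)
    for _ in range(iters):
        X = np.fft.fft(x)
        X[band_mask] = measured_spec[band_mask]   # known frequency samples
        x = np.fft.ifft(X) * support_mask         # known spatial support
    return x.real
```

Both constraint sets contain the true signal, so the alternating projections never increase the reconstruction error; the direct method obtains the fixed point without iterating.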
11

Pinel, Xavier. "A perturbed two-level preconditioner for the solution of three-dimensional heterogeneous Helmholtz problems with applications to geophysics." Thesis, Toulouse, INPT, 2010. http://www.theses.fr/2010INPT0033/document.

Full text
Abstract:
The topic of this PhD thesis is the development of iterative methods for the solution of large sparse linear systems of equations with possibly multiple right-hand sides given at once. These methods will be used for a specific application in geophysics - seismic migration - related to the simulation of wave propagation in the subsurface of the Earth. Here the three-dimensional Helmholtz equation written in the frequency domain is considered. The finite difference discretization of the Helmholtz equation with the Perfectly Matched Layer formulation produces, when high frequencies are considered, a complex linear system which is large, non-symmetric, non-Hermitian, indefinite and sparse. Thus we propose to study preconditioned flexible Krylov subspace methods, especially minimum residual norm methods, to solve this class of problems. As a preconditioner we consider multi-level techniques and especially focus on a two-level method. This two-level preconditioner has proven efficient for two-dimensional applications, and the purpose of this thesis is to extend it to the challenging three-dimensional case. This leads us to propose and analyze a perturbed two-level preconditioner for a flexible Krylov subspace method, where Krylov methods are used both as a smoother and as an approximate coarse-grid solver.
12

Woods, Matthew. "Image Super-Resolution Enhancements for Airborne Sensors." Thesis, Northwestern University, 2016. http://pqdtopen.proquest.com/#viewpdf?dispub=10193209.

Full text
Abstract:

This thesis discusses the application of advanced digital signal and image processing techniques, particularly the technique known as super-resolution (SR), to enhance the imagery produced by cameras mounted on an airborne platform such as an unmanned aircraft system (UAS). SR is an image processing technology applicable to any digital, pixelated camera that is physically limited by construction to sample a scene with a discrete, m x n pixel array. The straightforward objective of SR is to use mathematics and signal processing to overcome this physical limitation of the m x n array and emulate the “capabilities” of a camera with a higher-density, km x kn (k > 1) pixel array. The exact meaning of “capabilities” in the preceding sentence is application-dependent.

SR is a well-studied field, starting with the seminal 1984 paper by Huang and Tsai. Since that time, a multitude of papers, books, and software solutions have been published on the subject. Although it shares many common aspects with other applications, applying SR to imaging systems on airborne platforms brings a number of unique challenges, as well as opportunities, that are currently neither addressed nor exploited by the state of the art. These include wide field-of-view imagery, optical distortion, oblique viewing geometries, spectral variety from the visible band through the infrared, constant ego-motion, and the availability of supplementary information from inertial measurement sensors. Our primary objective in this thesis is to extend the field of SR by addressing these areas. In our research experiments, we make significant use of both simulated imagery and real video collected from a number of flying platforms.

13

Bergbom, Mattias. "Super-Helices for Hair Modeling and Dynamics." Thesis, Linköping University, Department of Science and Technology, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-10412.

Full text
Abstract:

We present core components of a hair modeling and dynamics solution for the feature film industry. Recent research results in hair simulation are exploited: a dynamics model based on solving the Euler-Lagrange equations of motion for a discretized Cosserat curve is implemented in its entirety. Solutions to the dynamics equations are derived and a framework for symbolic integration is outlined. The resulting system is not unconditionally positive definite, but requires balanced physical parameters in order to be solvable by a regular linear solver. Several implementation examples are presented, as well as a novel modeling technique based on non-linear optimization.

14

Erbay, Fulya. "A Comparative Evaluation Of Super." Master's thesis, METU, 2011. http://etd.lib.metu.edu.tr/upload/12613253/index.pdf.

Full text
Abstract:
In this thesis, it is proposed to obtain high-definition color images using super-resolution algorithms. Resolution enhancement of RGB, HSV and YIQ color-domain images is presented. In this study, three solution methods are presented to improve the resolution of HSV color-domain images. These methods are suggested to overcome the color artifacts of super-resolved images and to decrease the computational complexity of HSV-domain applications. PSNR values are measured and compared with the results of the other two color-domain experiments. In RGB color space, super-resolution algorithms are applied to the three color channels (R, G, B) separately and PSNR values are measured. In the YIQ color domain, only the Y channel is processed with super-resolution algorithms, because Y is the luminance component of the image and the most important channel for improving resolution in the YIQ domain. Likewise, the third solution method suggested for the HSV color domain applies the super-resolution algorithm only to the value channel, since the value channel carries the brightness data of the image. The results are compared with the YIQ color-domain experiments. During the experiments, four different super-resolution algorithms are used: Direct Addition, MAP, POCS and IBP. Although these methods are widely used for the reconstruction of monochrome images, here they are used for resolution enhancement of color images, and their color super-resolution performance is tested.
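The "process only the luminance channel" strategy for YIQ (and, analogously, the value channel for HSV) can be sketched as follows. The actual SR step is abstracted behind an `upscale_y` callback, a stand-in for Direct Addition, MAP, POCS or IBP; the YIQ coefficients are the standard NTSC values, and a cheap nearest-neighbour 2x upsample is used for the chroma channels only to keep the example self-contained.

```python
import numpy as np

# standard NTSC RGB -> YIQ transform (Y = luminance, I/Q = chrominance)
RGB2YIQ = np.array([[0.299, 0.587, 0.114],
                    [0.596, -0.274, -0.322],
                    [0.211, -0.523, 0.312]])

def sr_luminance_only(rgb, upscale_y):
    # convert to YIQ, apply the (expensive) super-resolution step to the
    # Y channel only, upsample I/Q cheaply, and convert back to RGB
    yiq = rgb @ RGB2YIQ.T
    out = np.repeat(np.repeat(yiq, 2, axis=0), 2, axis=1)  # cheap 2x chroma
    out[..., 0] = upscale_y(yiq[..., 0])                   # SR on Y alone
    return out @ np.linalg.inv(RGB2YIQ).T
```

Running SR on one plane instead of three is what cuts the computational cost relative to the per-channel RGB approach.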
15

Smith, Cody S. "Compressive Point Cloud Super Resolution." DigitalCommons@USU, 2012. https://digitalcommons.usu.edu/etd/1392.

Full text
Abstract:
Automatic target recognition (ATR) is the ability of a computer to discriminate between different objects in a scene. ATR is often performed on point cloud data from a sensor known as a Ladar. Increasing the resolution of this point cloud, in order to get a clearer view of the object in a scene, would be of significant interest in an ATR application. A technique for increasing the resolution of a scene is known as super resolution. This technique traditionally requires many low-resolution images that can be combined together; in recent years, however, it has become possible to perform super resolution on a single image. This thesis sought to apply Gabor Wavelets and Compressive Sensing to single-image super resolution of digital images of natural scenes. The technique applied to images was then extended to allow the super resolution of a point cloud.
16

Hum, Herbert Hing-Jing. "The super-actor machine : a hybrid dataflow/von Neumann architecture." Thesis, McGill University, 1992. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=39346.

Full text
Abstract:
Emerging VLSI/ULSI technologies have created new opportunities in designing computer architectures capable of hiding the latencies and synchronization overheads associated with von Neumann-style multiprocessing. Pure Dataflow architectures have been suggested as solutions, but they do not adequately address the issues of local memory latencies and fine-grain synchronization costs. In this thesis, we propose a novel hybrid dataflow/von Neumann architecture, called the Super-Actor Machine, to address the problems facing von Neumann and pure dataflow machines. This architecture uses a novel high-speed memory organization known as a register-cache to tolerate local memory latencies and decrease local memory bandwidth requirements. The register-cache is unique in that it appears as a register file to the execution unit, while from the perspective of main memory, its contents are tagged as in conventional caches. Fine-grain synchronization costs are alleviated by the hybrid execution model and a loosely-coupled scheduling mechanism.
A major goal of this dissertation is to characterize the performance of the Super-Actor Machine and compare it with other architectures for a class of programs typical of scientific computations. The thesis includes a review of the precursor, the McGill Dataflow Architecture; a description of the Super-Actor Execution Model; a design for a Super-Actor Machine; a description of the register-cache mechanism; compilation techniques for the Super-Actor Machine; and results from a detailed simulator. Results show that the Super-Actor Machine can tolerate local memory latencies and fine-grain synchronization overheads (the execution unit can sustain 99% throughput) if a program has adequate exposed parallelism.
APA, Harvard, Vancouver, ISO, and other styles
17

Lindberg, Magnus. "An Imitation-Learning based Agentplaying Super Mario." Thesis, Blekinge Tekniska Högskola, Institutionen för kreativa teknologier, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-4529.

Full text
Abstract:
Context. Developing an Artificial Intelligence (AI) agent that can predict and act in all possible situations in the dynamic environments that modern video games often consist of is nearly impossible in advance and would cost a lot of money and time to create by hand. Creating a learning AI agent that could learn by itself by studying its environment with the help of Reinforcement Learning (RL) would simplify this task. Another often-required feature is AI agents with natural behavior, and one attempt to achieve this is to imitate a human by using Imitation Learning (IL). Objectives. The purpose of this investigation is to study whether it is possible to create a learning AI agent able to play and complete some levels of a platform game with the combination of the two learning techniques, RL and IL. Methods. To investigate the research question, an implementation is done that combines one RL technique and one IL technique. By letting a set of human players play the game, their behavior is saved and applied to the agents. RL is then used to train and tweak the agents' playing performance. A couple of experiments are executed to evaluate the differences between the trained agents and their respective human teachers. Results. The results of these experiments showed promising indications that the agents, during different phases of the experiments, behaved similarly to their human trainers. The agents also performed well when compared to other already existing ones. Conclusions. In conclusion, there are promising results for creating dynamic agents with natural behavior through the combination of RL and IL, and with additional adjustments they would perform even better as learning AIs with a more natural behavior.
APA, Harvard, Vancouver, ISO, and other styles
18

Monahan, Shean Patrick 1961. "A super computer discrete ordinates method without observable ray effects or numerical diffusion." Thesis, The University of Arizona, 1988. http://hdl.handle.net/10150/276911.

Full text
Abstract:
A new discrete ordinates method designed for use on modern, large memory, vector and/or parallel processing super computers has been developed. The method is similar to conventional SN techniques in that the medium is divided into spatial mesh cells and that discrete directions are used. However, in place of an approximate differencing scheme, a nearly exact matrix representation of the streaming operator is determined. Although extremely large, this matrix can be stored on today's computers for repeated use in the source iteration. Since the source iteration is cast in matrix form it benefits enormously from vector and/or parallel processing, if available. Several test results are presented demonstrating the reduction in numerical diffusion and elimination of ray effects.
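The matrix-form source iteration described in the abstract can be sketched as follows. This is a minimal toy illustration (a hypothetical 3-cell problem with made-up matrices, not the thesis's actual streaming-operator construction), in which each iteration applies scattering and then the stored transport matrix:

```python
import numpy as np

def source_iteration(T, S, q, tol=1e-10, max_iter=500):
    """Matrix-form source iteration: phi_{k+1} = T @ (S @ phi_k + q).

    T : precomputed streaming/transport matrix (dense here for clarity)
    S : scattering matrix
    q : external source
    Because each sweep is pure matrix arithmetic, it maps naturally
    onto vector and/or parallel hardware, as the abstract notes.
    """
    phi = np.zeros_like(q)
    for _ in range(max_iter):
        phi_new = T @ (S @ phi + q)
        if np.linalg.norm(phi_new - phi) < tol * max(np.linalg.norm(phi_new), 1.0):
            return phi_new
        phi = phi_new
    return phi

# Toy 3-cell problem with weak scattering (illustrative numbers only).
T = np.array([[0.9, 0.05, 0.0], [0.05, 0.9, 0.05], [0.0, 0.05, 0.9]])
S = 0.3 * np.eye(3)
q = np.ones(3)
phi = source_iteration(T, S, q)
```

The iteration converges whenever the spectral radius of `T @ S` is below one; the fixed point satisfies `phi = T @ (S @ phi + q)`.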
APA, Harvard, Vancouver, ISO, and other styles
19

Dahlem, Marcus. "Optical studies of super-collimation in photonic crystals." Thesis, Massachusetts Institute of Technology, 2005. http://hdl.handle.net/1721.1/34677.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2005.
Includes bibliographical references (p. [121]-125).
Recent developments in material science and engineering have made possible the fabrication of photonic crystals for optical wavelengths. These periodic structures of alternating high-to-low index of refraction materials allow the observation of peculiar effects, in particular, the propagation of optical beams without spatial spreading. This effect, called super-collimation (also known as self-collimation), allows diffraction-free propagation of micron-sized beams over centimeter-scale distances. This linear effect is a natural result of the unique dispersive properties of photonic crystals. In this thesis, these dispersive properties are studied in a two-dimensional photonic crystal slab. Both qualitative and quantitative descriptions are presented. The beam propagation method was used to simulate the evolution of a Gaussian beam inside such structures. The wavelength dependence of the super-collimation effect was studied, and it was observed that the optimum wavelength for this device was around 1500 nm. A precise contact-mode near-field optical microscopy technique was used to obtain high-resolution images of the beam profile at different positions along the photonic crystal, and showed that a 2 [micro]m beam width was conserved over 3 mm. In addition, high-resolution confocal measurements confirmed the size of the beam after 5 mm of propagation.
The figure of merit associated with the super-collimation effect is defined by the number of diffraction lengths over which the beam stays collimated. The diffraction length is the distance in which a beam will broaden to √2 of its initial width. Previous experimental studies showed figures of merit smaller than 6; the results of this experiment show figures of merit as high as 376, which correspond to more than 14200 lattice constants. Preliminary results were obtained with an 8 mm sample that could achieve a figure of merit of 601.
by Marcus Dahlem.
S.M.
APA, Harvard, Vancouver, ISO, and other styles
20

Russell, Bryan Christopher 1979. "Exploiting the sparse derivative prior for super-resolution." Thesis, Massachusetts Institute of Technology, 2003. http://hdl.handle.net/1721.1/87902.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2003.
Includes bibliographical references (p. 69-73).
by Bryan Christopher Russell.
S.M.
APA, Harvard, Vancouver, ISO, and other styles
21

Shih, Ta-Ming Ph D. Massachusetts Institute of Technology. "Super-collimation in a rod-based photonic crystal." Thesis, Massachusetts Institute of Technology, 2007. http://hdl.handle.net/1721.1/42061.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2007.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Includes bibliographical references (p. 77-79).
Super-collimation is the propagation of a light beam without spreading that occurs when the light beam is guided by the dispersion properties of a photonic crystal, rather than by defects in the photonic crystal. Super-collimation has many potential applications, the most straightforward of which is in the area of integrated optical circuits, where super-collimation can be utilized for optical routing and optical logic. Another interesting direction is the burgeoning field of optofluidics, in which integrated biological or chemical sensors can be based on super-collimating structures. The work presented in the thesis includes the design, fabrication, and characterization of a rod-based two-dimensional photonic crystal super-collimator. The dispersion contours for the photonic crystal are simulated as part of the design process. Two different fabrication process methods are developed and applied. The super-collimator is fabricated, and the fabrication methods are analyzed and compared. Characterization of the super-collimator has resulted in the first experimental observation of super-collimation in a two-dimensional photonic crystal of rods. The advantages of the rod-based device structure and potential applications of the super-collimator are discussed in closing.
by Ta-Ming Shih.
S.M.
APA, Harvard, Vancouver, ISO, and other styles
22

Åslund, Jacob, and Anton Dahlin. "Compression of Generative Networks for Single Image Super-Resolution." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-280329.

Full text
Abstract:
In this research project we have compressed the model size of a generative neural network trained to upscale low-resolution images. After first training a large network for this task, we used knowledge distillation to train smaller networks to approximate its output. The weights of the resulting networks were also converted from float32 to float16 to further reduce model size. We found that the size of the original network could be reduced to 50 percent without noticeable loss in performance, and to 15 percent of the original size with acceptable loss in performance.
In this project we have compressed the size of a generative neural network trained to upscale low-resolution images. After first training a large network for this purpose, we then used knowledge distillation to train smaller networks to approximate its output. The weights of the resulting networks were also converted from float32 to float16 to further reduce the size. We found that the size of the original network could be reduced to 50 percent without a noticeable difference in results, and to 15 percent of the original size with an acceptable difference in results.
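The two compression steps the abstract names (knowledge distillation, then a float32-to-float16 cast) can be sketched roughly as below. The linear "teacher" and "student" are stand-ins invented for illustration, not the thesis's actual networks:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical trained "teacher": a single linear layer standing in
# for the full upscaling generator (an assumption for illustration).
teacher_w = rng.standard_normal((16, 16)).astype(np.float32)

# "Student" trained by knowledge distillation: it learns to match the
# teacher's outputs rather than ground-truth labels.
student_w = (0.01 * rng.standard_normal((16, 16))).astype(np.float32)

x = rng.standard_normal((32, 16)).astype(np.float32)
initial_loss = float(np.mean((x @ student_w - x @ teacher_w) ** 2))

# Plain gradient descent on the L2 distillation loss.
for _ in range(200):
    err = x @ student_w - x @ teacher_w
    student_w -= 0.01 * (x.T @ err) / len(x)

final_loss = float(np.mean((x @ student_w - x @ teacher_w) ** 2))

# Second compression step: casting float32 -> float16 halves storage.
student_w16 = student_w.astype(np.float16)
size_ratio = student_w16.nbytes / student_w.nbytes  # 0.5
```

The float16 cast alone gives the exact factor-of-two size reduction regardless of architecture; the distillation step is where the 50/15 percent trade-offs reported above come from.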
APA, Harvard, Vancouver, ISO, and other styles
23

He, Qing Ph D. Massachusetts Institute of Technology. "A super-nyquist architecture for rateless underwater acoustic communication." Thesis, Massachusetts Institute of Technology, 2012. http://hdl.handle.net/1721.1/75455.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2012.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (p. 135-136).
Oceans cover about 70 percent of Earth's surface. Despite the abundant resources they contain, much of them remain unexplored. Underwater communication plays a key role in the area of deep ocean exploration. It is also essential in the field of the oil and fishing industry, as well as for military use. Although research on communicating wirelessly in the underwater environment began decades ago, it remains a challenging problem due to the oceanic medium, in which dynamic movements of water and rich scattering are commonplace. In this thesis, we develop an architecture for reliably communicating over the underwater acoustic channel. A notable feature of this architecture is its rateless property: the receiver simply collects pieces of transmission until successful decoding is possible. With this, we aim to achieve capacity-approaching communication under a variety of a priori unknown channel conditions. This is done by using a super-Nyquist (SNQ) transmission scheme. Several other important technologies are also part of the design, among them dithered repetition coding, adaptive decision feedback equalization (DFE), and multiple-input multiple-output (MIMO) communication. We present a complete block diagram for the transmitter and receiver architecture for the SNQ scheme. We prove the sufficiency of the architecture for optimality, and we show through analysis and simulation that as the SNQ signaling rate increases, the SNQ scheme is indeed capacity-achieving. At the end, the performance of the proposed SNQ scheme and its transceiver design are tested in physical experiments, whose results show that the SNQ scheme achieves a significant gain in reliable communication rate over conventional (non-SNQ) schemes.
by Qing He.
S.M.
APA, Harvard, Vancouver, ISO, and other styles
24

Arachchige, Somi Ruwan Budhagoda. "Face recognition in low resolution video sequences using super resolution /." Online version of thesis, 2008. http://hdl.handle.net/1850/7770.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Bégin, Isabelle. "Camera-independent learning and image quality assessment for super-resolution." Thesis, McGill University, 2007. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=102957.

Full text
Abstract:
An increasing number of applications require high-resolution images in situations where the access to the sensor and the knowledge of its specifications are limited. In this thesis, the problem of blind super-resolution is addressed, here defined as the estimation of a high-resolution image from one or more low-resolution inputs, under the condition that the degradation model parameters are unknown. The assessment of super-resolved results, using objective measures of image quality, is also addressed.
Learning-based methods have been successfully applied to the single frame super-resolution problem in the past. However, sensor characteristics such as the Point Spread Function (PSF) must often be known. In this thesis, a learning-based approach is adapted to work without the knowledge of the PSF thus making the framework camera-independent. However, the goal is not only to super-resolve an image under this limitation, but also to provide an estimation of the best PSF, consisting of a theoretical model with one unknown parameter.
In particular, two extensions of a method performing belief propagation on a Markov Random Field are presented. The first method finds the best PSF parameter by performing a search for the minimum mean distance between training examples and patches from the input image. In the second method, the best PSF parameter and the super-resolution result are found simultaneously by providing a range of possible PSF parameters from which the super-resolution algorithm will choose from. For both methods, a first estimate is obtained through blind deconvolution and an uncertainty is calculated in order to restrict the search.
Both camera-independent adaptations are compared and analyzed in various experiments, and a set of key parameters are varied to determine their effect on both the super-resolution and the PSF parameter recovery results. The use of quality measures is thus essential to quantify the improvements obtained from the algorithms. A set of measures is chosen that represents different aspects of image quality: the signal fidelity, the perceptual quality and the localization and scale of the edges.
Results indicate that both methods improve similarity to the ground truth and can in general refine the initial PSF parameter estimate towards the true value. Furthermore, the similarity measure results show that the chosen learning-based framework consistently improves a measure designed for perceptual quality.
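The first method's PSF-parameter search (minimizing the mean distance between input patches and training examples) might look roughly like the one-dimensional sketch below. The Gaussian-kernel PSF model, the signals, and the candidate grid are all illustrative assumptions, not the thesis's implementation:

```python
import numpy as np

def gaussian_kernel(sigma, radius=4):
    t = np.arange(-radius, radius + 1)
    k = np.exp(-0.5 * (t / sigma) ** 2)
    return k / k.sum()

def mean_nn_distance(patches, training_patches):
    # Mean distance from each input patch to its nearest training patch.
    d = np.linalg.norm(patches[:, None, :] - training_patches[None, :, :], axis=2)
    return d.min(axis=1).mean()

def estimate_psf_sigma(observed, training_signal, candidates, patch=5):
    """Pick the Gaussian-PSF width whose blur of the training data best
    matches patches of the observed low-resolution signal (1D sketch)."""
    obs_patches = np.lib.stride_tricks.sliding_window_view(observed, patch)
    best_sigma, best_score = None, np.inf
    for sigma in candidates:
        blurred = np.convolve(training_signal, gaussian_kernel(sigma), mode="same")
        train_patches = np.lib.stride_tricks.sliding_window_view(blurred, patch)
        score = mean_nn_distance(obs_patches, train_patches)
        if score < best_score:
            best_sigma, best_score = sigma, score
    return best_sigma

rng = np.random.default_rng(1)
sharp = rng.standard_normal(200)
observed = np.convolve(sharp, gaussian_kernel(1.5), mode="same")
sigma_hat = estimate_psf_sigma(observed, sharp, [0.5, 1.0, 1.5, 2.0, 2.5])
```

In the thesis the search is additionally restricted by an uncertainty estimate from blind deconvolution; that restriction would simply shrink the candidate list here.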
APA, Harvard, Vancouver, ISO, and other styles
26

Nordberg, Emma. "Föräldrars perspektiv av Super Mario och WoW." Thesis, University of Gävle, Faculty of Health and Occupational Studies, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:hig:diva-7077.

Full text
Abstract:

Everyday life places ever fewer demands on physical activity, and sedentary activities have become increasingly common. One of these activities is video and computer gaming, which today is believed to have more participants than football and ice hockey combined. The purpose of the study was therefore to examine the attitudes and values of parents whose children play video and computer games. The study was qualitative at a descriptive level. In total, six semi-structured interviews were conducted with nine participating parents, six women and three men. The main result of the study showed that parents' negative attitudes were primarily influenced by the violence that occurs in many games. The games could also have a negative effect in that the children became tired and moody if they played for too long. On the positive side, the children had fun when they played, they learned new things, and they could feel proud of what they had achieved in the games. Compared with other media, parents considered the health effects of video and computer games to be equivalent. The parents could see a difference between consoles, as the Nintendo Wii is the console that offers movement, in contrast to traditional consoles.

APA, Harvard, Vancouver, ISO, and other styles
27

Bersin, Eric (Eric A. ). "Super-resolution localization and readout of individual solid-state qubits." Thesis, Massachusetts Institute of Technology, 2018. http://hdl.handle.net/1721.1/115623.

Full text
Abstract:
Thesis: S.M., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2018.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 67-74).
A central goal in quantum information science is to establish entanglement across multiple quantum memories in a manner that allows individual control and readout of each constituent qubit. In the area of solid state quantum optics, a leading system is the negatively charged nitrogen vacancy center in diamond, which allows access to a spin center that can be entangled to multiple nuclear spins. Scaling these systems will require the entanglement of multiple NV centers, together with their nuclear spins, in a manner that allows for individual control and readout. Here we demonstrate a technique that allows us to prepare and measure individual centers within an ensemble, well below the diffraction limit. The technique relies on optical addressing of spin-dependent transitions, and makes use of the built-in inhomogeneous distribution of emitters resulting from strain splitting to measure individual spins in a manner that is non-destructive to the quantum state of other nearby centers. We demonstrate the ability to resolve individual NV centers with subnanometer spatial resolution. Furthermore, we demonstrate crosstalk-free individual readout of spin populations within a diffraction limited spot by performing resonant readout of one NV during a spectroscopic sequence of another. This method opens the door to multi-qubit coupled spin systems in solids, with individual spin manipulation and readout.
by Eric Bersin.
S.M.
APA, Harvard, Vancouver, ISO, and other styles
28

Brown, Jeffrey S. (Jeffrey Steven) 1977. "An empirical analysis of super resolution techniques for image restoration." Thesis, Massachusetts Institute of Technology, 2000. http://hdl.handle.net/1721.1/81525.

Full text
Abstract:
Thesis (S.B. and M.Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2000.
Includes bibliographical references (leaves 93-94).
by Jeffrey S. Brown.
S.B.and M.Eng.
APA, Harvard, Vancouver, ISO, and other styles
29

Roeder, James Roger. "Assessment of super-resolution for face recognition from very-low resolution images." To access this resource online via ProQuest Dissertations and Theses @ UTEP, 2009. http://0-proquest.umi.com.lib.utep.edu/login?COPT=REJTPTU0YmImSU5UPTAmVkVSPTI=&clientId=2515.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Toronto, Neil B. "Super-Resolution via Image Recapture and Bayesian Effect Modeling." BYU ScholarsArchive, 2009. https://scholarsarchive.byu.edu/etd/1839.

Full text
Abstract:
The goal of super-resolution is to increase not only the size of an image, but also its apparent resolution, making the result more plausible to human viewers. Many super-resolution methods do well at modest magnification factors, but even the best suffer from boundary and gradient artifacts at high magnification factors. This thesis presents Bayesian edge inference (BEI), a novel method grounded in Bayesian inference that does not suffer from these artifacts and remains competitive in published objective quality measures. BEI works by modeling the image capture process explicitly, including any downsampling, and modeling a fictional recapture process, which together allow principled control over blur. Scene modeling requires noncausal modeling within a causal framework, and an intuitive technique for that is given. Finally, BEI with trivial changes is shown to perform well on two tasks outside of its original domain—CCD demosaicing and inpainting—suggesting that the model generalizes well.
APA, Harvard, Vancouver, ISO, and other styles
31

Tillberg, Paul W. "Development of multiplexing strategies for electron and super-resolution optical microscopy." Thesis, Massachusetts Institute of Technology, 2013. http://hdl.handle.net/1721.1/79544.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2013.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 30-31).
The aim of this work is to increase the multiplexing capabilities of electron and super resolution optical microscopy. This will be done through the development of molecular-scale barcodes that can be resolved in one of the two high resolution imaging modes. In the optical domain, the number of colors available in stochastic optical reconstruction microscopy (STORM) will be increased by taking advantage of not only the spectral differences between STORM fluorophores but their kinetic properties as well. In the electron microscopy domain, the recently developed electron contrast-generating protein miniSOG will be concatenated to produce fully genetically encoded barcodes that can be resolved using standard transmission electron microscopy techniques. At the time of writing, the hardware for a STORM microscope has been assembled. Single molecule fluorescence blinking has been observed, though the imaging buffer still needs to be optimized for imaging. Concatamers of miniSOG have been generated and can be expressed in HEK cells and photo-oxidized.
by Paul W. Tillberg.
S.M.
APA, Harvard, Vancouver, ISO, and other styles
32

McFarland, Matthew Ogden. "Enhanced Cal Poly SuPER System Simulink Model." DigitalCommons@CalPoly, 2010. https://digitalcommons.calpoly.edu/theses/376.

Full text
Abstract:
The Cal Poly Sustainable Power for Electrical Resources (SuPER) project is a solar power DC distribution system designed to autonomously manage and supply the energy needs of a single-family off-the-grid home. The following thesis describes the improvement and re-design of a MATLAB Simulink model for the Cal Poly SuPER system. This model includes a photovoltaic (PV) array, a lead-acid gel battery with temperature effects, a wind turbine model, a re-designed DC-DC converter, a DC microgrid, and multiple loads. This thesis also includes several control algorithms, such as a temperature-controlled thermoelectric (T.E.) cooler, intelligent load switching, and an intelligent power source selector. Furthermore, a seven-day simulation and an evaluation of the results are presented. This simulation is an important tool for further system development, re-design, and long-term system performance prediction.
APA, Harvard, Vancouver, ISO, and other styles
33

Firoiu, Vlad. "Beating the world's best at Super Smash Bros. with deep reinforcement learning." Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/108984.

Full text
Abstract:
Thesis: S.M., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2017.
Cataloged from PDF version of thesis.
Includes bibliographical references (page 29).
There has been a recent explosion in the capabilities of game-playing artificial intelligence. Many classes of RL tasks, from Atari games to motor control to board games, are now solvable by fairly generic algorithms, based on deep learning, that learn to play from experience with often minimal knowledge of the specific domain of interest. In this work, we will investigate the performance of these methods on Super Smash Bros. Melee (SSBM), a popular multiplayer fighting game. The SSBM environment has complex dynamics and partial observability, making it challenging for man and machine alike. The multiplayer aspect poses an additional challenge, as the vast majority of recent advances in RL have focused on single-agent environments. Nonetheless, we will show that it is possible to train agents that are competitive against and even surpass human professionals, a new result for the video game setting.
by Vlad Firoiu.
S.M.
APA, Harvard, Vancouver, ISO, and other styles
34

Castillo, Araújo Victor. "Ensembles of Single Image Super-Resolution Generative Adversarial Networks." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-290945.

Full text
Abstract:
Generative Adversarial Networks have been used to obtain state-of-the-art results for low-level computer vision tasks like single image super-resolution; however, they are notoriously difficult to train due to the instability of the competing minimax framework. Additionally, traditional ensembling mechanisms cannot be applied effectively to these types of networks due to the resources they require at inference time and the complexity of their architectures. In this thesis, an alternative method of creating ensembles of individual models (which are more stable and easier to train) by interpolating in the models' parameter space is found to produce better results than the initial individual models when evaluated using perceptual metrics as a proxy for human judges. This method can be used as a framework to train GANs with perceptual results competitive with state-of-the-art alternatives.
Generative Adversarial Networks (GANs) have been used to achieve state-of-the-art results for fundamental image-analysis tasks, such as generating high-resolution images from low-resolution ones, but they are notoriously difficult to train because of the instability related to the competing minimax framework. Moreover, traditional mechanisms for generating ensembles cannot be applied effectively with these types of networks because of the resources they require at inference time and the complexity of their architecture. In this project, an alternative method of combining individual, more stable, and easier-to-train models through interpolation in the parameter space has been shown to give better perceptual results than the original individual models, and this method can be used as a framework for training GANs with competitive perceptual performance compared with the state of the art.
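The parameter-space interpolation the abstract describes can be sketched as below. Note that this is only meaningful for models with identical, aligned architectures (and, in practice, related training histories); the parameter dictionaries here are invented for illustration:

```python
import numpy as np

def interpolate_params(models, weights):
    """Ensemble in parameter space: a single merged model whose parameters
    are a convex combination of the individual models' parameters.
    Unlike output averaging, inference then costs the same as one model."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    merged = {}
    for name in models[0]:
        merged[name] = sum(w * m[name] for w, m in zip(weights, models))
    return merged

# Two hypothetical generators' parameter dicts (same architecture).
rng = np.random.default_rng(0)
m1 = {"conv1": rng.standard_normal((3, 3)), "bias": np.zeros(3)}
m2 = {"conv1": rng.standard_normal((3, 3)), "bias": np.ones(3)}
ens = interpolate_params([m1, m2], [0.5, 0.5])
```

The design choice the abstract highlights is visible here: the "ensemble" is one set of weights, so it avoids the inference-time cost that rules out traditional output-averaging ensembles for large GANs.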
APA, Harvard, Vancouver, ISO, and other styles
35

Fowler, Matthew J. "Acquisition strategies for aging aircraft: modernizing the Marine Corps' CH-53E Super Stallion Helicopter." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2001. http://handle.dtic.mil/100.2/ADA401085.

Full text
Abstract:
Thesis (M.S. in Management)--Naval Postgraduate School, Dec. 2001.
Thesis advisors, David F. Matthews, Donald R. Eaton, William Gates. "December 2001." Includes bibliographical references (p. 135-138). Also available in print.
APA, Harvard, Vancouver, ISO, and other styles
36

Linares, Oscar Alonso Cuadros. "Mandible and Skull Segmentation in Cone Beam Computed Tomography Data." Universidade de São Paulo, 2017. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-24072018-165943/.

Full text
Abstract:
Cone Beam Computed Tomography (CBCT) is a medical imaging technique routinely employed for diagnosis and treatment of patients with cranio-maxillo-facial defects. CBCT 3D reconstruction and segmentation of bones such as mandible or maxilla are essential procedures in orthodontic treatments. However, CBCT images present characteristics that are not desirable for processing, including low contrast, inhomogeneity, noise, and artifacts. Besides, values assigned to voxels are relative Hounsfield Units (HU), unlike traditional Computed Tomography (CT). Such drawbacks render CBCT segmentation a difficult and time-consuming task, usually performed manually with tools designed for medical image processing. We introduce two interactive two-stage methods for 3D segmentation of CBCT data: i) we first reduce the CBCT image resolution by grouping similar voxels into super-voxels defining a graph representation; ii) next, seeds placed by users guide graph clustering algorithms, splitting the bones into mandible and skull. We have evaluated our segmentation methods intensively by comparing the results against ground truth data of the mandible and the skull, in various scenarios. Results show that our methods produce accurate segmentation and are robust to changes in parameter settings. We also compared our approach with a similar segmentation strategy and we showed that it produces more accurate segmentation of the mandible and skull. In addition, we have evaluated our proposal with CT data of patients with deformed or missing bones. We obtained more accurate segmentation in all cases. As for the efficiency of our implementation, a segmentation of a typical CBCT image of the human head takes about five minutes. Finally, we carried out a usability test with orthodontists. Results have shown that our proposal not only produces accurate segmentation, as it also delivers an effortless and intuitive user interaction.
Cone Beam Computed Tomography (CBCT) is a modality for obtaining 3D medical images of the skull, used for the diagnosis and treatment of patients with cranio-maxillo-facial defects. Three-dimensional segmentation of bones such as the mandible and the maxilla is an essential procedure in orthodontic treatment. However, CBCT has characteristics that are undesirable for digital processing, such as low contrast, inhomogeneity, noise, and artifacts. Furthermore, the values assigned to voxels are relative Hounsfield Units (HU), unlike traditional Computed Tomography (CT). These drawbacks make CBCT segmentation a difficult and time-consuming task, normally performed with tools developed for digital processing of medical images. This thesis introduces two interactive methods for 3D segmentation of CBCT data, each divided into two stages: i) reduction of the CBCT resolution by grouping voxels into super-voxels, followed by the creation of a graph in which the vertices are super-voxels; ii) placement of seeds by the user and segmentation by graph-clustering algorithms, which allows the labeled bones to be separated. The methods were evaluated intensively by comparing the results with ground truth of the mandible and the skull in several scenarios. The results showed that the methods not only produce accurate segmentations but are also robust to parameter changes. A comparison with related work was also carried out, yielding better results for both the mandible and the skull segmentation. In addition, CTs of patients with missing or broken bones were evaluated, with more accurate segmentation in all cases. Segmentation of a CBCT is performed in about 5 minutes. Finally, tests with orthodontist users were carried out. The results showed that our proposal not only produces accurate segmentations but also offers easy interaction.
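The seeded graph-clustering stage can be sketched with a toy graph. The multi-source BFS below (nearest-seed labeling) is a simplified stand-in for the thesis's actual clustering algorithms, and the graph and seed labels are invented for illustration:

```python
from collections import deque

def seeded_graph_segmentation(adjacency, seeds):
    """Every node takes the label of its nearest seed (multi-source BFS).
    In the thesis, nodes would be super-voxels of the CBCT volume and
    seeds would be user-placed marks on mandible and skull."""
    labels = dict(seeds)
    queue = deque(seeds)
    while queue:
        node, label = queue.popleft()
        for neighbor in adjacency[node]:
            if neighbor not in labels:
                labels[neighbor] = label
                queue.append((neighbor, label))
    return labels

# Toy "super-voxel" graph: nodes 0-2 form one region, 3-5 another,
# joined by a single link between 2 and 3.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4]}
labels = seeded_graph_segmentation(adj, [(0, "mandible"), (5, "skull")])
```

Working on super-voxels rather than raw voxels is what keeps this step interactive: the graph has orders of magnitude fewer nodes than the volume has voxels.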
APA, Harvard, Vancouver, ISO, and other styles
37

Avello, Miriam Y. "Fabrication of a two-terminal super conducting device with a poled ferroelectric control layer." FIU Digital Commons, 1996. http://digitalcommons.fiu.edu/etd/1340.

Full text
Abstract:
The discovery of High-Temperature Superconductors (HTSCs) has spurred the need for the fabrication of superconducting electronic devices able to match the performance of today's semiconductor devices. While there are several HTSCs in use today, YBa2Cu3O7-x (YBCO) is the better characterized and more widely used material for small electronic applications. This thesis explores the fabrication of a two-terminal device with a superconductor and a painted-on electrode as the terminals and a ferroelectric, BaTiO3 (BTO), in between. The methods used to construct such a device and the challenges faced in the fabrication of a viable device will be examined. The ferroelectric layer of the devices that proved adequate for use was poled by the application of an electric field. Temperature Bias Poling used an applied field of 10^5 V/cm at a temperature of approximately 135 °C. High Potential Poling used an applied field of 10^6 V/cm at room temperature (20 °C). The devices were then tested for a change in their superconducting critical temperature, Tc. A shift of 1-2 K in the Tc(onset) of YBCO was observed for Temperature Bias Poling and a shift of 2-6 K for High Potential Poling. These are the first reported results of the field effect using BTO on YBCO. The mechanism involved in the shifting of Tc will be discussed along with possible applications.
APA, Harvard, Vancouver, ISO, and other styles
38

Mueller, Kyle Thomas. "Super-adiabatic combustion in porous media with catalytic enhancement for thermoelectric power conversion." Master's thesis, University of Central Florida, 2011. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/4809.

Full text
Abstract:
The combustion of ultra-lean fuel-to-air mixtures provides an efficient way to convert the chemical energy of hydrocarbons into useful power. Conventional burning techniques have defined flammability limits beyond which a flame cannot self-propagate due to heat losses. Matrix-stabilized porous medium combustion is an advanced technique in which a solid porous matrix within the combustion chamber accumulates heat from the hot gaseous products and preheats incoming reactants. This heat recirculation extends the standard flammability limits and allows the burning of ultra-lean fuel mixtures, conserving energy resources, or the burning of gases of low calorific value, utilizing otherwise wasted resources. The heat generated by the porous burner can be harvested with thermoelectric devices for a reliable method of generating electricity for portable electronic devices by the burning of otherwise noncombustible mixtures. The design of the porous media burner, its assembly and testing are presented. Highly porous (~80% porosity) alumina foam was used as the central media and alumina honeycomb structure was used as an inlet for fuel and an outlet for products of the methane-air combustion. The upstream and downstream honeycomb structures were designed with pore sizes smaller than the flame quenching distance, preventing the flame from propagating outside of the central section. Experimental results include measurements from thermocouples distributed throughout the burner and on each side of the thermoelectric module along with associated current, voltage and power outputs. Measurements of the burner with catalytic coating were obtained for stoichiometric and lean mixtures and compared to the results obtained from the catalytically inert matrix, showing the effect on overall efficiency for the combustion of fuel-lean mixtures.
ID: 030646196; System requirements: World Wide Web browser and PDF reader.; Mode of access: World Wide Web.; Thesis (M.S.A.E.)--University of Central Florida, 2011.; Includes bibliographical references (p. 105-119).
M.S.A.E.
Masters
Mechanical and Aerospace Engineering
Engineering and Computer Science
Aerospace Engineering; Thermofluid Aerodynamic Systems Track
APA, Harvard, Vancouver, ISO, and other styles
39

Pethe, Akshay. "SUPER RESOLUTION 3D SCANNING USING SPATIAL LIGHT MODULATOR AND BAND CORRECTION." UKnowledge, 2008. http://uknowledge.uky.edu/gradschool_theses/550.

Full text
Abstract:
Multi-Frequency Phase Measuring Profilometry (PMP) is one of the most popular non-contact 3-D scanning techniques. PMP is limited in resolution by the projector and cameras used. Conventional projectors have a maximum of 2000 to 4000 scan lines, limiting the projector resolution. To obtain greater detail at higher resolution, the PMP technique is applied to a Spatial Light Modulator (SLM) having 12000 lines, far more than conventional projectors. This technology can achieve super-resolution scans with varied applications. Scans obtained from PMP suffer from a type of artifact called "banding": periodic bands across the captured target that lead to incorrect measurement of surfaces. Banding is the most limiting noise source in PMP because it increases with lower pattern frequency and with fewer patterns. The requirement for a larger number of patterns increases the possibility of motion banding, while the requirement for higher frequency leads to the necessity of multi-frequency PMP, which again leads to more patterns and longer scan times. We aim to reduce the banding by correcting the phase of the captured data.
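The phase recovery at the heart of PMP can be sketched with the standard N-step phase-shifting relation (a generic illustration, not this thesis's code; the function name and array layout are my own assumptions):

```python
import numpy as np

def pmp_phase(images):
    """Recover the wrapped phase map from N phase-shifted PMP images.

    images: array of shape (N, H, W), where frame n was captured under a
    sinusoidal pattern shifted by 2*pi*n/N, i.e.
    I_n = A + B*cos(phi + 2*pi*n/N).
    Returns the wrapped phase phi in (-pi, pi] per pixel.
    """
    images = np.asarray(images, dtype=float)
    n = np.arange(len(images))
    shifts = 2 * np.pi * n / len(images)
    # Contract the frame axis against sin/cos of the shifts.
    num = np.tensordot(np.sin(shifts), images, axes=1)
    den = np.tensordot(np.cos(shifts), images, axes=1)
    # Sum I_n*sin(shift_n) = -(N/2)*B*sin(phi), so negate the numerator.
    return np.arctan2(-num, den)
```

The ambient term A cancels in both sums, which is why banding in PMP is tied to errors in the captured intensities rather than to the background illumination itself.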
APA, Harvard, Vancouver, ISO, and other styles
40

Satish, Likith Poovanna Kelapanda, and Vinay Sudha Ethiraj. "Human-like Super Mario Play using Artificial Potential Fields." Thesis, Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-3146.

Full text
Abstract:
Artificial potential fields is a technique that uses attractive and repelling forces to control e.g. robots, or non-player characters in games. We show how this technique may be used in a controller for Super Mario in a way that creates a human-like playing style. By combining fields of progression, opponent avoidance and rewards, we get a controller that tries to collect the rewards and avoid the opponents while progressing towards the goal of the level. We use human test persons to improve the controller further by letting them make pair-wise comparisons with human play recordings, and use the feedback to calibrate the bot for human-like play.
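The field combination described above can be sketched as a vector sum of attractive and repulsive forces (a minimal sketch under my own assumptions; the names, gains and fall-off law are invented for illustration, not taken from the thesis):

```python
def apf_force(agent, attractors, repellers, k_att=1.0, k_rep=50.0):
    """Sum attractive and repulsive potential-field forces at `agent`.

    agent: (x, y) position of the controlled character.
    attractors: positions that pull (level goal, coins/rewards).
    repellers: positions that push away (enemies, hazards); their
    influence falls off with squared distance so they only dominate
    up close. Returns the resulting 2D force vector.
    """
    fx = fy = 0.0
    ax, ay = agent
    for (x, y) in attractors:
        fx += k_att * (x - ax)
        fy += k_att * (y - ay)
    for (x, y) in repellers:
        dx, dy = ax - x, ay - y
        d2 = dx * dx + dy * dy + 1e-9  # avoid division by zero
        fx += k_rep * dx / d2
        fy += k_rep * dy / d2
    return fx, fy
```

Steering along the summed force each frame lets progression pull the character toward the goal while nearby enemies locally bend the path, which is what produces the avoid-and-collect behavior the abstract describes.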
APA, Harvard, Vancouver, ISO, and other styles
41

Lin, Frank Chi-Hao. "Super-resolution image processing with application to face recognition." Queensland University of Technology, 2008. http://eprints.qut.edu.au/16703/.

Full text
Abstract:
Subject identification from surveillance imagery has become an important task for forensic investigation. Good quality images of the subjects are essential for the surveillance footage to be useful. However, surveillance videos are of low resolution due to data storage requirements. In addition, subjects typically occupy a small portion of a camera's field of view. Faces, which are of primary interest, occupy an even smaller array of pixels. For reliable face recognition from surveillance video, there is a need to generate higher resolution images of the subject's face from low-resolution video. Super-resolution image reconstruction is a signal processing based approach that aims to reconstruct a high-resolution image by combining a number of low-resolution images. The low-resolution images that differ by a sub-pixel shift contain complementary information as they are different "snapshots" of the same scene. Once geometrically registered onto a common high-resolution grid, they can be merged into a single image with higher resolution. As super-resolution is a computationally intensive process, traditional reconstruction-based super-resolution methods simplify the problem by restricting the correspondence between low-resolution frames to global motion such as translational and affine transformation. Surveillance footage, however, consists of independently moving non-rigid objects such as faces. Applying global registration methods results in registration errors that lead to artefacts that adversely affect recognition. The human face also presents additional problems such as self-occlusion and reflectance variation that even local registration methods find difficult to model. In this dissertation, a robust optical flow-based super-resolution technique was proposed to overcome these difficulties.
Real surveillance footage and the Terrascope database were used to compare the reconstruction quality of the proposed method against interpolation and existing super-resolution algorithms. Results show that the proposed robust optical flow-based method consistently produced more accurate reconstructions. This dissertation also outlines a systematic investigation of how super-resolution affects automatic face recognition algorithms with an emphasis on comparing reconstruction- and learning-based super-resolution approaches. While reconstruction-based super-resolution approaches like the proposed method attempt to recover the aliased high frequency information, learning-based methods synthesise them instead. Learning-based methods are able to synthesise plausible high frequency detail at high magnification ratios but the appearance of the face may change to the extent that the person no longer looks like him/herself. Although super-resolution has been applied to facial imagery, very little has been reported elsewhere on measuring the performance changes from super-resolved images. Intuitively, super-resolution improves image fidelity, and hence should improve the ability to distinguish between faces and consequently automatic face recognition accuracy. This is the first study to comprehensively investigate the effect of super-resolution on face recognition. Since super-resolution is a computationally intensive process it is important to understand the benefits in relation to the trade-off in computations. A framework for testing face recognition algorithms with multi-resolution images was proposed, using the XM2VTS database as a sample implementation. Results show that super-resolution offers a small improvement over bilinear interpolation in recognition performance in the absence of noise and that super-resolution is more beneficial when the input images are noisy since noise is attenuated during the frame fusion process.
APA, Harvard, Vancouver, ISO, and other styles
42

Alanazi, Mohammad N. "Consistency checking in multiple UML state diagrams using super state analysis." Diss., Manhattan, Kan. : Kansas State University, 2008. http://hdl.handle.net/2097/995.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Han, Shuang. "The Real-Time Multitask Threading Control." Thesis, Linköping University, Department of Electrical Engineering, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-10431.

Full text
Abstract:

In this master thesis, we designed and implemented a super mode for multiple streaming signal processing applications, and obtained the timing budget on the Senior DSP processor. This work presented a great opportunity to study real-time systems and firmware design for embedded systems.

APA, Harvard, Vancouver, ISO, and other styles
44

Bako, Matúš. "Rekonstrukce nekvalitních snímků obličejů." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2020. http://www.nusl.cz/ntk/nusl-417271.

Full text
Abstract:
In this thesis, I tackle the problem of facial image super-resolution using convolutional neural networks with a focus on preserving identity. I propose a method consisting of the DPNet architecture and a training algorithm based on state-of-the-art super-resolution solutions. The DPNet model is trained on the Flickr-Faces-HQ dataset, where I achieve an SSIM value of 0.856 while expanding the image to four times the size. The residual channel attention network, one of the best and latest architectures, achieves an SSIM value of 0.858. While training models using adversarial loss, I encountered problems with artifacts and experimented with various methods to remove them, so far without success. To compare quality assessment with human perception, I acquired image sequences sorted by perceived quality. Results show that the quality of the proposed neural network trained using absolute loss approaches state-of-the-art methods.
APA, Harvard, Vancouver, ISO, and other styles
45

Wang, Bei 1974. "Design and implementation of the Hitachi SuperH Processor Core." Thesis, Massachusetts Institute of Technology, 1998. http://hdl.handle.net/1721.1/47511.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Nilsson, Erik. "Super-Resolution for Fast Multi-Contrast Magnetic Resonance Imaging." Thesis, Umeå universitet, Institutionen för fysik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-160808.

Full text
Abstract:
There are many clinical situations where magnetic resonance imaging (MRI) is preferable over other imaging modalities, while the major disadvantage is the relatively long scan time. Due to limited resources, this means that not all patients can be offered an MRI scan, even though it could provide crucial information. It can even be deemed unsafe for a critically ill patient to undergo the examination. In MRI, there is a trade-off between resolution, signal-to-noise ratio (SNR) and the time spent gathering data. When time is of utmost importance, we seek other methods to increase the resolution while preserving SNR and imaging time. In this work, I have studied one of the most promising methods for this task. Namely, constructing super-resolution algorithms to learn the mapping from a low resolution image to a high resolution image using convolutional neural networks. More specifically, I constructed networks capable of transferring high frequency (HF) content, responsible for details in an image, from one kind of image to another. In this context, contrast or weight is used to describe what kind of image we look at. This work only explores the possibility of transferring HF content from T1-weighted images, which can be obtained quite quickly, to T2-weighted images, which would take much longer for similar quality. By doing so, the hope is to contribute to increased efficacy of MRI, and reduce the problems associated with the long scan times. At first, a relatively simple network was implemented to show that transferring HF content between contrasts is possible, as a proof of concept. Next, a much more complex network was proposed, to successfully increase the resolution of MR images better than the commonly used bicubic interpolation method. 
This is a conclusion drawn from a test where 12 participants were asked to rate the two methods (p = 0.0016). Both visual comparisons and quality measures, such as PSNR and SSIM, indicate that the proposed network outperforms a similar network that only utilizes images of one contrast. This suggests that HF content was successfully transferred between images of different contrasts, which improves the reconstruction process. Thus, it could be argued that the proposed multi-contrast model could decrease scan time even further than what its single-contrast counterpart would. Hence, this way of performing multi-contrast super-resolution has the potential to increase the efficacy of MRI.
APA, Harvard, Vancouver, ISO, and other styles
47

Mattsson, Filip. "Evolving Mario levels for dimensions of quality : An evaluation of metrics for the subjective quality of Super Mario levels." Thesis, Jönköping University, JTH, Avdelningen för datateknik och informatik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-53235.

Full text
Abstract:
Procedural level generation has long since been prevalent in video games. A desire to make the methods used more generalised has recently sparked an interest in adapting machine learning for this purpose. However, this field is still relatively nascent and has several open questions. As such, this study investigated several metrics for the evaluation of machine learning assisted level generators for Super Mario Bros. This was done by using a generative adversarial network (GAN) together with evolutionary programming to generate levels that maximize the aforementioned metrics individually. Then, in order to establish correlative relationships, a user study was conducted. In this user study, participants were asked to play through the generated levels and rate them according to enjoyment, aesthetics, and difficulty. We show significant correlations between several metrics and the three dimensions of quality; some such correlations are also, seemingly, independent of prior gaming experience. We contribute to the field of machine learning assisted level generation by 1) reinforcing certain metrics’ validity for use in the evaluation of level generators, and 2) by demonstrating that this evolutionary approach can be used to control difficulty effectively.
APA, Harvard, Vancouver, ISO, and other styles
48

Zins, Matthieu. "Color Fusion and Super-Resolution for Time-of-Flight Cameras." Thesis, Linköpings universitet, Datorseende, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-141956.

Full text
Abstract:
The recent emergence of time-of-flight cameras has opened up new possibilities in the world of computer vision. These compact sensors, capable of recording the depth of a scene in real-time, are very advantageous in many applications, such as scene or object reconstruction. This thesis first addresses the problem of fusing depth data with color images. A complete process to combine a time-of-flight camera with a color camera is described and its accuracy is evaluated. The results show that a satisfying precision is reached and that the calibration step is very important. The second part of the work consists of applying super-resolution techniques to the time-of-flight camera in order to improve its low resolution. Different types of super-resolution algorithms exist, but this thesis focuses on the combination of multiple shifted depth maps. The proposed framework is made of two steps: registration and reconstruction. Different methods for each step are tested and compared according to the improvements reached in terms of level of detail, sharpness and noise reduction. The results obtained show that Lucas-Kanade performs the best for the registration and that a non-uniform interpolation gives the best results in terms of reconstruction. Finally, a few suggestions are made about future work and extensions for our solutions.
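The registration-then-reconstruction pipeline for shifted depth maps can be sketched with a simple shift-and-add fusion onto a high-resolution grid (a minimal sketch under my own assumptions; the thesis's non-uniform interpolation is more sophisticated than the nearest-cell averaging shown here):

```python
import numpy as np

def shift_and_add(frames, shifts, scale):
    """Fuse low-res frames with known sub-pixel shifts onto a high-res grid.

    frames: list of (h, w) depth maps; shifts: per-frame (dy, dx) offsets
    in low-res pixels (e.g. from Lucas-Kanade registration); scale:
    integer upsampling factor. Each low-res sample is placed at its
    nearest high-res cell and overlapping samples are averaged.
    """
    h, w = frames[0].shape
    acc = np.zeros((h * scale, w * scale))
    cnt = np.zeros_like(acc)
    for f, (dy, dx) in zip(frames, shifts):
        ys, xs = np.mgrid[0:h, 0:w]
        hy = np.clip(np.round((ys + dy) * scale).astype(int), 0, h * scale - 1)
        hx = np.clip(np.round((xs + dx) * scale).astype(int), 0, w * scale - 1)
        np.add.at(acc, (hy, hx), f)   # accumulate depth samples
        np.add.at(cnt, (hy, hx), 1)   # count contributions per cell
    filled = cnt > 0
    acc[filled] /= cnt[filled]
    return acc
```

Averaging the overlapping samples is what delivers the noise reduction the abstract reports, while the sub-pixel shifts are what populate high-resolution cells that no single frame could fill.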
APA, Harvard, Vancouver, ISO, and other styles
49

Forsell, Sophie. "Game Development from Nintendo 8-bit to Wii." Thesis, Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-5524.

Full text
Abstract:
“The game begins the moment a person touches a console -- everything builds from that.” (Quote by Shigeru Miyamoto, creator of Super Mario) This report contains well-structured analyses of the four main Super Mario games that clearly show differences in story, hardware, software development and design. The report is structured in sections for each game to better convey the concept of the Super Mario games. The report ends with comparisons of the games for a better view of the paradigm shifts between them. The pictures and quotations in this report are referenced to the copyright-holding company and to Shigeru Miyamoto, the creator of the character Super Mario.
The report contains well-structured analyses of four Super Mario games that clearly show a distinction in story, hardware, software and design. The report is structured in sections for each game for a better understanding of the Super Mario games. The report is summarized in comparisons between the games for a better overview of the paradigm shifts between them.
APA, Harvard, Vancouver, ISO, and other styles
50

Vassilo, Kyle. "Single Image Super Resolution with Infrared Imagery and Multi-Step Reinforcement Learning." University of Dayton / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1606146042238906.

Full text
APA, Harvard, Vancouver, ISO, and other styles
