
Dissertations / Theses on the topic 'Computer applications'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Computer applications.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Tarnoff, David. "Episode 2.10 – Gray Code Conversion and Applications." Digital Commons @ East Tennessee State University, 2020. https://dc.etsu.edu/computer-organization-design-oer/16.

Full text
Abstract:
We continue our discussion of Gray code by presenting algorithms used to convert between the weighted numeral system of unsigned binary and the Gray code ordered sequence. We also show how to implement these algorithms in our code.
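For reference, the two conversions the episode describes correspond to standard bit-level algorithms. The sketch below is a minimal Python illustration (not the episode's own code): binary to Gray is an XOR of the value with itself shifted right by one bit, and Gray to binary cascades XORs of successively shifted values.

```python
def binary_to_gray(n: int) -> int:
    """Convert an unsigned binary integer to its Gray code equivalent."""
    return n ^ (n >> 1)

def gray_to_binary(g: int) -> int:
    """Convert a Gray code value back to unsigned binary by cascading XORs."""
    b = 0
    while g:
        b ^= g
        g >>= 1
    return b

# Quick check over a small range: the round trip must be the identity,
# and consecutive Gray codes must differ in exactly one bit.
for i in range(16):
    gray = binary_to_gray(i)
    assert gray_to_binary(gray) == i
    if i:
        assert bin(gray ^ binary_to_gray(i - 1)).count("1") == 1
```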
APA, Harvard, Vancouver, ISO, and other styles
2

Paisley, Jonathan. "Application and network traffic correlation of grid applications." Thesis, Connect to e-thesis, 2006. http://theses.gla.ac.uk/535/.

Full text
Abstract:
Thesis (Ph.D.), submitted to the Department of Computing Science, University of Glasgow, 2006. Includes bibliographical references. Print version also available.
APA, Harvard, Vancouver, ISO, and other styles
3

Collins, Rob. "Computer applications to special education." Thesis, Keele University, 1989. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.238175.

Full text
Abstract:
This thesis investigates the way in which software for adults with severe learning difficulties should be designed. Literature from educational technology, the psychology of mental handicap and computer science is reviewed from the author's software engineering viewpoint. The literature review points to a need for the design of systems in this area to be a multidisciplinary activity. Four case studies in software development for adults with severe learning difficulties are described. These track the development of software systems from conception, through design and development to evaluation. The thesis then proceeds to show that technically adequate software is in itself not enough and that there is a need for staff support and staff development. Systems to implement these for staff working with adults who have severe learning difficulties are proposed and evaluated. The thesis concludes with specific design criteria and argues for a more holistic view of design within software development for social settings.
APA, Harvard, Vancouver, ISO, and other styles
4

Christie, Gordon A. "Computer Vision for Quarry Applications." Thesis, Virginia Tech, 2013. http://hdl.handle.net/10919/42762.

Full text
Abstract:
This thesis explores the use of computer vision to facilitate three different processes of a quarry's operation. The first is the blasting process. This is where operators determine where to drill in order to execute an efficient and safe blast. Having an operator manually determine the drilling angles and positions can lead to inefficient and dangerous blasts. By using two cameras, oriented vertically, and separated by a fixed baseline, Structure from Motion techniques can be used to create a scaled 3D model of a bench. This can then be analyzed to provide operators with borehole locations and drilling angles in relation to fixed reference targets. The second process explored is the crushing process, where the rocks pass through different crushers that reduce the rocks into smaller sizes. The crushed rocks are then dropped onto a moving conveyor belt. The maximum dimension of the rocks exiting the crushers should not exceed size thresholds that are specific to each crusher. This thesis presents a 2D vision system capable of estimating the size distribution of the rocks by attempting to segment the rocks in each image. The size distribution, based on the maximum dimension of each rock, is estimated by finding the maximum dimension in the image in pixels and converting that to inches. The third process of the quarry operations explored is where the final product is piled up to form stockpiles. For inventory purposes, operators often carry out a manual estimation of the size of the stockpile. This thesis presents a vision system capable of providing a more accurate estimate for the size of the stockpile by using Structure from Motion techniques to create a 3D reconstruction. User interaction helps to find the points that are relevant to the stockpile in the resulting point cloud, which are then used to estimate the volume.
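As an illustration of the pixel-to-inch step mentioned in the abstract, the following minimal sketch (hypothetical helper, not the thesis's actual system) estimates a rock's maximum dimension from a segmented boundary, assuming a known calibration factor relating image pixels to inches:

```python
import numpy as np

def max_dimension_inches(contour_px: np.ndarray, inches_per_pixel: float) -> float:
    """Estimate a rock's maximum dimension from its segmented contour.

    contour_px: (N, 2) array of pixel coordinates on the rock's boundary.
    inches_per_pixel: calibration factor mapping image pixels to inches.
    """
    # Maximum pairwise distance between boundary points, in pixels.
    diffs = contour_px[:, None, :] - contour_px[None, :, :]
    max_px = np.sqrt((diffs ** 2).sum(axis=-1)).max()
    return max_px * inches_per_pixel

# Example: a roughly 120-pixel-long rock at an assumed scale of 0.05 in/pixel.
contour = np.array([[10, 20], [40, 95], [115, 60], [70, 15]], dtype=float)
print(max_dimension_inches(contour, inches_per_pixel=0.05))
```

In practice the scale factor would come from the camera geometry and the known distance to the conveyor belt, and the segmentation itself is the harder part of the problem.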
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
5

Chi, Wen-Hsiang. "Computer applications in counselor education /." The Ohio State University, 1985. http://rave.ohiolink.edu/etdc/view?acc_num=osu1487259125219338.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Ohmer, Julius Fabian. "Computer vision applications on graphics processing units." Thesis, Queensland University of Technology, 2007. https://eprints.qut.edu.au/16463/1/Julius_Ohmer_Thesis.pdf.

Full text
Abstract:
Over the last few years, commodity Graphics Processing Units (GPUs) have evolved from fixed graphics pipeline processors into more flexible and powerful data-parallel processors. These stream processors are capable of sustaining computation rates of greater than ten times that of a single-core CPU. GPUs are inexpensive and are becoming ubiquitous in a wide variety of computer architectures including desktop and laptop computers, PDAs and cell phones. This research work investigates possible ways to use modern GPUs for real-time computer vision and pattern classification tasks. Special attention is paid to algorithms where the power of the CPU is a limiting factor. This is in particular the case for real-time tracking algorithms on video streams, where many candidate regions must be evaluated at once to allow stable tracking of features; they impose a high computational burden on sequential processing units such as the CPU. The implementation proposed in this thesis targets standard PC platforms rather than expensive dedicated hardware, to allow a broad variety of users to benefit from powerful computer vision applications. In particular, this thesis includes the following topics: 1. First, we present a framework for computer vision on the GPU, which is used as a foundation for the implementation of computer vision methods. 2. We continue with the discussion of GPU-based implementations of Kernel Methods, including Support Vector Machines and Kernel PCA. 3. Finally, we propose GPU-accelerated implementations of two tracking algorithms. The first algorithm uses geometric templates in a gradient vector field. The second algorithm is a color-based approach in a particle filter framework. Both are able to track objects in a video stream. This thesis concludes with a final discussion of the presented methods and proposes directions for further research work. It also briefly presents the features of the next generation of GPUs.
APA, Harvard, Vancouver, ISO, and other styles
7

Ohmer, Julius Fabian. "Computer vision applications on graphics processing units." Queensland University of Technology, 2007. http://eprints.qut.edu.au/16463/.

Full text
Abstract:
Over the last few years, commodity Graphics Processing Units (GPUs) have evolved from fixed graphics pipeline processors into more flexible and powerful data-parallel processors. These stream processors are capable of sustaining computation rates of greater than ten times that of a single-core CPU. GPUs are inexpensive and are becoming ubiquitous in a wide variety of computer architectures including desktop and laptop computers, PDAs and cell phones. This research work investigates possible ways to use modern GPUs for real-time computer vision and pattern classification tasks. Special attention is paid to algorithms where the power of the CPU is a limiting factor. This is in particular the case for real-time tracking algorithms on video streams, where many candidate regions must be evaluated at once to allow stable tracking of features; they impose a high computational burden on sequential processing units such as the CPU. The implementation proposed in this thesis targets standard PC platforms rather than expensive dedicated hardware, to allow a broad variety of users to benefit from powerful computer vision applications. In particular, this thesis includes the following topics: 1. First, we present a framework for computer vision on the GPU, which is used as a foundation for the implementation of computer vision methods. 2. We continue with the discussion of GPU-based implementations of Kernel Methods, including Support Vector Machines and Kernel PCA. 3. Finally, we propose GPU-accelerated implementations of two tracking algorithms. The first algorithm uses geometric templates in a gradient vector field. The second algorithm is a color-based approach in a particle filter framework. Both are able to track objects in a video stream. This thesis concludes with a final discussion of the presented methods and proposes directions for further research work. It also briefly presents the features of the next generation of GPUs.
APA, Harvard, Vancouver, ISO, and other styles
8

Hodgkinson, Derek Anthony Martin. "Computer graphics applications in offshore hydrodynamics." Thesis, University of British Columbia, 1987. http://hdl.handle.net/2429/26705.

Full text
Abstract:
The results of hydrodynamic analyses of two problems involving offshore structures are displayed graphically. This form of presentation of the results and the liberal use of colour have been found to significantly help the ease with which the results are interpreted. For the transformation of waves around an artificial island, a time history of the evolution of the regular, unidirectional wave field around an artificial island is obtained. Through the use of colour, regions in which wave breaking occurs have been clearly defined. The numerical technique used is based on the finite element method using eight noded isoparametric elements. The determination of the transformed wave field takes wave breaking, wave refraction, diffraction, reflection and shoaling into account. The graphical display is achieved by using a plotting program developed for the output of finite element analyses. The motions of a semi-submersible rig are computed from the RAO curves of the rig, used to obtain its small response in a random sea. The numerical technique used in the analysis assumes that the vertical members are slender and may be analysed using the Morison equation whereas the hulls are treated as large members which are discretised and analysed using diffraction theory. The discretisation of the cylinders and hulls together with the time history of the rig's motions are displayed graphically. Once again, the graphical display is plotted using a program developed for the output of finite element analyses for four noded elements. In this case, a finite element technique has not been employed but the results were ordered to act as though this is the case. The slender members (cylinders) and large members (hulls) are clearly distinguishable by using different colours. The elements used in the analysis are also clearly shown. The VAX 11/730 system was used to obtain the results shown. A video tape, using the results of a time stepping procedure, was made by successively recording the hardcopies produced by the VAX printer. The time stepping could also be seen, in real time, on the IRIS.
Applied Science, Faculty of
Civil Engineering, Department of
Graduate
APA, Harvard, Vancouver, ISO, and other styles
9

Colombi, David Paul. "Computer applications for the probation service." Thesis, University of Southampton, 1993. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.241600.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Sofeikov, Konstantin Igorevich. "Measure concentration in computer vision applications." Thesis, University of Leicester, 2018. http://hdl.handle.net/2381/42791.

Full text
Abstract:
We live in the Information Age. In this age, the technology industry allows individuals to explore their personalized needs, thereby simplifying the procedure of making decisions. It also allows big global market players to leverage the amounts of information they collect over time in order to excel in the markets they are operating in. Huge and often incomprehensible volumes of information collected to date constitute the phenomenon of Big Data. Big Data is a term used to describe datasets that are not suitable for processing by traditional software. To date, the commonly used way to get value out of Big Data is to employ a wide range of machine learning techniques. Machine learning is genuinely data-driven. The more data are available the better, from a statistical point of view. This enables the creation of an extensive range of applications for a broad spectrum of modeling and predictive tasks. Traditional methods of machine learning (e.g. linear models) are easy to implement and give computationally cheap solutions. These solutions, however, are not always capable of capturing the underlying complexity of Big Data. More sophisticated approaches (e.g. Convolutional Neural Networks in computer vision) are shown empirically to be reliable, but this reliability bears high computational costs. A natural way to overcome this obstacle appears to be reduction of Data Volume (the number of factors, attributes and records). Doing so, however, is an extremely tedious and non-trivial task in itself. In this thesis we show that, thanks to the well-known concentration of measure effect, it is often beneficial to keep the dimensionality of the problem high and use it to one's advantage. The measure concentration effect is a phenomenon that can only be found in high-dimensional spaces. One of the theoretical findings of this thesis is that using the measure concentration effect allows one to correct individual mistakes of Artificial Intelligence (AI) systems in a cheap and non-intrusive way. Specifically, we show how to correct AI system errors with a linear functional while not changing their inner decision making processes. As an illustration of how one can benefit from this, we have developed a Knowledge Transfer framework for legacy AI systems. The development of this framework is also an answer to a fundamental question: how a legacy "student" AI system could learn from a "teacher" AI system without complete retraining. Theoretical findings are illustrated with several case studies in the area of computer vision.
APA, Harvard, Vancouver, ISO, and other styles
11

King, Daniel E. "Computer Applications At The Village Mailbox." NSUWorks, 1993. http://nsuworks.nova.edu/gscis_etd/634.

Full text
Abstract:
The content of this dissertation includes the collection of data for the design, evaluation and distribution of a survey questionnaire to collect the information needed to develop a "Small Business Owner's Guide to Computer Applications." By working closely with the manager/owner of The Village Mailbox in Portsmouth, Virginia, computer application areas were identified, and the questionnaire was developed and then validated through pilot testing. After pilot testing, the survey was distributed through three mailings to ensure maximum participation. Responses were divided into two groups (strata) and non-franchised responses were compared to franchised responses. Through the use of descriptive statistics, the responses were analyzed for response rate, Pearson r, significance, degrees of freedom, standard deviation, z-test score and significance level, and the Kuder-Richardson KR21 was applied for a reliability score. The collected survey data were analyzed and conclusions were drawn. These conclusions led to the development of the guide to assist owners of small businesses offering mailing services in their decisions about computer applications. This guide was developed to address the problem statement of this study, to give small business owners a logical approach to computer applications, and to serve as a tool for small business planning where it relates to computer applications in mailing service businesses. The survey and the data collected could be generalized to all small businesses offering mailing services. Small businesses offering mailing services should find the study interesting and the results useful in providing information related to computer applications. To use this information in any other environment would require research that included information specific to that environment. Information collected here was meant for use only in the area of small businesses offering mailing services.
APA, Harvard, Vancouver, ISO, and other styles
12

Guillory, Helen E. (Helen Elizabeth). "Computer Applications to Second Language Acquisition." Thesis, University of North Texas, 1991. https://digital.library.unt.edu/ark:/67531/metadc504628/.

Full text
Abstract:
This thesis is intended to give a panorama of technology in foreign language pedagogy. Although my field of study is French, the computer applications under scrutiny do not relate solely to the teaching of French. This paper begins with a criticism of the rigid listen-and-repeat language laboratory concept while tracking the rise of communicative language learning theory; follows the microprocessor revolution in language consoles; documents the development of computer-assisted instruction; showcases software evaluations of computer-assisted language learning; explores telecommunications; discusses satellite dishes and other computer peripherals; presents the results of a survey of Texas universities; and concludes with the presentation of the evolving language media center.
APA, Harvard, Vancouver, ISO, and other styles
13

Pellegrini, Lorenzo <1993&gt. "Continual learning for computer vision applications." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2022. http://amsdottorato.unibo.it/10401/1/Lorenzo%20Pellegrini%20-%20PhD%20Thesis.pdf.

Full text
Abstract:
One of the most visionary goals of Artificial Intelligence is to create a system able to mimic and eventually surpass the intelligence observed in biological systems including, ambitiously, the one observed in humans. The main distinctive strength of humans is their ability to build a deep understanding of the world by learning continuously and drawing from their experiences. This ability, which is found in various degrees in all intelligent biological beings, allows them to adapt and properly react to changes by incrementally expanding and refining their knowledge. Arguably, achieving this ability is one of the main goals of Artificial Intelligence and a cornerstone towards the creation of intelligent artificial agents. Modern Deep Learning approaches allowed researchers and industries to achieve great advancements towards the resolution of many long-standing problems in areas like Computer Vision and Natural Language Processing. However, while this current age of renewed interest in AI allowed for the creation of extremely useful applications, a concerningly limited effort is being directed towards the design of systems able to learn continuously. The biggest problem that hinders an AI system from learning incrementally is the catastrophic forgetting phenomenon. This phenomenon, which was discovered in the 90s, naturally occurs in Deep Learning architectures where classic learning paradigms are applied when learning incrementally from a stream of experiences. This dissertation revolves around the Continual Learning field, a sub-field of Machine Learning research that has recently made a comeback following the renewed interest in Deep Learning approaches. This work will focus on a comprehensive view of continual learning by considering algorithmic, benchmarking, and applicative aspects of this field. This dissertation will also touch on community aspects such as the design and creation of research tools aimed at supporting Continual Learning research, and the theoretical and practical aspects concerning public competitions in this field.
APA, Harvard, Vancouver, ISO, and other styles
14

Miller, G. S. P. "Computer display and manufacture of 3-D models." Thesis, University of Cambridge, 1987. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.235044.

Full text
Abstract:
The thesis is concerned with describing new ways of using computers to create images of 3-dimensional designs. It also introduces novel methods for manufacturing some of these designs using numerically controlled machine tools. The work began as an extension of an existing surface design package called 'DUCT'. This was a program capable of holding descriptions of subtly curved surfaces, but which could only display them using line drawings. It was the first task of the author to examine different methods for depicting surfaces and then to decide which one was the most suitable for use in conjunction with industrial design work. Once done, this led on to ways in which the rendering methods could be improved. These improvements then enabled the package to be used in new application areas such as realistic image synthesis for advertising and animation. In tandem with this, new methods were developed for verifying the machining paths generated by DUCT for use on 3-axis milling machines. The methods developed for machining path verification were then extended to give improved techniques for the generation of such machining paths. The new approach allowed the manufacture of objects which were beyond the scope of previous surface design systems. The work on depicting objects manufactured using a 3-axis milling machine drew attention to the related problem of depicting realistic terrain. The author improved the existing methods for defining detailed surfaces such as mountains, and then went on to suggest new techniques for rendering such terrain in perspective. The new algorithms led naturally to the possibility of implementation on parallel computers and a paper study was made of the trade-offs involved in choosing different parallel computing architectures.
APA, Harvard, Vancouver, ISO, and other styles
15

Tran, Sang Cong. "Applications of formal methods in engineering." Thesis, University of Warwick, 1991. http://wrap.warwick.ac.uk/60452/.

Full text
Abstract:
The main idea presented in this thesis is to propose and justify a general framework for the development of safety-related systems based on a selection of criticality and the required level of integrity. We show that formal methods can be practically and consistently introduced into the system design lifecycle without incurring excessive development cost. An insight into the process of generating and validating a formal specification from an engineering point of view is illustrated, in conjunction with formal definitions of specification models, safety criteria and risk assessments. Engineering specifications are classified into two main classes of systems, memoryless and memory bearing systems. Heuristic approaches for specification generation and validation of these systems are presented and discussed with a brief summary of currently available formal systems and their supporting tools. It is further shown that to efficiently address different aspects of real-world problems, the concept of embedding one logic within another mechanised logic, in order to provide mechanical support for proofs and reasoning, is practical. A temporal logic framework, which is embedded in Higher Order Logic, is used to verify and validate the design of a real-time system. Formal definitions and properties of temporal operators are defined in HOL and real-time concepts such as timing marker, interrupt and timeout are presented. A second major case study is presented on the specification of a solid model for mechanical parts. This work discusses the modelling theory with set theoretic topology and Boolean operations. The theory is used to specify the mechanical properties of large distribution transformers. Associated mechanical properties such as volumetric operations are also discussed.
APA, Harvard, Vancouver, ISO, and other styles
16

Luhamba, John K. M. "Evaluating The Use Of Laptop Computers In Teaching Construction Computer Applications At The College Of Technology, Bowling Green State University." Bowling Green State University / OhioLINK, 2007. http://rave.ohiolink.edu/etdc/view?acc_num=bgsu1185651720.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Elfström, Adam. "The State of Progressive Web Applications : an investigation of the experiences and opinions of developers in the industry." Thesis, Linnéuniversitetet, Institutionen för datavetenskap och medieteknik (DM), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-104891.

Full text
Abstract:
Mobile applications can be developed using a variety of different techniques and technologies. One of the most recent of these techniques is the Progressive Web Application (PWA), a cross-platform solution that is built exclusively using common web technologies. The technique has great potential to become a major competitor to native applications but is currently held back by a few rather significant limitations. This project was initiated because of a significant lack of academic research on the topic of PWA, and a perceived poor level of knowledge in the industry about the technique. The goal of the project was to determine if PWA deserved broader utilization or if the current low level of adoption was justified. During the project, two surveys were conducted. The first survey asked mobile application developers from companies in different countries about things such as their knowledge of, experience with, and opinions of PWA. The second survey asked similar questions but was instead answered by lecturers in higher education in Sweden only. The results of this project show that the average level of knowledge of PWA is very low and that developers' opinions of the technique are quite negative. The limitations of PWA were found to be few but crippling to its potential to achieve widespread adoption.
APA, Harvard, Vancouver, ISO, and other styles
18

Pai, Hsueh-Ieng 1975. "Applications of extensible markup language to mobile application patterns." Thesis, McGill University, 2002. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=33817.

Full text
Abstract:
Mobile applications provide services that can benefit various sectors of the society. It is imperative that the realization of mobile applications be well-planned and based on standards. The interplay of Extensible Markup Language (XML)-based technologies, software engineering principles, and "best practices" formalized as patterns provides such a systematic approach. This thesis formulates a rigorous classification of mobile application patterns into four categories: architecture, process, product, and usage. To express the patterns in a universal manner, an XML-based pattern notation, the Mobile Application Patterns Markup Language (MAPML), is introduced. Based on a requirements analysis and design description, the specification of MAPML is given. MAPML is equipped with a collection of tools that includes formal grammars for MAPML, and solutions for authoring, processing, and presenting MAPML documents on the Web. The thesis concludes with a discussion of the current state of work and directions for future improvement.
APA, Harvard, Vancouver, ISO, and other styles
19

Herdman, Andy. "The readying of applications for heterogeneous computing." Thesis, University of Warwick, 2017. http://wrap.warwick.ac.uk/102343/.

Full text
Abstract:
High performance computing is approaching a potentially significant change in architectural design. With pressures on the cost and sheer amount of power, additional architectural features are emerging which require a re-think of the programming models deployed over the last two decades. Today's emerging high performance computing (HPC) systems are maximising performance per unit of power consumed, resulting in the constituent parts of the system being made up of a range of different specialised building blocks, each with their own purpose. This heterogeneity is not just limited to the hardware components but also to the mechanisms that exploit the hardware components. These multiple levels of parallelism, instruction sets and memory hierarchies result in truly heterogeneous computing in all aspects of the global system. These emerging architectural solutions will require the software to exploit tremendous amounts of on-node parallelism, and indeed programming models to address this are emerging. In theory, the application developer can design new software using these models to exploit emerging low power architectures. However, in practice, real industrial scale applications last the lifetimes of many architectural generations and therefore require a migration path to these next generation supercomputing platforms. Identifying that migration path is non-trivial: with applications spanning many decades, consisting of many millions of lines of code and multiple scientific algorithms, any changes to the programming model will be extensive and invasive and may turn out to be the incorrect model for the application in question. This makes exploration of these emerging architectures and programming models using the applications themselves problematic. Additionally, the source code of many industrial applications is not available either due to commercial or security sensitivity constraints. This thesis highlights this problem by assessing current and emerging hardware with an industrial-strength code, and demonstrating the issues described. In turn it looks at the methodology of using proxy applications in place of real industry applications, to assess their suitability on the next generation of low power HPC offerings. It shows there are significant benefits to be realised in using proxy applications, in that fundamental issues inhibiting exploration of a particular architecture are easier to identify and hence address. Evaluations of maturity and performance portability are explored for a number of alternative programming methodologies on a number of architectures, highlighting the broader adoption of these proxy applications both within the author's own organisation and across the industry as a whole.
APA, Harvard, Vancouver, ISO, and other styles
20

Cheda, Diego. "Monocular Depth Cues in Computer Vision Applications." Doctoral thesis, Universitat Autònoma de Barcelona, 2012. http://hdl.handle.net/10803/121644.

Full text
Abstract:
Depth perception is a key aspect of human vision. It is a routine and essential visual task that humans do effortlessly in many daily activities. This has often been associated with stereo vision, but humans have an amazing ability to perceive depth relations even from a single image by using several monocular cues. In the computer vision field, if image depth information were available, many tasks could be posed from a different perspective for the sake of higher performance and robustness. Nevertheless, given a single image, this possibility is usually discarded, since obtaining depth information has frequently been performed by three-dimensional reconstruction techniques, requiring two or more images of the same scene taken from different viewpoints. Recently, some proposals have shown the feasibility of computing depth information from single images. In essence, the idea is to take advantage of a priori knowledge of the acquisition conditions and the observed scene to estimate depth from monocular pictorial cues. These approaches try to precisely estimate the scene depth maps by employing computationally demanding techniques. However, to assist many computer vision algorithms, it is not really necessary to compute a costly and detailed depth map of the image. Indeed, just a rough depth description can be very valuable in many problems. In this thesis, we have demonstrated how coarse depth information can be integrated in different tasks following holistic and alternative strategies to obtain more precise and robust results. In that sense, we have proposed a simple but reliable enough technique, whereby image scene regions are categorized into discrete depth ranges to build a coarse depth map. Based on this representation, we have explored the potential usefulness of our method in three application domains from novel viewpoints: camera rotation parameter estimation, background estimation and pedestrian candidate generation. In the first case, we have computed the rotation of a camera mounted in a moving vehicle using two novel methods that identify distant elements in the image, where the translation component of the image flow field is negligible. In background estimation, we have proposed a novel method to reconstruct the background by penalizing close regions in a cost function, which integrates color, motion, and depth terms. Finally, we have benefited from the geometric and depth information available in single images for pedestrian candidate generation to significantly reduce the number of generated windows to be further processed by a pedestrian classifier. In all cases, results have shown that our depth-based approaches contribute to better performance.
APA, Harvard, Vancouver, ISO, and other styles
21

Mendes, Barbosa Álvaro Manuel. "Computer-suported cooperative work for music applications." Doctoral thesis, Universitat Pompeu Fabra, 2006. http://hdl.handle.net/10803/7536.

Full text
Abstract:
This dissertation derives from research on musical practices mediated by computer networks conducted from 2001 to 2005 in the Music Technology Group of the Pompeu Fabra University in Barcelona, Spain. It departs from work carried out over the last decades in the field of Computer-Supported Cooperative Work (CSCW), which provides us with collaborative communication mechanisms that can be regarded from a music perspective in diverse scenarios: Composition, Performance, Improvisation or Education.
The first contribution originating from this research work is an extensive survey and systematic classification of Computer-Supported Cooperative Work for Music Applications. This survey led to the identification of innovative approaches, models and applications, with special emphasis on the shared nature of geographically displaced communication over the Internet. The notion of Shared Sonic Environments was introduced and implemented in a proof-of-concept application entitled Public Sound Objects (PSOs).
A second major contribution of this dissertation concerns methods that reduce the disruptive effect of network latency in musical communication over long distance networks. From laboratory experimentation and evaluation, the techniques of Network Latency Adaptive Tempo and Individual Delayed Feed-Back were proposed and implemented in the PSOs prototype.
Over the course of the PSOs development, other relevant and inspirational issues were addressed, such as behavior-driven interface design applied to interface-decoupled applications, overcoming network security features, and system scalability for various applications in audio web services.
Throughout this dissertation, conceptual perspectives on issues related to computer-mediated musical practices were widely discussed, conveying different standpoints ranging from a psycho-social study of collaborative music processes to the Computer Science and Music Technology point of view.
APA, Harvard, Vancouver, ISO, and other styles
22

Stumpf, Barbara A. "The learning of computer applications, students' perceptions." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/mq21103.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Lowman, Tim. "Secure Computer Applications in an Enterprise Environment." NCSU, 1999. http://www.lib.ncsu.edu/theses/available/etd-19990401-134848.

Full text
Abstract:

Sophisticated computing environments support many of the complex tasks which arise in modern enterprises. An enterprise environment is a collective of the organization's software, hardware, networking, and data systems. Typically, many user workstations communicate with shared servers, balancing computer processing throughout the organization. In a "secure" modern enterprise, issues of authentication, private communication, and protected, shared data space must be addressed. In this thesis we present a general model for adding security to the currently popular enterprise architecture: the World Wide Web (WWW).

The results of our investigation into adding security to the general WWW architecture are reported in this document. We focus on authenticating users (Kerberos), establishing a secure communication link for private data exchange (SSL), protected space to store shared data (AFS filesystem), and an enhanced server (Apache) to integrate these components. After presenting our secure model, we describe a prototype application, built using our approach, which addresses a common problem of secure online submission of homework assignments in a university environment.

APA, Harvard, Vancouver, ISO, and other styles
24

Gordon, Neil Andrew. "Finite geometry and computer algebra, with applications." Thesis, University of Hull, 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.262412.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Roomi, Akeel S. "Multiprocessor computer architectures : algorithmic design and applications." Thesis, Loughborough University, 1989. https://dspace.lboro.ac.uk/2134/10872.

Full text
Abstract:
The contents of this thesis are concerned with the implementation of parallel algorithms for solving partial differential equations (PDEs) by the Alternative Group EXplicit (AGE) method and an investigation into the numerical inversion of the Laplace transform on the Balance 8000 MIMD system. Parallel computer architectures are introduced with different types of existing parallel computers, including the Data-Flow computer and VLSI technology, which are described from both the hardware and implementation points of view. The main characteristics of the Sequent parallel computer system at Loughborough University are presented, and performance indicators, i.e., the speed-up and efficiency factors, are defined for the measurement of parallelism in the system. Basic ideas of programming such computers are also outlined.....
APA, Harvard, Vancouver, ISO, and other styles
26

An, Hong. "Computer-aided applications in process plant safety." Thesis, Loughborough University, 2010. https://dspace.lboro.ac.uk/2134/6418.

Full text
Abstract:
Process plants that produce chemical products through pre-designed processes are fundamental in the Chemical Engineering industry. The safety of hazardous processing plants is of paramount importance as an accident could cause major damage to property and/or injury to people. HAZID is a computer system that helps designers and operators of process plants to identify potential design and operation problems given a process plant design. However, there are issues that need to be addressed before such a system will be accepted for common use. This research project considers how to improve the usability and acceptability of such a system by developing tools to test the developed models in order for the users to gain confidence in HAZID's output, as HAZID is a model-based system with a library of equipment models. The research also investigates the development of computer-aided safety applications and how they can be integrated together to extend HAZID to support different kinds of safety-related reasoning tasks. Three computer-aided tools and one reasoning system have been developed from this project. The first is called the Model Test Bed, which is used to test the correctness of models that have been built. The second is called the Safe Isolation Tool, which is used to define isolation boundaries and identify potential hazards for isolation work. The third is an Instrument Checker, which lists all the instruments and their connections with process items in a process plant for the engineers to consider whether the instrument and its loop provide safeguards to the equipment during the hazard identification procedure. The fourth is a cause-effect analysis system that can automatically generate cause-effect tables for the control engineers to consider the safety design of the control of a plant, as the table shows process events and corresponding process responses designed by the control engineer. The thesis provides a full description of the above four tools and how they are integrated into the HAZID system to perform control safety analysis and hazard identification in process plants.
APA, Harvard, Vancouver, ISO, and other styles
27

Perwass, Christian Bernd Ulrich. "Applications of geometric algebra in computer vision." Thesis, University of Cambridge, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.621941.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Wang, Yuan-Fang. "Computer Vision Analysis for Vehicular Safety Applications." International Foundation for Telemetering, 2015. http://hdl.handle.net/10150/596451.

Full text
Abstract:
ITC/USA 2015 Conference Proceedings / The Fifty-First Annual International Telemetering Conference and Technical Exhibition / October 26-29, 2015 / Bally's Hotel & Convention Center, Las Vegas, NV
In this paper, we present our research on using computer-vision analysis for vehicular safety applications. Our research has potential applications for both autonomous vehicles and connected vehicles. In particular, for connected vehicles, we propose three image analysis algorithms that enhance the quality of a vehicle's on-board video before inter-vehicular information exchange takes place. For autonomous vehicles, we are investigating a visual analysis scheme for collision avoidance during back up and an algorithm for automated 3D map building. These algorithms are relevant to the telemetering domain as they involve determining the relative pose between a vehicle and other vehicles on the road, or between a vehicle and its 3D driving environment, or between a vehicle and obstacles surrounding the vehicle.
APA, Harvard, Vancouver, ISO, and other styles
29

Tabak, Daniel. "VLSI ORIENTED COMPUTER ARCHITECTURE AND SOME APPLICATIONS." International Foundation for Telemetering, 1985. http://hdl.handle.net/10150/615746.

Full text
Abstract:
International Telemetering Conference Proceedings / October 28-31, 1985 / Riviera Hotel, Las Vegas, Nevada
The paper surveys the particular problems arising in the architectural design of computing systems realized on VLSI chips. Particular difficulties due to limited on-chip density and power dissipation are discussed. The difficulties of the realization of on-chip communications between various subsystems (among themselves and with other off-chip systems) are stressed. A number of design principles for the realization of on-chip communication paths is presented. Two design philosophies for the instruction set design in a VLSI environment are brought up: (a) the large microcoded instruction set, and (b) the Reduced Instruction Set Computer (RISC) approach, based on the Streamlined Instruction Set Design. A survey of the author's research group's work in this area is presented. This includes the ZT-1 single chip microcomputer, RISC computing space studies, applications to distributed traffic control and a large-scale, reconfigurable communications system.
APA, Harvard, Vancouver, ISO, and other styles
30

Chung, Wai Hing. "Teaching computer control applications : a programming approach." Thesis, University of Edinburgh, 1986. http://hdl.handle.net/1842/19628.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Maynes-Aminzade, Daniel. "Interactive visual prototyping of computer vision applications /." May be available electronically:, 2008. http://proquest.umi.com/login?COPT=REJTPTU1MTUmSU5UPTAmVkVSPTI=&clientId=12498.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Whitaker, Robert Bruce. "Applying Information Visualization to Computer Security Applications." DigitalCommons@USU, 2010. https://digitalcommons.usu.edu/etd/636.

Full text
Abstract:
This thesis presents two phases of research in applying visualization to network security challenges. The first phase included discovering the most useful and powerful features in existing computer security visualizations and incorporating them into the AdviseAid visualization platform, an existing software package. The incorporation of such a complete feature set required novel resolution of software engineering, human factors, and computer graphics issues. We also designed additional novel features, such as plugin interfaces, allowing for rapid prototyping and experimentation with novel visualization features and capabilities. The second phase of the research focused on the development of novel visualization techniques themselves. These novel visualizations were designed and created within AdviseAid to demonstrate that the features of AdviseAid are functional and helpful in the development process, as well as to be effective in the analysis of computer networks in their own right.
APA, Harvard, Vancouver, ISO, and other styles
33

PAOLANTI, MARINA. "Pattern Recognition for challenging Computer Vision Applications." Doctoral thesis, Università Politecnica delle Marche, 2018. http://hdl.handle.net/11566/252904.

Full text
Abstract:
Pattern Recognition is the study of how machines can observe the environment, learn to distinguish patterns of interest from their background, and make sound and reasonable decisions about pattern categories. Nowadays, the application of Pattern Recognition algorithms and techniques is ubiquitous and transversal. The availability of affordable and high-resolution sensors (e.g., RGB-D cameras, microphones and scanners) and data sharing have resulted in huge repositories of digitized documents (text, speech, image and video), and with the recent advances in computer vision we now have the ability to mine such massive visual data to obtain valuable insight about what is happening in the world. Starting from such a premise, this thesis addresses the topic of developing next-generation Pattern Recognition systems for real applications such as Biology, Retail, Surveillance, Social Media Intelligence and Digital Cultural Heritage. The main goal is to develop computer vision applications in which Pattern Recognition is the key core of their design, starting from general methods that can be exploited in several fields, and then passing to methods and techniques addressing specific problems. The privileged focus is on up-to-date applications of Pattern Recognition techniques to real-world problems, and on interdisciplinary research and experimental and/or theoretical studies yielding new insights that advance Pattern Recognition methods. The final ambition is to spur new research lines, especially within interdisciplinary research scenarios. Faced with many types of data, such as images, biological data and trajectories, a key difficulty was to find relevant vectorial representations. While this problem has often been handled in an ad-hoc way by domain experts, it has proved useful to learn these representations directly from data, and Machine Learning algorithms, statistical methods and Deep Learning techniques have been particularly successful. The representations are then based on compositions of simple parameterized processing units, the depth coming from the large number of such compositions. It was desirable to develop new, efficient data representation or feature learning/indexing techniques which can achieve promising performance in the related tasks. The overarching goal of this work is to present a pipeline that selects the model best explaining the given observations; it does not, however, prioritize memory and time complexity when matching models to observations. For the Pattern Recognition system design, the following steps are performed: data collection, feature extraction, a tailored learning approach, and comparative analysis and assessment. The proposed applications open up a wealth of novel and important opportunities for the machine vision community. The newly collected datasets, as well as the complex areas taken into examination, make the research challenging. In fact, it is crucial to evaluate the performance of state-of-the-art methods to demonstrate their strengths and weaknesses and to help identify future research for designing more robust algorithms. For comprehensive performance evaluation, it is of great importance to develop a library and benchmark to gauge the state of the art, because method designs that are tuned to a specific problem do not work properly on other problems. Furthermore, datasets must be selected from different application domains in order to offer the user the opportunity to prove the broad validity of the methods.
Intensive attention has been drawn to the exploration of tailored learning models and algorithms, and to their extension to more application areas. The tailored methods adopted for the development of the proposed applications have shown themselves capable of extracting complex statistical features and of efficiently learning their representations, allowing them to generalize well across a wide variety of computer vision tasks, including image classification, text recognition and so on.
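To make the pipeline named above concrete (data collection, feature extraction, a learning approach, and comparative assessment), the following minimal sketch chains these steps on a toy dataset with scikit-learn. It is illustrative only and does not reproduce the tailored methods developed in the thesis.

# Minimal sketch of the pipeline named in the abstract: data collection,
# feature extraction, a learning approach, and comparative assessment.
# Illustrative only; it does not reproduce the thesis's tailored methods.
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)          # "data collection" stand-in

candidates = {
    "pca+logreg": make_pipeline(PCA(n_components=32), LogisticRegression(max_iter=2000)),
    "pca+svm":    make_pipeline(PCA(n_components=32), SVC(kernel="rbf")),
}

# Comparative analysis and assessment: cross-validated accuracy per candidate.
for name, model in candidates.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean accuracy = {scores.mean():.3f}")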
APA, Harvard, Vancouver, ISO, and other styles
34

Buoncompagni, Simone <1987&gt. "Computer Vision Techniques for Ambient Intelligence Applications." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2016. http://amsdottorato.unibo.it/7327/1/buoncompagni_simone_tesi.pdf.

Full text
Abstract:
Ambient Intelligence (AmI) is a multidisciplinary area which refers to environments that are sensitive and responsive to the presence of people and objects. The rapid progress of technology and the simultaneous reduction of hardware costs characterizing recent years have enlarged the number of possible AmI applications, thus raising new research challenges at the same time. In particular, one important requirement in AmI is providing proactive support to people in their everyday working and free-time activities. To this aim, Computer Vision represents a core research track, since only through suitable vision devices and techniques is it possible to detect elements of interest and understand the occurring events. The goal of this thesis is to present and demonstrate the efficacy of novel machine vision research contributions for different AmI scenarios: object keypoint analysis for Augmented Reality purposes, segmentation of natural images for plant species recognition, and heterogeneous people identification in unconstrained environments.
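As a rough illustration of the object keypoint analysis mentioned for the Augmented Reality scenario, the sketch below detects and matches ORB keypoints with OpenCV; the image paths are placeholders and the snippet is not the method proposed in the thesis.

# Keypoint detection and matching with ORB, the kind of analysis used to
# anchor virtual content in Augmented Reality. Illustrative sketch only;
# the image paths are placeholders.
import cv2

img1 = cv2.imread("reference.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=1000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Hamming distance is appropriate for ORB's binary descriptors.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
print(f"{len(matches)} matches; best distance {matches[0].distance}")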
APA, Harvard, Vancouver, ISO, and other styles
35

Schiavinato, Michele <1986&gt. "Transformation synchronization with applications in computer vision." Doctoral thesis, Università Ca' Foscari Venezia, 2018. http://hdl.handle.net/10579/13457.

Full text
Abstract:
In the Machine Learning community, matching problems have made a strong contribution to the area of recognition, since correspondences between structures can reveal aspects of similarity between two objects. In Graph Theory and Computer Vision one respectively seeks matchings between the nodes or points of two graphs or images, which constitute a form of transformation from one structure to the other. Although the problem has traditionally been treated on two entities, the need to generalize it to a set of multiple objects is increasingly emerging. In this thesis we exploit transformation synchronization to carry out several works on Multi-Graph and Multi-Point Set matching. In the first category we deal with permutations, developing three approaches: the first is a framework that synchronizes off-line a solution derived independently from an external matching algorithm; the second is a process that can be integrated into a matching algorithm, actively synchronizing the solution during learning; the third further generalizes the search for subgraph correspondences over the multi-simplex space in a common universe of nodes. In the second category we deal with planar homographies between 2D images, developing an optimization process able to determine the plane present in the scene and to classify the points as belonging or not to that planar surface.
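The constraint that synchronization enforces can be illustrated with a small example: pairwise permutations should compose consistently along cycles, so that the map from frame i to frame k equals the composition through any intermediate frame j. The toy check below (NumPy, with pairwise maps built from maps to a common universe) only illustrates this consistency property and is not one of the three approaches developed in the thesis.

# Transformation synchronization enforces cycle consistency: composing the
# pairwise maps 0->1 and 1->2 should equal the direct map 0->2. This toy
# check illustrates the constraint; it is not the thesis's algorithms.
import numpy as np

def perm_matrix(p):
    # Permutation vector p (p[i] = image of i) -> permutation matrix.
    n = len(p)
    P = np.zeros((n, n), dtype=int)
    P[p, np.arange(n)] = 1   # column i has a 1 in row p[i]
    return P

rng = np.random.default_rng(0)
n = 5
# Build consistent pairwise maps from "absolute" maps to a common universe.
A = [perm_matrix(rng.permutation(n)) for _ in range(3)]
P01 = A[1] @ A[0].T   # map from frame 0 to frame 1
P12 = A[2] @ A[1].T   # map from frame 1 to frame 2
P02 = A[2] @ A[0].T   # direct map from frame 0 to frame 2

assert np.array_equal(P12 @ P01, P02)   # cycle consistency holds
print("pairwise maps are cycle consistent")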
APA, Harvard, Vancouver, ISO, and other styles
36

Mahadevan, Karthikeyan. "Estimating reliability impact of biometric devices in large scale applications." Morgantown, W. Va. : [West Virginia University Libraries], 2003. http://etd.wvu.edu/templates/showETD.cfm?recnum=3096.

Full text
Abstract:
Thesis (M.S.)--West Virginia University, 2003.
Title from document title page. Document formatted into pages; contains vii, 66 p. : ill. (some col.). Vita. Includes abstract. Includes bibliographical references (p. 62-64).
APA, Harvard, Vancouver, ISO, and other styles
37

Chester, Adam P. "Towards effective dynamic resource allocation for enterprise applications." Thesis, University of Warwick, 2011. http://wrap.warwick.ac.uk/49959/.

Full text
Abstract:
The growing use of online services requires substantial supporting infrastructure. The efficient deployment of applications relies on the cost effectiveness of commercial hosting providers who deliver an agreed quality of service, as governed by a service level agreement, for a fee. The priorities of the commercial hosting provider are to maximise revenue, by delivering agreed service levels, and to minimise costs, through high resource utilisation. In order to deliver high service levels and resource utilisation, it may be necessary to reorganise resources during periods of high demand. This reorganisation process may be manual or, alternatively, controlled by an autonomous process governed by a dynamic resource allocation algorithm. Dynamic resource allocation has been shown to improve service levels and utilisation and hence profitability. In this thesis several facets of dynamic resource allocation are examined to assess its suitability for the modern data centre. Firstly, three theoretically derived policies are implemented as a middleware for a modern multi-tier Web application and their performance is examined under a range of workloads in a real-world test bed. The scalability of state-of-the-art resource allocation policies is explored in two dimensions, namely the number of applications and the quantity of servers under the control of the resource allocation policy. The results demonstrate that current policies presented in the literature exhibit poor scalability in one or both of these dimensions. A new policy is proposed which has significantly improved scalability characteristics, and the new policy is demonstrated at scale through simulation. The placement of applications across a data centre makes them susceptible to failures in shared infrastructure. To address this issue, an application placement mechanism is developed to augment any dynamic resource allocation policy. The results of this placement mechanism demonstrate a significant improvement in the worst case when compared to a random allocation mechanism. A model for the reallocation of resources in a dynamic resource allocation system is also devised. The model demonstrates that the assumption of a constant resource reallocation cost is invalid under both physical reallocation and migration of virtualised resources.
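As a purely illustrative sketch of what a dynamic resource allocation policy does, the toy greedy rule below moves servers from the least loaded application to the most loaded one while any application remains overloaded; it is not one of the policies implemented or proposed in the thesis, and the per-server capacity figure is an assumption.

# Toy greedy reallocation: repeatedly move one server from the application
# with the lowest load per server to the one with the highest, while some
# application is overloaded. Illustrative only; not a policy from the thesis.
def rebalance(servers, demand, capacity_per_server=100, max_moves=10):
    for _ in range(max_moves):
        load = {a: demand[a] / (servers[a] * capacity_per_server) for a in servers}
        hot = max(load, key=load.get)
        cold = min(load, key=load.get)
        if servers[cold] <= 1 or load[hot] <= 1.0:
            break                     # nothing spare, or nobody is overloaded
        servers[cold] -= 1            # take a server from the coolest app
        servers[hot] += 1             # give it to the hottest app
    return servers

print(rebalance({"app_a": 4, "app_b": 4}, {"app_a": 700, "app_b": 150}))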
APA, Harvard, Vancouver, ISO, and other styles
38

Zhang, Fu. "Intelligent feature selection for neural regression : techniques and applications." Thesis, University of Warwick, 2012. http://wrap.warwick.ac.uk/49639/.

Full text
Abstract:
Feature Selection (FS) and regression are two important technique categories in Data Mining (DM). In general, DM refers to the analysis of observational datasets to extract useful information and to summarise the data so that it can be more understandable and be used more efficiently in terms of storage and processing. FS is the technique of selecting a subset of features that are relevant to the development of learning models. Regression is the process of modelling and identifying the possible relationships between groups of features (variables). Compared with conventional techniques, Intelligent System Techniques (ISTs) are usually preferable due to their flexible capabilities for handling real-life problems and their tolerance of data imprecision, uncertainty, partial truth, etc. This thesis introduces a novel hybrid intelligent technique, namely Sensitive Genetic Neural Optimisation (SGNO), which is capable of reducing the dimensionality of a dataset by identifying the most important group of features. The capability of SGNO is evaluated with four practical applications in three research areas, including plant science, civil engineering and economics. SGNO is constructed using three key techniques, known as the core modules: Genetic Algorithm (GA), Neural Network (NN) and Sensitivity Analysis (SA). The GA module controls the progress of the algorithm and employs the NN module as its fitness function. The SA module quantifies the importance of each available variable using the results generated in the GA module. The global sensitivity scores of the variables are used to determine the importance of the variables; variables with higher sensitivity scores are considered to be more important than variables with lower sensitivity scores. After determining the variables' importance, the performance of SGNO is evaluated using the NN module, which takes various numbers of variables with the highest global sensitivity scores as the inputs. In addition, the symbolic relationship between a group of variables with the highest global sensitivity scores and the model output is discovered using the Multiple-Branch Encoded Genetic Programming (MBE-GP). A total of four datasets have been used to evaluate the performance of SGNO. These datasets involve the prediction of short-term greenhouse tomato yield, prediction of longitudinal dispersion coefficients in natural rivers, prediction of wave overtopping at coastal structures and the modelling of the relationship between the growth of industrial inputs and the growth of the gross industrial output. SGNO was applied to all these datasets to explore its effectiveness at reducing the dimensionality of the datasets. The performance of SGNO is benchmarked against four dimensionality reduction techniques, including Backward Feature Selection (BFS), Forward Feature Selection (FFS), Principal Component Analysis (PCA) and the Genetic Neural Mathematical Method (GNMM). The applications of SGNO on these datasets showed that SGNO is capable of identifying the most important feature groups in the datasets effectively and that the general performance of SGNO is better than that of the benchmarking techniques. Furthermore, the symbolic relationships discovered using MBE-GP can achieve performance competitive with that of NN models in terms of regression accuracy.
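A simplified stand-in for the sensitivity-analysis step is sketched below: a neural regressor is trained and each feature is scored by how much the predictions change when that feature is perturbed. The genetic-algorithm wrapper and the benchmarked datasets of SGNO are omitted; this is illustrative only.

# Simplified stand-in for a sensitivity-analysis step: train a neural
# regressor, then score each feature by how much predictions change when
# that feature is perturbed. The GA wrapper of SGNO is omitted here.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.neural_network import MLPRegressor

X, y = make_regression(n_samples=400, n_features=8, n_informative=3, random_state=0)
net = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0).fit(X, y)

def sensitivity(model, X, j, delta=0.5):
    # Shift feature j by a fraction of its standard deviation and measure
    # the mean absolute change in the model's predictions.
    Xp = X.copy()
    Xp[:, j] += delta * X[:, j].std()
    return np.mean(np.abs(model.predict(Xp) - model.predict(X)))

scores = [sensitivity(net, X, j) for j in range(X.shape[1])]
ranking = np.argsort(scores)[::-1]
print("features ranked by sensitivity:", ranking)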
APA, Harvard, Vancouver, ISO, and other styles
39

Mercanti, Ivan. "Models and applications for the Bitcoin ecosystem." Thesis, IMT Alti Studi Lucca, 2022. http://e-theses.imtlucca.it/357/1/Mercanti_phdthesis.pdf.

Full text
Abstract:
Cryptocurrencies are widely known and used principally as a means of investment and payment by more and more users outside the restricted circle of technologists and computer scientists. However, like fiat money, they can also be used as a means for illegal activities, exploiting their pseudo-anonymity and the ease and speed with which capital can be moved. This thesis aims to provide a suite of tools and models to better analyze and understand several aspects of the Bitcoin blockchain. In particular, we developed a visual tool that highlights transaction islands, i.e., the sub-graphs disconnected from the super-graph which represents the whole blockchain. We also show the distribution of Bitcoin transaction types and define new classes of non-standard transactions. We analyze address reuse in Bitcoin, showing that it corresponds to malicious activities in the Bitcoin ecosystem. Then we investigate whether solid or weak forms of arbitrage strategies are possible by trading across different Bitcoin exchanges. We found that the Bitcoin price/exchange rate is influenced by future and past events. Finally, we present a stochastic model to quantitatively analyze different consensus protocols. In particular, the probabilistic analysis of the Bitcoin model highlights how forks happen and how they depend on specific parameters of the protocol.
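For intuition about why fork frequency depends on protocol parameters, a commonly used simplification (not necessarily the stochastic model developed in the thesis) treats block arrivals as Poisson with mean interval T and asks for the probability that a competing block is found within the propagation delay tau, roughly 1 - e^(-tau/T):

# A common simplification for reasoning about forks (not necessarily the
# stochastic model of the thesis): with Poisson block arrivals of mean
# interval T and propagation delay tau, a competing block appears before
# the first one has propagated with probability about 1 - exp(-tau / T).
import math

def fork_probability(propagation_delay_s, mean_block_interval_s=600.0):
    return 1.0 - math.exp(-propagation_delay_s / mean_block_interval_s)

for delay in (2, 10, 30):
    print(f"delay {delay:>2}s -> fork probability ~ {fork_probability(delay):.4f}")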
APA, Harvard, Vancouver, ISO, and other styles
40

Lalani, Nisar. "Validation of Internet Applications." Thesis, Karlstad University, Faculty of Economic Sciences, Communication and IT, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:kau:diva-449.

Full text
Abstract:

Today, applications for the Internet (web sites and other applications) are verified using proprietary test solutions: an application is developed, and another application is developed to test the first one. The Test Competence Centre at Ericsson AB has expertise in testing telecom applications using the TTCN-2 and TTCN-3 notations. These notations have a lot of potential and are being used for testing in various areas. So far, not much work has been done on using TTCN notations for testing Internet applications, and this thesis was a step through which the capabilities and possibilities of the TTCN notation in Web testing could be determined. It presents the results of an investigation of three different test technologies/tools (TTCN-2, TTCN-3 and a proprietary free software tool, PureTest) to see which one is best for testing Internet applications and what drawbacks and benefits each technology has. The background topics included are a brief introduction to software testing and web testing, a short introduction to the TTCN language and its versions 2 and 3, a description of the tool set representing the chosen technologies, a conceptual view of how the tools work, a short description of the HTTP protocol and a description of the HTTP adapter (Test Port). Several benefits and drawbacks were found for all three technologies, but it can be said that, at the moment, a proprietary test solution (PureTest in this case) is still the best tool for testing Internet applications. It scores over the other two technologies (TTCN-2 and TTCN-3) for reasons such as flexibility, cost effectiveness, user friendliness and short lead times for competence development. TTCN-3 is more of a programming language and is certainly more flexible than TTCN-2. TTCN-3 is still evolving and it can be said that it holds promise: some features vital for testing Internet applications are missing, but it is better than TTCN-2.
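As a rough illustration of the kind of HTTP-level check that such tools automate (whether through a TTCN-3 test port or a proprietary tool like PureTest), the sketch below issues a request and verifies the status code and body with Python's standard library; the URL and expected text are placeholders.

# The kind of HTTP-level check that web test tools automate: issue a request,
# then verify status code, headers and body content. Illustrative only;
# the URL and expected text are placeholders.
from urllib.request import urlopen

def check_page(url, expected_status=200, expected_text=b"Welcome"):
    with urlopen(url, timeout=10) as response:
        body = response.read()
        assert response.status == expected_status, f"unexpected status {response.status}"
        assert expected_text in body, "expected text not found in response body"
        print(f"{url}: OK ({len(body)} bytes, {response.headers.get('Content-Type')})")

check_page("http://example.com/", expected_text=b"Example Domain")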

APA, Harvard, Vancouver, ISO, and other styles
41

Lowe, Richard. "Content-driven superpixels and their applications." Thesis, University of Southampton, 2013. https://eprints.soton.ac.uk/351734/.

Full text
Abstract:
This thesis develops a new superpixel algorithm, Content-Driven Superpixels (CDS), that displays excellent visual reconstruction of the original image. It achieves high stability across multiple random initialisations by producing superpixels that correspond directly to local image complexity, which is done by growing superpixels and dividing them on image variation. Existing analyses were not sufficient to take these properties into account, so new measures of oversegmentation provide fresh insight into the optimum superpixel representation. As a consequence of the algorithm, it was discovered that CDS has properties that have eluded previous attempts, such as initialisation invariance and stability. The completely unsupervised nature of CDS makes it highly suitable for tasks such as application to a database containing images of unknown complexity. These new superpixel properties have allowed new applications for superpixel pre-processing to be produced: image segmentation, image compression, scene classification and focus detection. In addition, a new method of objectively analysing regions of focus has been developed using light-field photography.
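The divide-on-variation idea can be illustrated with a toy quadtree split that recursively subdivides regions whose intensity variance is high, so that complex areas end up with more, smaller regions; this is only an illustration of the principle, not the CDS algorithm itself.

# CDS divides superpixels where the image varies. This toy quadtree split on
# intensity variance only illustrates that divide-on-variation idea; it is
# not the CDS algorithm itself.
import numpy as np

def split_on_variance(img, x0, y0, x1, y1, var_thresh=50.0, min_size=8, out=None):
    out = [] if out is None else out
    block = img[y0:y1, x0:x1]
    if block.var() <= var_thresh or min(x1 - x0, y1 - y0) <= min_size:
        out.append((x0, y0, x1, y1))          # homogeneous enough: keep as one region
    else:
        xm, ym = (x0 + x1) // 2, (y0 + y1) // 2
        for (a, b, c, d) in [(x0, y0, xm, ym), (xm, y0, x1, ym),
                             (x0, ym, xm, y1), (xm, ym, x1, y1)]:
            split_on_variance(img, a, b, c, d, var_thresh, min_size, out)
    return out

rng = np.random.default_rng(1)
img = np.zeros((64, 64))
img[16:48, 16:48] = 200 + rng.normal(0, 5, (32, 32))   # textured square on a flat background
regions = split_on_variance(img, 0, 0, 64, 64)
print(f"{len(regions)} regions; complex areas receive more, smaller regions")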
APA, Harvard, Vancouver, ISO, and other styles
42

Truong, L. H. "Dielectrics for high temperature superconducting applications." Thesis, University of Southampton, 2013. https://eprints.soton.ac.uk/355538/.

Full text
Abstract:
This thesis is concerned with the development of condition monitoring for future designs of high temperature superconducting (HTS) power apparatus. In particular, the use of UHF sensing for detecting PD activity within HTS apparatus has been investigated. The results obtained indicate that fast current pulses during PD in LN2 radiate electromagnetic waves which can be captured by the UHF sensor. PD during a negative streamer in LN2 appears in the form of a series of pulses less than 1 μs apart. This sequence cannot be observed using conventional detection methods due to their bandwidth limitation; instead, a slowly damped pulse is recorded which shows the total amount of charge transferred during this period. A study into PD streamer development within LN2 has been undertaken that reveals the characteristics of pre-breakdown phenomena in LN2. For negative streamers, when the electric field exceeds a threshold value, field emission from the electrode becomes effective, which leads to the formation of initial cavities. Breakdown occurs within these gaseous bubbles and results in the development of negative streamers. For positive streamers, the process is much less well understood due to the lack of initial electrons. However, from the recorded current pulses and shadowgraphs, the physical mechanism behind positive streamer development is likely to be a more direct process, such as field ionisation, compared with the step-wise expansion in the case of negative streamers. The mechanisms that cause damage to solid dielectrics immersed in LN2 have also been investigated. The results obtained indicate that pre-breakdown streamers can cause significant damage to the solid insulation barrier. Damage is the result of charge bombardment and mechanical forces rather than thermal effects. Inhomogeneous materials, such as glass fibre reinforced plastic (GRP), tend to introduce surface defects which can create local trapping sites. The trapped charges, when combined with those from streamers, can create much larger PD events. Consequently, damage observed on GRP barriers is much more severe than that on PTFE barriers under similar experimental conditions. Thus, the design of future HTS power apparatus must consider this degradation phenomenon in order to improve the reliability of the insulation system.
APA, Harvard, Vancouver, ISO, and other styles
43

Glover, Kevin. "The genitive ratio and its applications." Thesis, University of Essex, 2016. http://repository.essex.ac.uk/16463/.

Full text
Abstract:
The genitive ratio (GR) is a novel method of classifying nouns as animate, concrete or abstract. English has two genitive (possessive) constructions: possessive-s (the boy's head) and possessive-of (the head of the boy). There is compelling evidence that preference for possessive-s is strongly influenced by the possessor's animacy. A corpus analysis that counts each genitive construction in three conditions (definite, indefinite and no article) confirms that occurrences of possessive-s decline as the animacy hierarchy progresses from animate through concrete to abstract. A computer program (Animyser) is developed to obtain results-counts from phrase-searches of Wikipedia that provide multiple genitive ratios for any target noun. Key ratios are identified and algorithms developed, with specific applications achieving classification accuracies of over 80%. The algorithms, based on logistic regression, produce a score of relative animacy that can be applied to individual nouns or to texts. The genitive ratio is a tool with potential applications in any research domain where the relative animacy of language might be significant. Three such applications exemplify that. Combining GR analysis with other factors might enhance established co-reference (anaphora) resolution algorithms. In sentences formed from pairings of animate with concrete or abstract nouns, the animate noun is usually salient, more likely to be the grammatical subject or thematic agent, and to co-refer with a succeeding pronoun or noun-phrase. Two experiments, online sentence production and corpus-based, demonstrate that the GR algorithm reliably predicts the salient noun. Replication of the online experiment in Italian suggests that the GR might be applied to other languages by using English as a 'bridge'. In a mental health context, studies have indicated that Alzheimer's patients' language becomes progressively more concrete; depressed patients' language more abstract. Analysis of sample texts suggests that the GR might monitor the prognosis of both illnesses, facilitating timely clinical interventions.
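One plausible way to turn phrase counts into a genitive ratio is sketched below: the share of possessive-s among all genitive constructions observed for a noun. The counts are hypothetical and the thesis's specific key ratios and logistic-regression features are not reproduced.

# One plausible way to turn phrase counts into a genitive ratio: the share of
# possessive-s ("the boy's head") among all genitive constructions observed
# for a noun. The thesis's exact key ratios and regression are not reproduced.
def genitive_ratio(count_possessive_s, count_possessive_of):
    total = count_possessive_s + count_possessive_of
    return count_possessive_s / total if total else 0.0

# Hypothetical counts for illustration only (not real corpus figures).
examples = {"boy": (950, 310), "table": (120, 640), "idea": (15, 880)}
for noun, (n_s, n_of) in examples.items():
    print(f"{noun:>5}: GR = {genitive_ratio(n_s, n_of):.2f}")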
APA, Harvard, Vancouver, ISO, and other styles
44

Chenyan, Xu. "Accessing the Power of Aesthetics in Human-computer Interaction." Thesis, University of North Texas, 2013. https://digital.library.unt.edu/ark:/67531/metadc500128/.

Full text
Abstract:
In information systems design there are two schools of thought about what factors are necessary to create a successful information system. The first, conventional view holds that system performance is key, so that efficiency characteristics such as system usability and task completion time are the primary concerns of system designers. The second, emerging view holds that visual design is also key, so that visual interface characteristics such as visual appeal are, in addition to efficiency characteristics, critical concerns of designers. This view contends that visual design enhances system use. Thus, this work examines the effects of visual design on computer systems. Visual design exerts its influence on systems through two mechanisms: it evokes affective responses from IT users, such as arousal and pleasure, and it influences individuals' cognitive assessments of systems. Given that both affective and cognitive reactions are significant antecedents of user behaviors in the IT realm, it is no surprise that visual design plays a critical role in information system success. The human-computer interaction literature indicates that visual aesthetics positively influences such information system success factors as usability, online trust, user satisfaction, flow experience, and so on. Although academic research has introduced visual design into the Information Systems (IS) field and validated its effects, work on visual design is still very limited in three contexts: product aesthetics in e-commerce, mobile applications and commercial emails. This dissertation presents three studies to help fill these theoretical gaps respectively.
APA, Harvard, Vancouver, ISO, and other styles
45

Hansen, Christian Leland. "Towards Comparative Profiling of Parallel Applications with PPerfDB." PDXScholar, 2001. https://pdxscholar.library.pdx.edu/open_access_etds/2666.

Full text
Abstract:
Due to the complex nature of parallel programming, it is difficult to diagnose and solve performance-related problems. Knowledge of program behavior is obtained experimentally, with repeated runs of a slightly modified version of the application or the same code in different environments. In these circumstances, comparative performance analysis can provide meaningful insights into the subtle effects of system and code changes on parallel program behavior by highlighting the difference in performance results across executions. I have designed and implemented modules which extend the PPerfDB performance tool to allow access to existing performance data generated by several commonly used tracing tools. Access occurs from within the experiment management framework provided by PPerfDB for the identification of system parameters, the representation of multiple sets of execution data, and the formulation of data queries. Furthermore, I have designed and implemented an additional module that will generate new data using dynamic instrumentation under the control of PPerfDB. This was done to enable the creation of novel experiments for performance hypothesis testing and to ultimately automate the diagnostic and tuning process. As data from such diverse sources has very different representations, various techniques to allow comparisons are presented. I have generalized the definition of the Performance Difference operator, which automatically detects divergence in multiple data sets, and I have defined an Overlay operation to provide uniform access to both dynamically generated and tracefile-based data. The use and application of these new operations, along with an indication of some of the issues involved in the creation of a fully automatic comparative profiler, are presented via several case studies performed on an IBM SP2 using different versions of an MPI application.
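The idea behind a performance difference operator can be sketched as comparing the same metrics across two executions and flagging those that diverge beyond a relative threshold; the snippet below is a toy version, not PPerfDB's implementation.

# Toy version of the idea behind a performance-difference operator: compare
# the same metrics across two executions and flag those that diverge beyond
# a relative threshold. Not PPerfDB's implementation.
def performance_difference(run_a, run_b, rel_threshold=0.20):
    flagged = {}
    for metric in run_a.keys() & run_b.keys():
        a, b = run_a[metric], run_b[metric]
        rel = abs(a - b) / max(abs(a), abs(b), 1e-12)
        if rel > rel_threshold:
            flagged[metric] = (a, b, rel)
    return flagged

run1 = {"mpi_wait_s": 12.4, "compute_s": 88.0, "io_s": 3.1}
run2 = {"mpi_wait_s": 29.8, "compute_s": 86.5, "io_s": 3.0}
print(performance_difference(run1, run2))   # only mpi_wait_s should be flagged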
APA, Harvard, Vancouver, ISO, and other styles
46

Kim, Jang Don. "Applications performance on reconfigurable computers." Thesis, Massachusetts Institute of Technology, 1997. http://hdl.handle.net/1721.1/42711.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Pandey, Amit Kumar. "Securing Web Applications From Application-Level Attack." Kent State University / OhioLINK, 2007. http://rave.ohiolink.edu/etdc/view?acc_num=kent1181098075.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Ur-Rehman, Wasi. "Maintaining Web Applications Integrity Running on RADIUM." Thesis, University of North Texas, 2015. https://digital.library.unt.edu/ark:/67531/metadc804975/.

Full text
Abstract:
Computer security attacks take place due to the presence of vulnerabilities and bugs in software applications. Bugs and vulnerabilities are the result of weak software architecture and a lack of standard software development practices. Despite the fact that software companies are investing millions of dollars in the research and development of software designs, security risks are still at large. In some cases software applications are found to carry vulnerabilities for many years before being identified; a recent example is the well-known Heartbleed bug in OpenSSL/TLS. In today's world, where new software applications are continuously being developed for a varied community of users, it is highly unlikely to have software applications running without flaws. Attackers exploit these vulnerabilities and bugs and threaten privacy without leaving any trace. The most critical vulnerabilities are those related to the integrity of software applications, because integrity is directly linked to the credibility of an application and the data it contains. Here I give a solution for maintaining the integrity of web applications running on RADIUM by using Daikon. Daikon generates invariants; these invariants are used to maintain the integrity of the web application and also to check the correct behavior of the web application at run time on the RADIUM architecture in case of any attack or malware. I use data invariants and program flow invariants in my solution to maintain the integrity of the web application against such attacks or malware. I check the behavior of my proposed invariants at run time using the LibVMI/Volatility memory introspection tools. This is a novel approach and a proof of concept toward maintaining web application integrity on RADIUM.
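The run-time checking of data invariants can be illustrated with a toy example: a table of inferred properties (ranges, allowed values) is checked against the observed request state, and a violation signals possible tampering. The invariants and field names below are hypothetical, and this is not the RADIUM/Daikon integration described in the abstract.

# Toy illustration of checking inferred data invariants at run time, the kind
# of range/ordering properties a tool like Daikon reports. The fields and
# invariants are hypothetical; this is not the RADIUM/Daikon integration.
invariants = {
    "session_id":  lambda v: isinstance(v, int) and v > 0,
    "role":        lambda v: v in {"guest", "user", "admin"},
    "cart_total":  lambda v: isinstance(v, float) and 0.0 <= v < 10_000.0,
}

def check_request(state):
    violations = [name for name, check in invariants.items()
                  if name in state and not check(state[name])]
    if violations:
        raise RuntimeError(f"integrity violation: {violations}")
    return True

check_request({"session_id": 42, "role": "user", "cart_total": 99.5})      # passes
try:
    check_request({"session_id": -1, "role": "root", "cart_total": 99.5})  # tampered
except RuntimeError as err:
    print(err)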
APA, Harvard, Vancouver, ISO, and other styles
49

Goudie, Robert J. B. "Bayesian structural inference with applications in social science." Thesis, University of Warwick, 2011. http://wrap.warwick.ac.uk/78778/.

Full text
Abstract:
Structural inference for Bayesian networks is useful in situations where the underlying relationship between the variables under study is not well understood. This is often the case in social science settings in which, whilst there are numerous theories about interdependence between factors, there is rarely a consensus view that would form a solid base upon which inference could be performed. However, there are now many social science datasets available with sample sizes large enough to allow a more exploratory structural approach, and this is the approach we investigate in this thesis. In the first part of the thesis, we apply Bayesian model selection to address a key question in empirical economics: why do some people take unnecessary risks with their lives? We investigate this question in the setting of road safety, and demonstrate that less satisfied individuals wear seatbelts less frequently. Bayesian model selection over restricted structures is a useful tool for exploratory analysis, but fuller structural inference is more appealing, especially when there is a considerable quantity of data available, but scant prior information. However, robust structural inference remains an open problem. Surprisingly, it is especially challenging for large n problems, which are sometimes encountered in social science. In the second part of this thesis we develop a new approach that addresses this problem: a Gibbs sampler for structural inference, which we show gives robust results in many settings in which existing methods do not. In the final part of the thesis we use the sampler to investigate depression in adolescents in the US, using data from the Add Health survey. The result stresses the importance of adolescents not getting medical help even when they feel they should, an aspect that has been discussed previously, but not emphasised.
APA, Harvard, Vancouver, ISO, and other styles
50

Chen, Xinuo. "Parallelisation for data-intensive applications over peer-to-peer networks." Thesis, University of Warwick, 2009. http://wrap.warwick.ac.uk/3640/.

Full text
Abstract:
In Data Intensive Computing, the properties of the data that form the input to an application decide running performance in most cases. Those properties include the size of the data, the relationships inside the data, and so forth. There is a class of data intensive applications (BLAST, SETI@home, Folding@Home and so on) whose performance depends solely on the amount of input data. Another important characteristic of these applications is that the input data can be split into units and these units are not related to each other during the runs of the applications. This characteristic allows this class of data intensive applications to be parallelised by splitting the input data into units and running the application on different computer nodes for certain portions of the units. SETI@home and Folding@Home have been successfully parallelised over peer-to-peer networks. However, they suffer from the problems of a single point of failure and poor scalability. In order to solve these problems, we choose BLAST as our example data intensive application and parallelise BLAST over a fully distributed peer-to-peer network. BLAST is a popular bioinformatics toolset which can be used to compare two DNA sequences. The major usage of BLAST is searching a query of sequences inside a database for their similarities so as to identify whether they are new. When comparing a single pair of sequences, BLAST is efficient. However, due to the growing size of the databases, executing BLAST jobs locally produces prohibitively poor performance. Thus, methods for parallelising BLAST are sought. Traditional BLAST parallelisation approaches are all based on clusters. Clusters employ a number of computing nodes and high-bandwidth interlinks between nodes. Cluster-based BLAST exhibits higher performance; nevertheless, clusters suffer from limited resources and scalability problems. Clusters are expensive, prohibitively so when the growth of the sequence databases is taken into account, and increasing the number of nodes to adapt to the growth of BLAST databases involves high cost and complication. Hence a peer-to-peer-based BLAST service is required. This thesis demonstrates our parallelisation of BLAST over peer-to-peer networks (termed ppBLAST), which utilises the free storage and computing resources in peer-to-peer networks to complete BLAST jobs in parallel. In order to achieve this goal, we build three layers in ppBLAST, each of which is responsible for particular functions. The bottom layer is a DHT infrastructure with support for range queries; it provides an efficient range-based lookup service and storage for BLAST tasks. The middle layer is the BitTorrent-based database distribution. The upper layer is the core of ppBLAST, which schedules and dispatches tasks to peers. For each layer we conduct comprehensive research and the achievements are presented in this thesis. For the DHT layer, we design and implement our DAST-DHT. We analyse balancing, the maximum number of children and the accuracy of the range query. We also compare DAST with other range query methodologies and show that, if the number of children is adjusted to more than two, the performance of DAST surpasses the others. For the BitTorrent-like database distribution layer, we investigate the relationship between seeding strategies and selfish leechers (freeriders and exploiters). We conclude that OSS works better than TSS in a normal situation.
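The parallelisation pattern described above, splitting the input into independent units and processing them on different nodes, can be sketched with a local process pool standing in for the peer-to-peer network; the scoring function is a toy placeholder, not BLAST.

# The parallelisation pattern described above: split the input into
# independent units and process them concurrently. A local process pool
# stands in for the peer-to-peer network, and the toy scoring function is a
# placeholder, not BLAST.
from concurrent.futures import ProcessPoolExecutor

def toy_score(unit):
    query, sequence = unit
    return sum(a == b for a, b in zip(query, sequence))   # naive match count

def run_parallel(query, database, workers=4):
    units = [(query, seq) for seq in database]            # independent work units
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(toy_score, units))

if __name__ == "__main__":
    db = ["ACGTACGT", "ACGTTTTT", "GGGGACGT"]
    print(run_parallel("ACGTACGT", db))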
APA, Harvard, Vancouver, ISO, and other styles