Dissertations / Theses on the topic 'Computing tool'

Consult the top 50 dissertations / theses for your research on the topic 'Computing tool.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Welliver, Terrence M. "Configuration tool prototype for the Trusted Computing Exemplar project." Thesis, Monterey, California : Naval Postgraduate School, 2009. http://edocs.nps.edu/npspubs/scholarly/theses/2009/Dec/09Dec%5FWelliver.pdf.

Full text
Abstract:
Thesis (M.S. in Computer Science)--Naval Postgraduate School, December 2009.
Thesis Advisor(s): Irvine, Cynthia E. Second Reader: Clark, Paul C. "December 2009." Description based on title screen as viewed on January 27, 2010. Author(s) subject terms: Trusted computing exemplar, Least privilege separation kernel, Graphical user interface, Wxpython, Java, Configuration vector, LPSK, Configuration vector tool, TCX, GUI, SKPP. Includes bibliographical references (p. 97-98). Also available in print.
2

Pereira, Marcelo Fernandes. "Ubiquitous Computing as a Projectual Tool for Design Teaching." Pontifícia Universidade Católica do Rio de Janeiro, 2012. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=21546@1.

Full text
Abstract:
PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO
Vivemos, atualmente, em um mundo onde as tecnologias de informação trazem inúmeras possibilidades para uma situação de conexão interpessoal permanente. Através das redes sociais, das ferramentas colaborativas de criação e da computação em nuvem, mantemos contato constante com uma gama crescente de dados gerados por todos aqueles com quem convivemos em nossos círculos sociais e profissionais. Os jovens universitários de hoje não percebem essas tecnologias como maravilhas de um mundo moderno. Membros da chamada Geração do Milênio, criados em um ambiente multimídia e interconectado, eles utilizam as ferramentas digitais de comunicação de um modo natural em seu cotidiano. Com a entrada no mercado de trabalho, essas tecnologias passam a fazer parte também de suas vidas profissionais, otimizando o trabalho em equipe e aumentando sua produtividade. Entretanto, é surpreendente como, em pleno século XXI, a maior parte destes recursos não são aproveitados em sala de aula. Observa-se um total descompasso entre o modo como os alunos pensam e trabalham fora da universidade e os métodos aplicados por seus professores. Ainda hoje, a grande maioria dos docentes, independentemente de seu nível de conhecimento técnico, inibem o uso de ferramentas digitais durante as aulas, solicitando que os alunos desliguem seus celulares e computadores portáteis e eliminando qualquer possibilidade de contato com fontes externas de informação. A utilidade desses equipamentos e tecnologias é subestimada de forma exagerada, ignorando-se o fato de que eles serão peças fundamentais durante a vida profissional dos alunos. Esta pesquisa teve por objetivo investigar o uso de métodos de trabalho colaborativo através do uso das tecnologias do cotidiano dos alunos para verificar o impacto em seu desempenho acadêmico. Para isso, foram realizados quatro experimentos controlados em turmas do curso de graduação em Design da PUC-Rio, onde a aplicação progressiva de ferramentas digitais específicas visaram uma proposta de atualização metodológica das disciplinas projetuais. Através dos experimentos, percebeu-se que os alunos são capazes de integrar as ferramentas colaborativas com facilidade em seu cotidiano acadêmico apresentando um considerável aumento na qualidade de sua produção. Concluiuse, portanto, que a introdução dessas ferramentas de um modo controlado no ambiente de ensino pode fornecer aos alunos subsídios importantes para que eles possam utilizá-las com eficiência em seu futuro profissional.
We are now living in a world where information technologies give us many possibilities for permanent interpersonal connection. Through social networks, collaborative tools and cloud computing, we keep in constant touch with a large amount of data generated by those in our social and professional circles. Today's university students do not see these technologies as wonders of a modern world. As members of the Millennial Generation, raised in an interconnected multimedia environment, they use digital communication tools in a very natural way in their daily lives. As their professional lives begin, those technologies become part of their work toolset, optimizing teamwork and boosting their productivity. However, it is surprising that, in the twenty-first century, most of those resources are not applied in class. There is a complete mismatch between the way students think and work outside the university and the methods used by their tutors. It is still common to find teachers who, regardless of their technological knowledge, inhibit the use of digital tools in class, asking their students to turn off their cellphones and portable computers and eliminating any contact with external sources of information. The usefulness of those tools is greatly underestimated, and teachers ignore the fact that they will be fundamental in the students' professional lives. This study investigated digital collaborative methods, based on the technologies students use every day, in order to verify their impact on the students' academic performance. For this purpose, four controlled experiments were conducted in Design classes at PUC-Rio, where the progressive introduction of digital tools led to a proposal for updating the teaching methodologies of project-based courses. The experiments showed that students are able to integrate collaborative tools into their academic lives with ease, demonstrating a visible improvement in the quality of their work. In conclusion, the controlled introduction of those tools in the academic environment can give students an important foundation for using them efficiently as they enter their professional lives.
3

Bruneau, Julien. "Developing and Testing Pervasive Computing Applications: A Tool-Based Methodology." Phd thesis, Université Sciences et Technologies - Bordeaux I, 2012. http://tel.archives-ouvertes.fr/tel-00767395.

Full text
Abstract:
Despite recent progress, developing a pervasive computing application remains a challenge because of a lack of conceptual frameworks and supporting tools. This challenge involves handling heterogeneous communicating entities, overcoming the complexity of distributed systems technologies, defining the architecture of an application, and encoding it in a program. Moreover, testing pervasive computing applications is problematic because it requires acquiring, testing and interfacing a variety of software and hardware entities. This process can quickly become costly in time and money when the target environment involves many entities. This thesis proposes a tool-based methodology for developing and testing pervasive computing applications. Our methodology first provides the DiaSpec design language. This language allows a taxonomy of entities specific to an application domain to be defined, abstracting over their heterogeneity. It also includes a layer for defining the architecture of an application. Our tool suite provides a compiler that, from DiaSpec descriptions, generates a programming framework guiding the implementation and testing phases. To support the testing phase, we propose a simulation approach and a tool integrated into our tool-based methodology: the DiaSim tool. Our approach uses the testing support generated by DiaSpec to test applications transparently in a simulated physical environment. The simulation of an application is rendered graphically in a 2D visualization tool. We combined DiaSim with a domain-specific language for describing physical phenomena as differential equations, enabling realistic simulations. DiaSim has been used to simulate applications in a variety of application domains. Our simulation approach has also been applied to an avionics system, demonstrating the generality of the approach.
4

Zavala-Aké, J. Miguel. "A high-performance computing tool for partitioned multi-physics applications." Doctoral thesis, Universitat Politècnica de Catalunya, 2018. http://hdl.handle.net/10803/663290.

Full text
Abstract:
The simulation and modelling of complex applications involving the interaction of processes governed by different physical principles is addressed in this thesis. The interaction of a fluid with a deformable body, or the exchange of thermal energy between fluid and solid, are examples of these multi-physics applications. In these two cases, the modelling strategy proposed here combines the solution of separate physical systems to account for the interactions taking place through the entire domain. As a consequence, the simulation process resulting from the use of separate systems considers independent codes to find the solution of each system, while the entire system is reconstructed through an iterative approach combining these solutions. One of the main advantages of this partitioned approach is that each parallel code can use the most appropriate model and algorithm to achieve an accurate solution of the complete physical system. Nevertheless, several challenges must be considered when using this approach. For instance, from a physical point of view, most of the variables involved in the modelling of a multi-physics application must be continuous across the entire domain. From a computational point of view, efficient data transfer between parallel codes is required to model the physical interactions taking place through the entire system. In addition, the simulation of multi-physics applications must be robust and maintain scalability, not only for each parallel code, but also for the coupled problem. The present work describes the development, validation and use of a high-performance computing coupling tool designed for efficiently solving partitioned multi-physics applications. The emphasis has been placed on the development of strategies that make efficient use of large-scale computing architectures, while always keeping the robustness and accuracy of the solutions. The coupling tool developed controls the data transfer between the parallel codes, establishing peer-to-peer communication layouts between the processors, the dynamic localization of the regions where physical interactions take place, and the interpolations required between the different meshes composing a large-scale multi-physics application. In this work, these features are applied to solve two multi-physics applications: contact of deformable bodies, and conjugate heat transfer. The contact problem involves the interaction of two or more solids which may deform. In this work, a parallel algorithm to deal with this problem is described. The continuity of the variables involved in the coupled problem is ensured using a domain decomposition method. The regions of the surface of each body where contact takes place are identified using the localization process implemented in the coupling tool. The results show that the parallel algorithm used here for the solution of contact problems agrees well with the elastic contact theory as well as with results obtained by commercial codes. The conjugate heat transfer problem refers to the thermal interaction between a fluid and a solid. In this case, the coupled process is similar to the contact problem. The results show the capability of the framework developed in this thesis to deal with practical engineering applications. In order to demonstrate the capability of the coupling tool to deal with large-scale applications, a parallel performance study of the partitioned approach is also developed in this thesis.
The study leads to a load balance strategy that allows estimating the optimal performance of a parallel multi-physics application. The parallel performance analysis of a conjugate heat transfer problem shows that the optimal efficiency of this application is well represented by the expressions derived in this study.
La simulación y modelado de aplicaciones complejas que implican la interacción de procesos caracterizados por diferentes principios físicos es abordado en esta tesis. La interacción de un fluido con un sólido deformable o el intercambio de energía térmica entre un fluido y un sólido son ejemplos de aplicaciones multi-física. En estos casos, la estrategia propuesta en esta tesis combina la solución de los sistemas físicos que constituyen una aplicación multi-física para modelar el sistema completo. El proceso de simulación resultante del uso de sistemas físicos separados, considera códigos numéricos independientes para encontrar la solución de cada sistema, mientras que el problema completo es reconstruido a través de una aproximación iterativa combinando estas soluciones. Una de las principales ventajas de esta aproximación es que cada código puede usar los modelos y algoritmos paralelos más apropiados que le permitan encontrar la solución más precisa en el sistema físico completo. A pesar de esto, existen varios obstáculos que deben de ser considerados. Por ejemplo, desde un punto de vista físico, las variables implicadas en el modelado debe de ser continuas a través del dominio completo así como su primera derivada. Desde un punto de vista computacional, la transferencia de datos entre códigos paralelos es necesario para modelar las interacciones físicas que tienen lugar en el sistema completo. Adicionalmente, las simulaciones de aplicaciones multi-física deben de ser robustas y mantener la escalabilidad, no solo de cada código paralelo, sino también del problema acoplado. Esta tesis describe el desarrollo, validación y uso de una herramienta de acople diseñada para resolver eficientemente aplicaciones multi-físicas haciendo uso de aproximaciones particionadas. El énfasis ha sido puesto en el desarrollo de estrategias que hacen un eficiente uso de sistemas de cómputo de altas prestaciones, siempre manteniendo la robustez y precisión de las soluciones. La herramienta de acople desarrollada controla la transferencia de datos entre los códigos paralelos usados en la simulación, la localización de las regiones donde las interacciones tienen lugar, y las posibles interpolaciones requeridas entre las diferentes mallas usadas para modelar un sistema multi-físico. Estas características son usadas en la solución de dos sistemas: contacto entre cuerpos deformables, y en trasferencia de calor conjugada entre fluido y sólido. El problema de contacto implica la interacción de dos o más sólidos que pueden deformarse. En este trabajo, un algoritmo paralelo para hacer frente este problema es descrito. La continuidad de las variables involucradas en este problema acoplado es garantizada por medio del uso de un método de descomposición de dominios. Las regiones de la superficie de cada partición en donde el contacto tiene lugar son identificadas por un proceso de localización el cual es parte de esencial de la herramienta de acople presentada. Los resultados muestran que el algoritmo paralelo usado aquí para la solución de problemas de contacto coincide bien con aquellos resultados reportados en la teoría de contacto elástico, así como también con aquellos obtenidos a través de códigos comerciales. El problema de transferencia de calor conjugada implica intercambio de energía térmica. El estado de este sistema requiere determinar la distribución de temperatura y del flujo de calor a través de la interfaz fluido-sólido. En este caso, el proceso de acople es similar al aplicado al problema de contacto. 
Los resultados muestran la precisión del método desarrollado en esta tesis, así como también la capacidad para hacer frente a problemas relevantes de ingeniería. Finalmente, un estudio relacionado con del rendimiento paralelo de las estrategias de acople mencionadas anteriormente es usado para mostrar la eficiencia del acople desarrollada para resolver aplicaciones representadas por las expresiones derivadas en este estudio.
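The partitioned coupling loop summarized in this abstract, with separate solvers exchanging interface data until the coupled fields match, can be pictured with a short sketch. The following toy Dirichlet-Neumann iteration for a one-dimensional conjugate heat transfer problem is illustrative only: it is not the thesis's coupling tool, and the solver functions, material values and relaxation factor are invented.

```python
# Toy Dirichlet-Neumann coupling loop for conjugate heat transfer (illustrative only).
# Each "solver" stands in for an independent parallel code; in a real partitioned
# tool the exchange below would go through the coupling library, not a function call.

def fluid_solver(t_interface):
    # Hypothetical fluid side: returns the heat flux at the interface for a
    # prescribed wall temperature (simple convective law, made-up numbers).
    h, t_fluid = 50.0, 400.0          # heat transfer coefficient, bulk temperature
    return h * (t_fluid - t_interface)

def solid_solver(q_interface):
    # Hypothetical solid side: returns the interface temperature resulting from
    # the imposed heat flux (1D conduction to a fixed back-side temperature).
    k, thickness, t_back = 15.0, 0.01, 300.0
    return t_back + q_interface * thickness / k

def couple(t_guess=350.0, relax=0.5, tol=1e-6, max_iter=200):
    t = t_guess
    for it in range(max_iter):
        q = fluid_solver(t)            # Dirichlet data -> fluid code
        t_new = solid_solver(q)        # Neumann data  -> solid code
        if abs(t_new - t) < tol:       # interface temperature is continuous
            return t_new, it
        t = (1.0 - relax) * t + relax * t_new   # under-relaxed update
    raise RuntimeError("coupling iteration did not converge")

if __name__ == "__main__":
    t_int, iterations = couple()
    print(f"converged interface temperature: {t_int:.2f} K after {iterations} iterations")
```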
5

Singh, Neeta S. "An automatic code generation tool for partitioned software in distributed computing." [Tampa, Fla.] : University of South Florida, 2005. http://purl.fcla.edu/fcla/etd/SFE0001129.

Full text
6

Cai, Meng. "A plotting tool for Internet based on client/server computing model." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2001. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp05/MQ64076.pdf.

Full text
7

Rezk, Ehab William Aziz. "Matwin: A java tool for computing and experimenting in dynamical systems." CSUSB ScholarWorks, 2007. https://scholarworks.lib.csusb.edu/etd-project/3220.

Full text
8

Ganduri, Rajasekhar. "Network Security Tool for a Novice." Thesis, University of North Texas, 2016. https://digital.library.unt.edu/ark:/67531/metadc862873/.

Full text
Abstract:
Network security is a complex field that is handled by security professionals who need expertise and experience to configure security systems. With the ever increasing size of networks, managing them is becoming a daunting task. What kind of solution can be used to generate effective security configurations by security professionals and non-professionals alike? In this thesis, a web tool is developed to simplify the process of configuring security systems by translating direct human language input into meaningful, working security rules. These human language inputs yield the security rules that the individual wants to implement in their network. The input can be as simple as, "Block Facebook to my son's PC". The tool translates these inputs into specific security rules and installs the translated rules into security equipment such as a virtualized Cisco FWSM network firewall, a Netfilter host-based firewall, and the Snort network intrusion detection system. The tool is implemented and tested in both a traditional network and a cloud environment. One thousand input policies were collected from various users, such as staff from UNT departments and the health sciences, including individuals with a network security background as well as students with a non-computer-science background, to analyze the tool's performance. The tool was tested for its accuracy in generating a security rule (91%) and for the accuracy of the translated rule compared to a standard rule written by security professionals (86%). Overall, the network security tool has shown promise to both experienced and inexperienced people in the network security field by simplifying the provisioning process to produce accurate and effective network security rules.
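The policy-translation step described in the abstract can be pictured with a small sketch. The example below is not the thesis's implementation; the parsing pattern, the host and service tables, and the generated Netfilter/iptables command are simplified assumptions made for illustration.

```python
# Illustrative sketch: translating a simple English policy into a firewall rule.
import re

HOSTS = {"my son's pc": "192.168.1.23"}          # hypothetical device inventory
SERVICES = {"facebook": ["157.240.0.0/16"]}      # hypothetical service-to-CIDR map

def translate(policy: str) -> list[str]:
    # very small grammar: "Block <service> to <host>"
    m = re.match(r"block (?P<svc>\w+) to (?P<host>.+)", policy.strip().lower())
    if not m:
        raise ValueError(f"could not parse policy: {policy!r}")
    src = HOSTS[m.group("host")]
    rules = []
    for cidr in SERVICES[m.group("svc")]:
        # generate a Netfilter rule string for each destination range
        rules.append(f"iptables -A FORWARD -s {src} -d {cidr} -j DROP")
    return rules

print(translate("Block Facebook to my son's PC"))
```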
9

Skarpas, Daniel. "CAD tool emulation for a two-level reconfigurable DSP architecture." Online access for everyone, 2007. http://www.dissertations.wsu.edu/Thesis/Spring2007/D_Skarpas_050407.pdf.

Full text
10

Tanfener, Ozan. "Design and Evaluation of a Microservice Testing Tool for Edge Computing Environments." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-287171.

Full text
Abstract:
Edge computing can provide decentralized computation and storage resources with low latency and high bandwidth. It is a promising infrastructure for hosting services with stringent latency requirements, such as autonomous driving, cloud gaming, and telesurgery, for its customers. Because of the structural complexity associated with edge computing applications, research topics like service placement gain great importance. To provide a realistic and efficient general environment for evaluating service placement solutions, one that can be used to analyze the latency requirements of services at scale, a new testing tool for the mobile edge cloud is designed and implemented in this thesis. The proposed tool is implemented as a cloud-native application and allows applications to be deployed in an edge computing infrastructure that consists of Kubernetes and Istio; it can easily be scaled up to several hundred microservices, and deployment into the edge clusters is automated. With the help of the designed tool, two different microservice placement algorithms are evaluated in an emulated edge computing environment based on Federated Kubernetes. The results show how the performance of the algorithms varies when the parameters of the environment and of the applications instantiated and deployed by the tool are changed. For example, increasing the request rate by 200% can increase the delay by 100% for different algorithms. Moreover, making the mobile network more complex can improve latency performance by up to 20%, depending on the microservice placement algorithm.
Edge computing kan ge decentraliserad beräkning och lagringsresurser med låg latens och hög bandbredd. Det är en lovande infrastruktur för att vara värd för tjänster med strängt prestandakrav, till exempel autonom körning, molnspel och telekirurgi till kunderna. På grund av den strukturella komplexiteten som är associerad med edge computing applikationerna, får forskningsämnen som tjänsteplacering stor betydelse. För att tillhandahålla en realistisk och effektiv allmän miljö för utvärdering av lösningar för tjänsteplacering, designas och implementeras ett nytt testverktyg för mobilt kantmoln i denna avhandling. Det föreslagna verktyget implementeras på molnmässigt sätt som gör det möjligt att distribuera applikationer i en edge computing-infrastruktur som består av Kubernetes och Istio. Med hjälp av det konstruerade verktyget utvärderas två olika placeringsalgoritmer för mikrotjänster i en realistisk edge computing miljö. Resultaten visar att en ökning av förfrågningsgraden 200 % kan öka förseningen med 100 % för olika algoritmer. Dessutom kan komplicering av mobilnätet förbättra latensprestanda upp till 20% beroende på algoritmen för mikroserviceplaceringen.
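To make the notion of a microservice placement algorithm concrete, here is a minimal, hypothetical greedy heuristic that assigns each service to the feasible edge cluster with the lowest user latency. It is not one of the algorithms evaluated in the thesis, and the cluster and service data are invented.

```python
# Toy greedy microservice placement: put each service on the feasible edge cluster
# with the lowest user latency (illustration only).

clusters = {                      # hypothetical clusters: spare CPU and latency to users (ms)
    "edge-a": {"cpu": 4.0, "latency": 5.0},
    "edge-b": {"cpu": 8.0, "latency": 12.0},
    "core":   {"cpu": 32.0, "latency": 40.0},
}
services = [("frontend", 1.0), ("game-logic", 2.5), ("analytics", 6.0)]  # (name, CPU demand)

def place(services, clusters):
    placement = {}
    for name, demand in services:
        candidates = [c for c, r in clusters.items() if r["cpu"] >= demand]
        if not candidates:
            raise RuntimeError(f"no cluster can host {name}")
        best = min(candidates, key=lambda c: clusters[c]["latency"])
        clusters[best]["cpu"] -= demand          # reserve capacity on the chosen cluster
        placement[name] = best
    return placement

print(place(services, clusters))
```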
11

Ullah, Kazi Wali. "Automated Security Compliance Tool for the Cloud." Thesis, Norges teknisk-naturvitenskapelige universitet, Institutt for telematikk, 2012. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-19104.

Full text
Abstract:
Security, especially security compliance, is a major concern that is slowing down the large-scale adoption of cloud computing in the enterprise environment. Business requirements, governmental regulations and trust are among the reasons why enterprises require certain levels of security compliance from cloud providers. So far, this security compliance or auditing information has been generated manually by security specialists. This process involves manual data collection and assessment, which is slow and incurs a high cost. Thus, there is a need for an automated compliance tool to verify and express the compliance level of various cloud providers. Such a tool can reduce human intervention and eventually reduce the cost and time by verifying compliance automatically. The tool will also enable cloud providers to share their security compliance information using a common framework. In turn, the common framework allows clients to compare various cloud providers based on their security needs. With these goals in mind, we have developed an architecture to build an automated security compliance tool for a cloud computing platform. We have also outlined four possible approaches to achieve this automation. These four approaches correspond to four design patterns for collecting data from the cloud system: API, vulnerability scanning, log analysis and manual entry. Finally, we have implemented a proof-of-concept prototype of this automated security compliance tool using the proposed architecture. The prototype implementation is integrated with the OpenStack cloud platform, and the results are exposed to the users of the cloud following the CloudAudit API structure defined by the Cloud Security Alliance.
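The four data-collection approaches listed in the abstract (API, vulnerability scanning, log analysis and manual entry) can be sketched as a simple collector pattern feeding one compliance report. The sketch below uses placeholder collectors only; it does not call real OpenStack or CloudAudit APIs, and the control names are invented.

```python
# Minimal sketch of the data-collection pattern: each collector contributes
# evidence for a compliance control; collector internals are placeholders.

def api_collector():
    # would query the cloud platform's API, e.g. for TLS-enabled endpoints
    return {"control": "encryption-in-transit", "compliant": True}

def scan_collector():
    # would run a vulnerability scanner against tenant-facing hosts
    return {"control": "no-known-critical-cves", "compliant": False}

def log_collector():
    # would parse authentication logs for failed-login thresholds
    return {"control": "brute-force-monitoring", "compliant": True}

def manual_entry(answers):
    # evidence a human auditor enters for controls that cannot be automated
    return {"control": "security-training", "compliant": answers["training_done"]}

def compliance_report():
    findings = [api_collector(), scan_collector(), log_collector(),
                manual_entry({"training_done": True})]
    passed = sum(f["compliant"] for f in findings)
    return {"findings": findings, "score": f"{passed}/{len(findings)} controls compliant"}

print(compliance_report())
```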
12

Johnson, Buxton L. Sr. "HYBRID PARALLELIZATION OF THE NASA GEMINI ELECTROMAGNETIC MODELING TOOL." UKnowledge, 2017. http://uknowledge.uky.edu/ece_etds/99.

Full text
Abstract:
Understanding, predicting, and controlling electromagnetic field interactions on and between complex RF platforms requires high-fidelity computational electromagnetic (CEM) simulation. The primary CEM tool within NASA is GEMINI, an integral-equation-based method-of-moments (MoM) code for frequency-domain electromagnetic modeling. However, GEMINI is currently limited in the size and complexity of problems that can be effectively handled. To extend GEMINI's CEM capabilities beyond those currently available, primary research is devoted to integrating the MFDlib library developed at the University of Kentucky with GEMINI for efficient filling, factorization, and solution of large electromagnetic problems formulated using integral equation methods. A secondary research project involves the hybrid parallelization of GEMINI for efficient speedup of the impedance matrix filling process. This thesis discusses the research, development, and testing of the secondary research project on the High Performance Computing DLX Linux supercomputer cluster. Initial testing of GEMINI's existing MPI parallelization establishes the benchmark for speedup and reveals performance issues subsequently solved by the NASA CEM Lab. The hybrid parallelization combines GEMINI's existing coarse-level MPI parallelization with fine-level OpenMP parallel threading. Simple and nested OpenMP threading are compared. Final testing documents the improvements realized by hybrid parallelization.
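The coarse/fine split behind hybrid MPI-plus-threads parallelization can be sketched in a few lines. The example below uses Python with mpi4py and a thread pool purely to illustrate row-block partitioning of an impedance-matrix fill; it is not GEMINI code, and the fill_entry kernel, matrix size and thread count are placeholders.

```python
# Sketch of hybrid coarse/fine parallel matrix filling: MPI ranks own blocks of
# rows (coarse level), threads fill rows within a block (fine level).
from concurrent.futures import ThreadPoolExecutor
from mpi4py import MPI

N = 1000                       # global matrix dimension (illustrative)

def fill_entry(i, j):
    # stand-in for the method-of-moments interaction integral Z[i, j]
    return 1.0 / (abs(i - j) + 1.0)

def fill_row(i):
    return [fill_entry(i, j) for j in range(N)]

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# coarse level: contiguous row blocks per MPI rank
rows = range(rank * N // size, (rank + 1) * N // size)

# fine level: threads fill the rows of the local block
with ThreadPoolExecutor(max_workers=4) as pool:
    local_block = list(pool.map(fill_row, rows))

print(f"rank {rank}: filled {len(local_block)} of {N} rows")
```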
13

Konda, Niranjan. "Evaluating a Reference Enterprise Application as a Teaching Tool in a Distributed Enterprise Computing Class." The Ohio State University, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=osu1376986294.

Full text
14

Hanus, Tomáš. "Implementace služby poskytující frontu zpráv v technologii cloud computing." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2018. http://www.nusl.cz/ntk/nusl-385955.

Full text
Abstract:
This thesis discusses different ways of communication between components of a distributed system. It describes communication based on message exchange and also considers alternative approaches. It adds details about the various models of message exchange, the various message types, and the various specifications. The messaging tools ActiveMQ, RabbitMQ and Kafka are presented, with special emphasis on the way these tools exchange messages, their scalability options, and other features. A web service is designed according to the described features. Its main purpose is the management and monitoring of a messaging tool of the user's choice, and easy replacement of this tool with another one. The designed application is implemented in the Kotlin language for the selected tool, RabbitMQ. The implemented solution allows a simple exchange of messages through a REST API.
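As a minimal illustration of the queue-based message exchange the thesis builds on, the sketch below publishes and fetches one message through RabbitMQ using the Python pika client (the thesis's own service is written in Kotlin and adds management and monitoring on top); the queue name and message body are arbitrary, and a local broker is assumed.

```python
# Minimal point-to-point message exchange over RabbitMQ using the pika client.
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.queue_declare(queue="tasks", durable=True)   # queue survives broker restarts

# producer side: publish a message to the queue
channel.basic_publish(exchange="", routing_key="tasks", body=b"resize image 42")

# consumer side: fetch one message (a long-running consumer would use basic_consume)
method, properties, body = channel.basic_get(queue="tasks", auto_ack=True)
print("received:", body)

connection.close()
```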
15

Zimmermann, Andreas [Verfasser]. "Context Management and Personalisation : A Tool Suite for Context- and User-Aware Computing / Andreas Zimmermann." Aachen : Shaker, 2007. http://d-nb.info/1164341197/34.

Full text
16

Payne, John C. "Fault tolerant computing testbed : a tool for the analysis of hardware and software fault handling techniques /." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 1998. http://handle.dtic.mil/100.2/ADA359579.

Full text
Abstract:
Thesis (M.S. in Electrical Engineering) Naval Postgraduate School, December 1998.
"December 1998." Thesis advisor(s): Alan A. Ross. Includes bibliographical references (p. 169). Also available online.
17

Ali, Muhammad Usman. "Cloud Computing as a Tool to Secure and Manage Information Flow in Swedish Armed Forces Networks." Thesis, Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-6139.

Full text
Abstract:
In the last few years, cloud computing has created much hype in the IT world. It has provided new strategies to cut down costs and achieve better utilization of resources. At the same time, the cloud infrastructure has long been discussed for its vulnerabilities and security issues. There is a long list of service providers and clients who have implemented different service structures using cloud infrastructure. Despite all these efforts, many organizations, especially those with higher security concerns, have doubts about data privacy and theft protection in the cloud. This thesis aims to encourage the Swedish Armed Forces (SWAF) to move their networks to cloud infrastructures, as this is a technology that will make a huge difference and revolutionize service delivery models in the IT world. Organizations avoiding it would lag behind, but at the same time organizations should adopt the cloud strategy that is most reliable and compatible with their requirements. This document provides an insight into different technologies and tools implemented specifically for monitoring and security in the cloud. Much emphasis is given to virtualization technology, because cloud computing relies heavily on it. The Amazon EC2 cloud is analyzed from a security point of view. An extensive survey has also been conducted to understand market trends and people's perceptions of cloud implementation, security threats, cost savings and the reliability of the different services provided.
18

Larson, Jonathan Karl. "CAD tool emulation for a two-level reconfigurable cell array for digital signal processing." Online access for everyone, 2005. http://www.dissertations.wsu.edu/Thesis/Fall2005/j%5Flarson%5F120805.pdf.

Full text
19

Richardson, W. Ryan. "Using Concept Maps as a Tool for Cross-Language Relevance Determination." Diss., Virginia Tech, 2007. http://hdl.handle.net/10919/28191.

Full text
Abstract:
Concept maps, introduced by Novak, aid learners' understanding. I hypothesize that concept maps also can function as a summary of large documents, e.g., electronic theses and dissertations (ETDs). I have built a system that automatically generates concept maps from English-language ETDs in the computing field. The system also will provide Spanish translations of these concept maps for native Spanish speakers. Using machine translation techniques, my approach leads to concept maps that could allow researchers to discover pertinent dissertations in languages they cannot read, helping them to decide if they want a potentially relevant dissertation translated. I am using a state-of-the-art natural language processing system, called Relex, to extract noun phrases and noun-verb-noun relations from ETDs, and then produce concept maps automatically. I also have incorporated information from the table of contents of ETDs to create novel styles of concept maps. I have conducted five user studies, to evaluate user perceptions about these different map styles. I am using several methods to translate node and link text in concept maps from English to Spanish. Nodes labeled with single words from a given technical area can be translated using wordlists, but phrases in specific technical fields can be difficult to translate. Thus I have amassed a collection of about 580 Spanish-language ETDs from Scirus and two Mexican universities and I am using this corpus to mine phrase translations that I could not find otherwise. The usefulness of the automatically-generated and translated concept maps has been assessed in an experiment at Universidad de las Americas (UDLA) in Puebla, Mexico. This experiment demonstrated that concept maps can augment abstracts (translated using a standard machine translation package) in helping Spanish speaking users find ETDs of interest.
Ph. D.
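The extraction step described in the abstract, with noun phrases as candidate nodes and noun-verb-noun relations as candidate links, can be sketched as follows. spaCy stands in here for the Relex system the author actually used, so the output will differ; the example text is invented and the subject/object handling is reduced to single head tokens.

```python
# Sketch of concept-map extraction: noun phrases -> nodes, noun-verb-noun -> links.
# Assumes the en_core_web_sm model is installed (python -m spacy download en_core_web_sm).
import spacy

nlp = spacy.load("en_core_web_sm")
text = "The system generates concept maps. Concept maps summarize dissertations."
doc = nlp(text)

# candidate nodes: noun chunks
nodes = {chunk.text.lower() for chunk in doc.noun_chunks}

# candidate links: (subject head, verb lemma, object head) triples
links = []
for token in doc:
    if token.pos_ == "VERB":
        subjects = [c for c in token.lefts if c.dep_ == "nsubj"]
        objects = [c for c in token.rights if c.dep_ in ("dobj", "obj")]
        for s in subjects:
            for o in objects:
                links.append((s.text.lower(), token.lemma_, o.text.lower()))

print("nodes:", nodes)
print("links:", links)
```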
20

Rehana, Jinat. "Model Driven Development of Web Application with SPACE Method and Tool-suit." Thesis, Norwegian University of Science and Technology, Department of Telematics, 2010. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-10905.

Full text
Abstract:
Enterprise-level software development using traditional software engineering approaches with third-generation programming languages is becoming a more challenging and cumbersome task with the increased complexity of products, shortened development cycles and heightened expectations of quality. MDD (Model Driven Development) has been counted as an exciting and almost magical development approach in the software industry for several years. The idea behind MDD is the separation of the business logic of a system from its implementation details, expressing the problem domain using models. This separation and modeling of the problem domain simplify the process of system design as well as increase the longevity of products, as new technologies can be adopted easily. With appropriate tool support, MDD shortens the software development life cycle drastically by automating a significant portion of the development steps. MDA (Model Driven Architecture) is a framework launched by OMG (Object Management Group) to support MDD. SPACE is an engineering method for the rapid creation of services, developed at NTNU (Norwegian University of Science and Technology), which follows the MDA framework. Arctis and Ramses are tool suites, also developed at NTNU, to support the SPACE method. Several solutions have been developed on the Arctis tool suite, covering domains like mobile services, embedded systems, home automation, trust management and web services. This thesis presents a case study on the web application domain with Arctis, where the underlying technologies are AJAX (asynchronous JavaScript and XML), the GWT (Google Web Toolkit) framework and Java Servlets. In order to do that, this thesis contributes by building up some reusable building blocks with the Arctis tool suite. The thesis also describes a use case scenario that uses those building blocks, tries to implement the specified system, and evaluates the resulting work.
21

Nikoli, Maria. "SOUNDMAT : A Sonic and Kinesthetic Tool for Architects." Thesis, Malmö universitet, Institutionen för konst, kultur och kommunikation (K3), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:mau:diva-43353.

Full text
Abstract:
This thesis project aims to bring together knowledge and methods from embodied interaction design in order to help architects expand their current repertoire of sketching tools and methods. As argued by Bernard Tschumi (1996) and Juhani Pallasmaa (2012), architecture is a sight-dominated design field, and architects are faced with the paradox of having to design embodied, multisensory experiences with visual means and from a disembodied perspective. Situated in the genre of physical computing, the outcome of this thesis is the prototype of a sensor-based tool for sketching with sound and kinesthesia. The prototype is primarily targeted at architects, but may also be of interest to professionals from other fields who are involved in space-making, such as interaction designers, artists, scenographers, and interior designers, among others. The findings of this thesis are intended to contribute to the field of interaction design, and especially the subfield of embodied interaction. The thesis addresses a problem domain that was first identified when I practiced architecture and was then further understood during this project, namely during the literature review and user research. Building upon three main areas of theory, this project finds its grounding in embodied interaction theory, phenomenological concepts, and a contemporary view of the soma as a united self of mind and body. Fieldwork was a very important part of the process, and methods such as interviews, surveys, and cultural probes were employed to ground the project in user research. Ideation mainly consisted of sketching with embodied methods. Lastly, the user testing of a Wizard-of-Oz prototype was essential in assessing and evaluating the final design.
22

McCreadie, Christopher Andrew. "Widening stakeholder involvement : exploiting interactive 3D visualisation and protocol buffers in geo-computing." Thesis, Abertay University, 2014. https://rke.abertay.ac.uk/en/studentTheses/c26f344f-29d1-4eea-8879-b83862c63143.

Full text
Abstract:
Land use change has an impact on regional sustainability, which can be assessed using social, economic and environmental indicators. Stakeholder engagement tools provide a platform that can demonstrate the possible future impacts of land use change in order to better inform stakeholder groups of the impact of policy changes or plausible climatic variations. To date, some engagement tools are difficult to use or understand and lack user interaction, whilst other tools demonstrate model environments with a tightly coupled user interface, resulting in poor performance. The research and development described herein relates to the development and testing of a visualisation engine for rendering the output of an Agent Based Model (ABM) as a 3D Virtual Environment via a loosely-coupled, data-driven communications protocol called Protocol Buffers. The tool, named the Rural Sustainability Visualisation Tool (R.S.V.T), is primarily aimed at enhancing non-expert knowledge and understanding of the effects of land use change, driven by farmer decision making, on the sustainability of a region. Communication protocols are evaluated, and Protocol Buffers, a binary-based communications protocol, is selected, based on the speed of object serialization and data transfer, to pass messages from the ABM to the 3D Virtual Environment. Early comparative testing of R.S.V.T and its 2D counterpart RepastS shows that R.S.V.T and its loosely-coupled approach offer an increase in performance when rendering land use scenes. The flexibility of Protocol Buffers and MongoDB is also shown to have positive performance implications for storing and running loosely-coupled model simulations. A 3D graphics Application Programming Interface (API), commonly used in the development of computer games technology, is selected to develop the Virtual Environment. Multiple visualisation methods, designed to enhance stakeholder engagement and understanding, are developed and tested to determine their suitability in terms of both user preference and information retrieval. The application of a prototype is demonstrated using a case study based in the Lunan catchment in Scotland, which has water quality and biodiversity issues due to intense agriculture. The region is modelled using three scenario storylines that broadly describe plausible futures: Business as Might Be Usual (BAMBU), the Growth Applied Strategy (GRAS) and the Sustainable European Development Goal (SEDG). The performance of the tool is assessed, and it is found that R.S.V.T can run faster than its 2D equivalent when loosely coupled with a 3D Virtual Environment. The 3D Virtual Environment and its associated visualisation methods are assessed using non-expert stakeholder groups, and it is shown that 3D ABM output is generally preferred to 2D ABM output. Insights are also gained into the most appropriate visualisation techniques for agricultural landscapes. Finally, the benefit of taking a loosely-coupled approach to the visualisation of model data is demonstrated through the performance of Protocol Buffers during testing, showing that it is capable of transferring large amounts of model data to a bespoke visual front-end.
23

Jaoudi, Yassine. "Evaluating Online Learning Anomaly Detection on Intel Neuromorphic Chip and Memristor Characterization Tool." University of Dayton / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1628082991706349.

Full text
24

Huang, Kevin. "Exploring In-Home Monitoring of Rehabilitation and Creating an Authoring Tool for Physical Therapists." Research Showcase @ CMU, 2015. http://repository.cmu.edu/dissertations/668.

Full text
Abstract:
Physiotherapy is a key part of treatment for neurological and musculoskeletal disorders, which affect millions in the U.S. each year. Physical therapy treatments typically consist of an initial diagnostic session during which patients' impairments are assessed and exercises are prescribed to improve the impaired functions. As part of the treatment program, exercises are often assigned to be performed at home daily. Patients return to the clinic weekly or biweekly for check-up visits during which the physical therapist reassesses their condition and makes further treatment decisions, including readjusting the exercise prescriptions. Most physical therapists work in clinics or hospitals. When patients perform their exercises at home, physical therapists cannot supervise them and lack quantitative exercise data reflecting the patients' exercise compliance and performance. Without this information, it is difficult for physical therapists to make informed decisions or treatment adjustments. To make informed decisions, physical therapists need to know how often patients exercise, the duration and/or repetitions of each session, exercise metrics such as the average velocities and ranges of motion for each exercise, patients' symptom levels (e.g. pain or dizziness) before and after exercise, and what mistakes patients make. In this thesis, I evaluate and work towards a solution to this problem. The growing ubiquity of mobile and wearable technology makes possible the development of "virtual rehabilitation assistants." Using motion sensors such as accelerometers and gyroscopes that are embedded in a wearable device, the "assistant" can mediate between patients at home and physical therapists in the clinic. Its functions are to:
- use motion sensors to record home exercise metrics for compliance and performance, and report these metrics to physical therapists in real time or periodically;
- allow physical therapists and patients to quantify and see progress on a fine-grained level;
- record symptom levels to further help physical therapists gauge the effectiveness of exercise prescriptions;
- offer real-time mistake recognition and feedback to the patients during exercises.
One contribution of this thesis is an evaluation of the feasibility of this idea in real home settings. Because there has been little research on wearable virtual assistants in patient homes, there are many unanswered questions regarding their use and usefulness:
Q1. What patient in-home data could wearable virtual assistants gather to support physical therapy treatments?
Q2. Can patient data gathered by virtual assistants be useful to physical therapists?
Q3. How is this wearable in-home technology received by patients?
I sought to answer these questions by implementing and deploying a prototype called "SenseCap." SenseCap is a small mobile device worn on a ball cap that monitors patients' exercise movements and queries them about their symptoms. A technology probe study showed that the virtual assistant could gather important compliance, performance, and symptom data to assist physical therapists' decision-making, and that this technology would be feasible and acceptable for in-home use by patients. Another contribution of this thesis is the development of a tool to allow physical therapists to create and customize virtual assistants. With current technology, virtual assistants require engineering and programming efforts to design, implement, configure and deploy them. Because most physical therapists do not have access to an engineering team, they and their patients would be unable to benefit from this technology. With the goal of making virtual assistants accessible to any physical therapist, I explored the following research questions:
Q4. Would a user-friendly rule-specification interface make it easy for physical therapists to specify correct and incorrect exercise movements directly to a computer? What are the limitations of this method of specifying exercise rules?
Q5. Is it possible to create a CAD-type authoring tool, based on a usable interface, that physical therapists could use to create their own customized virtual assistant for monitoring and coaching patients? What are the implementation details of such a system and the resulting virtual assistant?
Q6. What preferences do PTs have regarding the delivery of coaching feedback for patients?
Q7. What is the recognition accuracy of a virtual rehabilitation assistant created by this tool?
This dissertation research aims to improve our understanding of the barriers to rehabilitation that occur because of the invisibility of home exercise behavior, to lower these barriers by making it possible for patients to use a widely available and easily used wearable device that coaches and monitors them while they perform their exercises, and to improve the ability of physical therapists to create an exercise regime for their patients and to learn what patients have done to perform these exercises. In doing so, treatment should be better suited to each patient and more successful.
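As an illustration of the kind of exercise metric such a wearable assistant could log, the sketch below integrates fabricated gyroscope samples into a range-of-motion estimate and a repetition count. It is not the SenseCap implementation; the sample rate, data and counting rule are assumptions made for illustration.

```python
# Toy exercise metrics from one gyroscope axis: integrate angular velocity to an
# angle estimate, track range of motion, and count repetitions from direction changes.

SAMPLE_RATE = 50.0                     # Hz, assumed sensor rate

def exercise_metrics(gyro_z_dps):
    """gyro_z_dps: angular velocity samples (degrees/second) around one axis."""
    dt = 1.0 / SAMPLE_RATE
    angle, min_a, max_a, direction_changes, prev_sign = 0.0, 0.0, 0.0, 0, 0
    for omega in gyro_z_dps:
        angle += omega * dt                       # integrate to an angle estimate
        min_a, max_a = min(min_a, angle), max(max_a, angle)
        sign = (omega > 0) - (omega < 0)
        if sign and prev_sign and sign != prev_sign:
            direction_changes += 1                # each reversal is half a repetition
        prev_sign = sign or prev_sign
    return {"range_of_motion_deg": max_a - min_a,
            "repetitions": (direction_changes + 1) // 2,
            "avg_speed_dps": sum(abs(w) for w in gyro_z_dps) / len(gyro_z_dps)}

samples = ([60.0] * 25 + [-60.0] * 25) * 2        # two simulated head turns (fabricated data)
print(exercise_metrics(samples))
```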
25

Roumen, Geert Jacob. "Arduino Action : Arduino Action is a collaborative tool for understanding and creating with physical computing in high school." Thesis, Umeå universitet, Designhögskolan vid Umeå universitet, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-173392.

Full text
Abstract:
Within the field of education, computers and micro-controllers like Arduino are increasingly being used to teach students relevant skills, attitudes and knowledge around technology. Education around these tools is often set in group contexts, and collaboration is often considered an important part of the learning; however, much of the currently available software is still designed around a laptop programming paradigm which in itself tends to restrict collaboration, cementing rather than encouraging shifting of roles and activities among group members. This thesis explores how we could design tools that better invite collaborative interactions in these settings, in particular how mobile software tools could allow for sketching and iterating more fluidly. Based on interviews with experts, observations in the classroom setting, reflection with teachers and a workshop with Arduino Education, this thesis sketches a future vision that re-designs the tools to be more collaborative and fluid, so that reflection, action and reaction cycles could be smaller and allow for more exploration and learning.
26

Sugishita, Satomi H. "System Design Quality and Efficiency of System Analysts: An Automated CASE Tool Versus a Manual Method." UNF Digital Commons, 1992. http://digitalcommons.unf.edu/etd/75.

Full text
Abstract:
The purpose of this research study is to find out whether CASE tools help to increase the software design quality and efficiency of system analysts and designers when they modify a system design document. Results of the experimental data analysis show that only the experience level of the subjects had an effect on the quality of their work. The results indicate that the design method, whether CASE tool or manual, has no significant effect on the quality of the modification task or on the efficiency of system analysts and designers.
27

Chaimov, Nicholas. "Insightful Performance Analysis of Many-Task Runtimes through Tool-Runtime Integration." Thesis, University of Oregon, 2017. http://hdl.handle.net/1794/22731.

Full text
Abstract:
Future supercomputers will require application developers to expose much more parallelism than current applications expose. In order to assist application developers in structuring their applications such that this is possible, new programming models and libraries are emerging, the many-task runtimes, to allow for the expression of orders of magnitude more parallelism than currently existing models. This dissertation describes the challenges that these emerging many-task runtimes will place on performance analysis, and proposes deep integration between runtimes and performance tools as a means of producing correct, insightful, and actionable performance results. I show how tool-runtime integration can be used to aid programmer understanding of performance characteristics and to provide online performance feedback to the runtime for Unified Parallel C (UPC), High Performance ParalleX (HPX), Apache Spark, the Open Community Runtime, and the OpenMP runtime.
28

Gomes, Berto de Tácio Pereira. "AGST (Autonomic Grid Simulation Tool): uma ferramenta para modelagem, simulação e avaliação de abordagens autonômicas para grades de computadores." Universidade Federal do Maranhão, 2012. http://tedebc.ufma.br:8080/jspui/handle/tede/483.

Full text
Abstract:
Computer grids are characterized by the high dynamism of their execution environment, heterogeneity of resources and tasks, and high scalability. These features make tasks such as configuration, maintenance and failure recovery quite challenging, and it is becoming increasingly difficult to perform them with human agents alone. The term autonomic computing denotes computer systems capable of changing their behavior dynamically in response to changes in the execution environment. To achieve this, the software is generally organized following the MAPE-K (Monitoring, Analysis, Planning, Execution and Knowledge) model, in which autonomic managers carry out the sensing of the execution environment, context analysis, and the planning and execution of dynamic reconfiguration actions, based on shared knowledge about the controlled system. Several recent research efforts seek to apply autonomic computing techniques to grid computing, providing more autonomy and reducing the need for human intervention in the maintenance and management of these computing environments, thus creating the concept of an autonomic grid. This thesis presents a new simulation tool for assisting the development and evaluation of autonomic grid approaches, called AGST (Autonomic Grid Simulation Tool). The major contribution of this tool is the definition and implementation of a simulation model based on the MAPE-K autonomic management cycle, which can be used to simulate the monitoring, analysis, planning and execution functions, allowing the simulation of an autonomic computing grid. AGST also provides support for parametric and compositional dynamic adaptations of managed elements. This work also presents two case studies in which the proposed tool was successfully used for the modeling, simulation and evaluation of approaches for computing grids.
Grades de computadores são caracterizadas pelo alto dinamismo de seu ambiente de execução, alta heterogeneidade de recursos e tarefas, e por requererem grande escalabilidade. Essas características tornam tarefas como configuração, manutenção e recuperação em caso de falhas bastante desafiadoras e cada vez mais difíceis de serem realizadas exclusivamente por agentes humanos. O termo Computação Autonômica denota sistemas computacionais capazes de mudar seu comportamento dinamicamente em resposta a variações do ambiente de execução. Para isso, o software é geralmente organizado seguindo-se a arquitetura MAPE-K (Monitoring, Analysis, Planning, Execution and Knowledge), na qual gerentes autonômicos realizam as atividades de monitoramento do ambiente de execução, análise de informações de contexto, planejamento e execução de ações de reconfiguração dinâmica, compartilhando algum conhecimento sobre o sistema controlado. Diversos esforços de pesquisa recentes buscam aplicar técnicas de computação autonômica à computação em grade, provendo-se maior autonomia e reduzindo-se a necessidade de intervenção humana na manutenção e gerenciamento destes ambientes computacionais, criando assim o conceito de grade autonômica. Esta dissertação apresenta uma nova ferramenta de simulação que tem por objetivo auxiliar o desenvolvimento e avaliação de abordagens autonômicas para grades de computadores denominada AGST (Autonomic Grid Simulation Tool). A principal contribuição dessa ferramenta é a definição e implementação de um modelo de simulação baseado na arquitetura MAPE-K, que pode ser utilizado para simular todas as funções de monitoramento, analise e planejamento, controle e execução, permitindo assim a simulação de grades autonômicas. AGST provê ainda o suporte à execução de adaptações paramétricas e composicionais dos elementos gerenciados. Este trabalho também apresenta dois estudos de caso nos quais a ferramenta proposta foi utilizada com sucesso no processo de modelagem, simulação e avaliação de abordagens para grades computacionais.
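The MAPE-K cycle that AGST's simulation model is built around can be outlined as a simple control loop: Monitor, Analyze, Plan and Execute sharing a Knowledge base. The skeleton below is illustrative only; the grid "sensor", the analysis rule and the reconfiguration action are placeholders, not AGST internals.

```python
# Skeleton of a MAPE-K autonomic management loop (illustrative only).
import random

knowledge = {"cpu_threshold": 0.8, "history": []}        # shared Knowledge (K)

def monitor():
    return {"cpu_load": random.random()}                  # sense the managed grid node

def analyze(symptoms):
    knowledge["history"].append(symptoms)
    return symptoms["cpu_load"] > knowledge["cpu_threshold"]   # overload detected?

def plan(overloaded):
    return ["migrate_task_to_idle_node"] if overloaded else []

def execute(actions):
    for action in actions:
        print("executing reconfiguration:", action)       # would invoke the effector here

def mape_k_cycle(iterations=5):
    for _ in range(iterations):
        symptoms = monitor()
        actions = plan(analyze(symptoms))
        execute(actions)

mape_k_cycle()
```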
29

Palleis, Henri [Verfasser], and Heinrich [Akademischer Betreuer] Hußmann. "The tool space : designing indirect touch input techniques for personal multi-surface computing devices / Henri Palleis ; Betreuer: Heinrich Hußmann." München : Universitätsbibliothek der Ludwig-Maximilians-Universität, 2017. http://d-nb.info/1137466723/34.

Full text
30

Summers, David C. "Implementation of a fault tolerant computing testbed a tool for the analysis of hardware and software fault handling techniques /." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2000. http://handle.dtic.mil/100.2/ADA380203.

Full text
Abstract:
Thesis (M.S. in Electrical Engineering) Naval Postgraduate School, June 2000.
Thesis advisor(s): Ross, Alan A.; Loomis, Herschel H. "June 2000." Includes bibliographical references (p. 163-165). Also available in print.
31

Andrex, D. L. "Upgrade human-machine interface, provide additional analysis tool, and upgrade and migrate scheduling CPCI in existing major computing system." Master's thesis, Virginia Tech, 1996. http://hdl.handle.net/10919/42735.

Full text
Abstract:

This project was inspired by an ENG 5004 session that explored how humans process information coupled with a complaint in my workplace about how difficult it was to analyze our management data. The workplace problem lay in the technology in use: character based terminals presenting data in tabular format regarding schedule data for work on maps. This data tends to be graphically and geographically related. and more easily processed visually as symbols by humans.

The challenge in this project is that it attempts to engineer a business process and implement it in software (in an existing and operating system). The conceptual solution is to provide a graphical user interface (GUI) which presents schedule elements graphically (GANTT, PERT, and Resource Use Charts) in the visual paradigm with which managers are familiar. Further. the scheduled work is geographically based. so a graphical device that shows where a job is located is useful, especially when adjacent jobs that contended for data at their borders are also shown. And finally, given the graphic tools for reporting and analysis, the capabilities to use these tools to create and implement schedules would provide managers with greatly improved efficiency.

The conceptual solution indicates an evolution in technology for this customer: a move from mainframe-driven operations and character-based displays to more distributed processing and graphically based displays. The capability required is a small subset of the existing system, which is not to be disturbed during integration and installation.

The solution to be implemented is to migrate the needed functions to a PC-based terminal running a graphical user interface. The desired applications are hosted locally on the PC, which is connected to the mainframe through existing networks. An application on the PC provides the interface to the mainframe for data extraction and, later, a data interface. Scheduling, database, and geographic information system (GIS) applications are resident on the PC and are integrated to support customer use. The PC is then the Integrated Management Workstation (IMWS). The interface and database elements are essentially invisible to the manager. It is the manager's job to strategize and implement work plans, not worry about the inner workings of the computer system. The scheduling and GIS applications are presented to the manager, who interacts, analyzes, and decides. The manager is the last and decisive element in the system, and uses the new capabilities to help manage the work. This project defines the problem, provides the conceptual solution, and provides the engineering management plans and system requirements to implement the solution. This project does not build anything and no code is written. These tasks are to be accomplished by the team that implements this project according to the guidance and stipulations contained in the project documents.


Master of Science
APA, Harvard, Vancouver, ISO, and other styles
32

Haen, Christophe. "Phronesis, a diagnosis and recovery tool for system administrators." Phd thesis, Université Blaise Pascal - Clermont-Ferrand II, 2013. http://tel.archives-ouvertes.fr/tel-00950700.

Full text
Abstract:
The LHCb online system relies on a large and heterogeneous IT infrastructure made from thousands of servers on which many different applications are running. They run a great variety of tasks: critical ones such as data taking and secondary ones like web servers. The administration of such a system, and making sure it is working properly, represents a very important workload for the small expert-operator team. Research has been performed to try to automate (some) system administration tasks, starting in 2001 when IBM defined the so-called "self objectives" supposed to lead to "autonomic computing". In this context, we present a framework that makes use of artificial intelligence and machine learning to monitor and diagnose Linux-based systems and their interaction with software, at a low level and in a non-intrusive way. Moreover, the shared-experience approach we use, coupled with an "object oriented paradigm" architecture, greatly increases our learning speed and highlights relations between problems.
APA, Harvard, Vancouver, ISO, and other styles
33

Hoffman, John Jared. "PPerfGrid: A Grid Services-Based Tool for the Exchange of Heterogeneous Parallel Performance Data." PDXScholar, 2004. https://pdxscholar.library.pdx.edu/open_access_etds/2664.

Full text
Abstract:
This thesis details the approach taken in developing PPerfGrid. Section 2 discusses other research related to this project. Section 3 provides general background on the technologies utilized in PPerfGrid, focusing on the components that make up the Grid services architecture. Section 4 provides a description of the architecture of PPerfGrid. Section 5 details the implementation of PPerfGrid. Section 6 presents tests designed to measure the overhead and scalability of the PPerfGrid application. Section 7 suggests future work, and Section 8 concludes the thesis.
APA, Harvard, Vancouver, ISO, and other styles
34

Mahanga, Mwaka. "Unknown Exception Handling Tool Using Humans as Agents." UNF Digital Commons, 2015. http://digitalcommons.unf.edu/etd/563.

Full text
Abstract:
In a typical workflow process, exceptions are the norm. Exceptions are defined as deviations from the normal sequence of activities and events. Exceptions can be divided into two broad categories: known exceptions (i.e., expected and predefined deviations) and unknown exceptions (i.e., unexpected and undefined deviations). Business Process Execution Language (BPEL) has become the de facto standard for executing business workflows with the use of web services. BPEL includes exception handling methods that are sufficient for known exception scenarios. Depending on the exception and the specifics of the exception handling tools, processes may either halt or move to completion. Instances of processes that are halted or left incomplete due to unhandled exceptions affect the performance of the workflow process, as they increase resource utilization and process completion time. However, designing efficient process handlers to avoid the issue of unhandled exceptions is not a simple task. This thesis provides a tool that handles unknown exceptions by involving human activities in the exception handling provisions, using the BPEL4People specification. BPEL4People, an extension of BPEL, offers the ability to specify human activities within BPEL processes. The approach considered in this thesis involves humans in exception handling by providing an alternate sub-process within a given business process. A prototype application has been developed implementing the tool that handles unknown exceptions. The prototype application monitors the progress of an automated workflow process and permits human involvement to reroute the course of a workflow process when an unknown exception occurs. The utility of the prototype and the tool is demonstrated using the Scenario Walkthrough and Inspection Methods (SWIMs). We demonstrate the utility of the tool through loan application process scenarios, and offer a walkthrough of the system using example process instances with known and unknown exceptions, as well as a claims analysis of process instance results.
APA, Harvard, Vancouver, ISO, and other styles
35

Kim, Hyungsin. "The ClockMe system: computer-assisted screening tool for dementia." Diss., Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/47516.

Full text
Abstract:
Due to the fast-growing senior population, age-related cognitive impairments, including Alzheimer's disease, are becoming among the most common diseases in the United States. Currently, prevention through delay is considered the best way to tackle Alzheimer's disease and related dementia, as there is no known cure for those diseases. Early detection is crucial, in that screening individuals with Mild Cognitive Impairment may delay its onset and progression. For my dissertation work, I investigate how computing technologies can help medical practitioners detect and monitor cognitive impairment due to dementia, and I develop a computerized sketch-based screening tool. In this dissertation, I present the design, implementation, and evaluation of the ClockMe System, a computerized Clock Drawing Test. The traditional Clock Drawing Test (CDT) is a rapid and reliable instrument for the early detection of cognitive dysfunction. Neurologists often notice missing or extra numbers in the clock drawings of people with cognitive impairments and use scoring criteria to make a diagnosis and treatment plan. The ClockMe System includes two different applications: (1) the ClockReader for patients who take the Clock Drawing Test and (2) the ClockAnalyzer for clinicians who use the CDT results to make a diagnosis or to monitor patients. The contributions of this research are (1) the creation of a computerized screening tool to help clinicians identify cognitive impairment through a more accessible and quick-and-easy screening process; (2) the delivery of computer-collected novel behavioral data, which may offer new insights and a new understanding of a patient's cognition; (3) an in-depth understanding of different stakeholders and the identification of their common user needs and desires within a complicated healthcare workflow system; and (4) the triangulation of multiple data collection methods such as ethnographic observations, interviews, focus group meetings, and quantitative data from a user survey in a real-world deployment study.
APA, Harvard, Vancouver, ISO, and other styles
36

Al-Sammarraie, Mareh Fakhir. "An Empirical Investigation of Collaborative Web Search Tool on Novice's Query Behavior." UNF Digital Commons, 2017. http://digitalcommons.unf.edu/etd/764.

Full text
Abstract:
In the past decade, research efforts dedicated to studying the process of collaborative web search have been on the rise. Yet, a limited number of studies have examined the impact of collaborative information search processes on novices' query behaviors. Studying and analyzing the factors that influence web search behaviors, specifically users' patterns of queries when using collaborative search systems, can help with making query suggestions for group users. Improvements in user query behaviors and system query suggestions help in reducing search time and increasing query success rates for novices. This thesis investigates the influence of collaboration between experts and novices, as well as the use of a collaborative web search tool, on novices' query behavior. We used SearchTeam as our collaborative search tool. This empirical study involves four collaborative team conditions: SearchTeam and expert-novice team, SearchTeam and novice-novice team, traditional and expert-novice team, and traditional and novice-novice team. We analyzed participants' query behavior in two dimensions: quantitatively (e.g., the query success rate) and qualitatively (e.g., the query reformulation patterns). The findings of this study reveal that the successful query rate is higher in expert-novice collaborative teams that used the collaborative search tool. Participants in expert-novice collaborative teams that used the collaborative search tool required less time to finalize all tasks compared to expert-novice collaborative teams that used the traditional search tools. Self-issued queries and chat logs were the major sources of the terms used by novice participants in expert-novice collaborative teams working with the collaborative search tool. Novices in expert-novice pairs that used the collaborative search tool employed New and Specialization more often as query reformulation patterns. The results of this study contribute to the literature by providing a detailed investigation of the influence of utilizing a collaborative search tool (SearchTeam) in the context of software troubleshooting and development. This study highlights the possible collaborative information seeking (CIS) activities that may occur among software development interns and their mentors. Furthermore, our study reveals that there are specific features, such as awareness and built-in instant messaging (IM), offered by SearchTeam that can promote CIS activities among participants and help increase novices' query success rates. Finally, we believe the use of CIS tools, designed to support collaborative search actions in big software development companies, has the potential to improve novices' overall query behavior and search strategies.
APA, Harvard, Vancouver, ISO, and other styles
37

Soni, Neha. "An Empirical Performance Analysis Of IaaS Clouds With CloudStone Web 2.0 Benchmarking Tool." UNF Digital Commons, 2015. http://digitalcommons.unf.edu/etd/583.

Full text
Abstract:
Web 2.0 applications have become ubiquitous over the past few years because they provide useful features such as a rich, responsive graphical user interface that supports interactive and dynamic content. Social networking websites, blogs, auctions, online banking, online shopping and video sharing websites are noteworthy examples of Web 2.0 applications. The market for public cloud service providers is growing rapidly, and cloud providers offer an ever-growing list of services. As a result, developers and researchers find it challenging when deciding which public cloud service to use for deploying, experimenting or testing Web 2.0 applications. This study compares the scalability and performance of a social-events calendar application on two Infrastructure as a Service (IaaS) cloud services – Amazon EC2 and HP Cloud. This study captures and compares metrics on three different instance configurations for each cloud service such as the number of concurrent users (load), as well as response time and throughput (performance). Additionally, the total price of the three different instance configurations for each cloud service is calculated and compared. This comparison of the scalability, performance and price metrics provides developers and researchers with an insight into the scalability and performance characteristics of the three instance configurations for each cloud service, which simplifies the process of determining which cloud service and instance configuration to use for deploying their Web 2.0 applications. This study uses CloudStone – an open-source, three-tier web application benchmarking tool that simulates Web 2.0 application activities – as a realistic workload generator and to capture the intended metrics. The comparison of the collected metrics indicate that all of the tested Amazon EC2 instance configurations provide better scalability and lower latency at a lower cost than the respective HP Cloud instance configurations; however, the tested HP Cloud instance configurations provide a greater storage capacity than the Amazon EC2 instance configurations, which is an important consideration for data-intensive Web 2.0 applications.
APA, Harvard, Vancouver, ISO, and other styles
38

Muthiah, Karthika Ms. "Performance Evaluation of Hadoop based Big Data Applications with HiBench Benchmarking tool on IaaS Cloud Platforms." UNF Digital Commons, 2017. https://digitalcommons.unf.edu/etd/771.

Full text
Abstract:
Cloud computing is a computing paradigm in which large numbers of devices are connected through networks that provide a dynamically scalable infrastructure for applications, data and storage. Currently, many businesses, from small-scale operations to big companies and industries, are changing their operations to utilize cloud services, because cloud platforms can increase a company's growth through process efficiency and reduction in information technology spending [Coles16]. Companies are relying on cloud platforms like Amazon Web Services, Google Compute Engine, and Microsoft Azure for their business development. Due to the emergence of new technologies, devices, and communications, the amount of data produced is growing rapidly every day. Big data refers to collections of large datasets, typically hundreds of gigabytes, terabytes or petabytes. Storing big data and analyzing this huge volume of data are a great challenge for companies and new businesses to handle, which is a primary focus of this paper. This research was conducted on Amazon's Elastic Compute Cloud (EC2) and Microsoft Azure platforms using the HiBench Hadoop Big Data Benchmark suite [HiBench16]. Processing huge volumes of data is a tedious task that is normally handled through traditional database servers. In contrast, Hadoop is a powerful framework used to handle applications with big data requirements efficiently, by using the MapReduce algorithm to run them on systems with many commodity hardware nodes. Hadoop's distributed file system facilitates rapid storage and data transfer rates of big data among the nodes and remains operational even when a node failure has occurred in a cluster. HiBench is a big data benchmarking tool that is used for evaluating the performance of big data applications whose data are handled and controlled by the Hadoop framework cluster. A Hadoop cluster environment was set up and evaluated on the two cloud platforms. A quantitative comparison was performed on Amazon EC2 and Microsoft Azure, along with a study of their pricing models. Measures are suggested for future studies and research.
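To make the MapReduce idea concrete, here is a minimal single-process Python sketch of the map, shuffle and reduce phases of a word count (purely illustrative; it does not use Hadoop's Java API or distributed execution, and the sample inputs are made up):

from collections import defaultdict
# Single-process word count following the map -> shuffle -> reduce phases.
def map_phase(split):
    # Emit (word, 1) pairs for every word in an input split.
    for word in split.split():
        yield word.lower(), 1
def shuffle(pairs):
    # Group intermediate values by key, as the framework does between phases.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups
def reduce_phase(groups):
    # Sum the counts emitted for each word.
    return {word: sum(counts) for word, counts in groups.items()}
splits = ["big data on the cloud", "data on many nodes"]
pairs = [pair for split in splits for pair in map_phase(split)]
print(reduce_phase(shuffle(pairs)))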
APA, Harvard, Vancouver, ISO, and other styles
39

Zuberovic, Aida. "Surface Modified Capillaries in Capillary Electrophoresis Coupled to Mass Spectrometry : Method Development and Exploration of the Potential of Capillary Electrophoresis as a Proteomic Tool." Doctoral thesis, Uppsala universitet, Analytisk kemi, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-9554.

Full text
Abstract:
The increased knowledge about the complexity of physiological processes increases the demands on the analytical techniques employed to explore them. A comprehensive analysis of the entire sample content is today the most common approach to investigating the molecular interplay behind a physiological deviation. For this purpose, a method is required that offers a number of important properties, such as speed and simplicity, high resolution and sensitivity, minimal sample volume requirements, cost efficiency and robustness, possibility of automation, high throughput and a wide application range. Capillary electrophoresis (CE) coupled to mass spectrometry (MS) has great potential and fulfils many of these criteria. However, further developments and improvements of these techniques and their combination are required to meet the challenges of complex biological samples. Protein analysis using CE is a challenging task due to protein adsorption to the negatively charged fused-silica capillary wall. This is especially pronounced with increased basicity and size of proteins and peptides. In this thesis, the adsorption problem was addressed by using an in-house developed, physically adsorbed polyamine coating named PolyE-323. The coating procedure is fast and simple and generates a coating that is stable over a wide pH range, 2-11. By coupling PolyE-323-modified capillaries to MS, either using electrospray ionisation (ESI) or matrix-assisted laser desorption/ionisation (MALDI), successful analyses of peptides, proteins and complex samples, such as protein digests and crude human body fluids, were obtained. The possibilities of using CE-MALDI-MS/MS as a proteomic tool, combined with a proper sample preparation, are further demonstrated by applying high-abundant protein depletion in combination with a peptide derivatisation step or isoelectric focusing (IEF). These approaches were applied in profiling the proteomes of human cerebrospinal fluid (CSF) and human follicular fluid (hFF), respectively. Finally, a multiplexed quantitative proteomic analysis was performed on a set of ventricular cerebrospinal fluid (vCSF) samples from a patient with traumatic brain injury (TBI) to follow relative changes in protein patterns during the recovery process. The results presented in this thesis confirm the potential of CE, in combination with MS, as a valuable choice in the analysis of complex biological samples and clinical applications.
APA, Harvard, Vancouver, ISO, and other styles
40

Silva, Leonardo Marcus Ribeiro da. "Projeto e desenvolvimento de uma ferramenta de baixa intrusão para administração e gerência de aglomerados de computadores." Universidade de São Paulo, 2006. http://www.teses.usp.br/teses/disponiveis/18/18133/tde-09082016-152543/.

Full text
Abstract:
This work presents a tool named FAGAC for the administration and management of computer clusters through a web interface. The tool is designed to be minimally intrusive in the environment, that is, to consume few computational resources so as not to delay the services and processes running on the system. It also includes functions that provide the client or system administrator with information about memory and CPU usage, the load on each computer, the traffic generated on the network, disk space, hardware information and system configuration. The tool was validated through comparative experiments covering the five main functions common to FAGAC and Ganglia, which showed better results for FAGAC in all five functions and that FAGAC is less intrusive than Ganglia.
APA, Harvard, Vancouver, ISO, and other styles
41

Conceição, Calebe Micael de Oliveira. "Uma arquitetura de co-processador para simulação de algoritmos quânticos em FPGA." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2013. http://hdl.handle.net/10183/81297.

Full text
Abstract:
Quantum simulators have played an important role in the study and development of quantum computing over the years. Simulating quantum algorithms on classical computers is computationally hard, mainly due to the parallel nature of quantum systems. To speed up these simulations, some works have proposed using programmable parallel hardware such as FPGAs, which considerably shortens execution time. However, this approach has three main problems: low scalability, since it only transfers the complexity from the time domain to the space domain; the need for re-synthesis on every change to the algorithm; and the extra effort of designing the RTL code for the simulation. To address these problems, an architecture for a SIMD co-processor is proposed, whose quantum-gate operations are based on the Network of Butterflies model. This eliminates the need for re-synthesis after small changes to the simulated quantum algorithm and removes the influence of one of the factors that lead to exponential growth in FPGA resource consumption. Additionally, a tool was developed to automatically generate the synthesizable RTL code of the co-processor, thus reducing the extra design effort.
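As a rough, classical illustration of why gate application maps naturally onto butterfly-style operations (a generic sketch, not the proposed co-processor, its RTL, or the exact Network of Butterflies formulation), the following Python snippet applies a 2x2 single-qubit gate to an n-qubit state vector by pairing amplitudes whose indices differ only in the target qubit's bit:

import numpy as np
def apply_single_qubit_gate(state, gate, target, n_qubits):
    # Pair amplitudes like FFT butterflies: indices differing only in bit `target`.
    stride = 1 << target
    new_state = state.copy()
    for i in range(1 << n_qubits):
        if i & stride == 0:
            j = i | stride
            a, b = state[i], state[j]
            new_state[i] = gate[0, 0] * a + gate[0, 1] * b
            new_state[j] = gate[1, 0] * a + gate[1, 1] * b
    return new_state
# Example: a Hadamard gate on qubit 0 of the 2-qubit state |00>.
hadamard = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
state = np.zeros(4, dtype=complex)
state[0] = 1.0
print(apply_single_qubit_gate(state, hadamard, target=0, n_qubits=2))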
APA, Harvard, Vancouver, ISO, and other styles
42

Kadleček, Vít. "Efektivní využití energie při spalování odpadů." Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2015. http://www.nusl.cz/ntk/nusl-231794.

Full text
Abstract:
The diploma thesis deals with increasing the utilization of energy from waste combustion. The introductory part presents the specific waste-to-energy unit and its combined heat and power production. The next part describes a computing tool and the principle of its operation. The main part of the thesis describes the testing of the computing tool and summarizes the achieved results.
APA, Harvard, Vancouver, ISO, and other styles
43

Obiala, Renata. "Geometric decomposition tools for parallel computing." Thesis, Heriot-Watt University, 2007. http://hdl.handle.net/10399/2042.

Full text
Abstract:
This thesis describes new geometric decomposition tools for parallel computing. A new complete process of model preparation for parallel analysis is proposed and investigated. The process focuses on applying geometrical entities rather than mesh elements to the decomposition problem. The study starts with an exploration of different geometrical representations in order to select the most suitable representation for the purpose of this research. Next, the model is orthogonalised and cut into blocks to create a decomposed orthogonal model. The blocks composing the model are allocated to a given number of processors using weight factors determined by a grid mesh generated for the model. Finally, the decomposed orthogonal model is mapped back into its original shape while preserving the relationship between the mesh elements and geometrical entities. A number of different methodologies are successfully applied to perform the whole process. Fuzzy Logic and Genetic Algorithms are used to orthogonalise the original model. The Genetic Algorithms are also used for a graph partitioning problem, where a weighted graph is designed to represent the decomposed model. Additionally, the Extreme Vertices Model inspired the model's representation required for decomposition. Each part of the whole process presented in this thesis is followed by examples and discussion.
APA, Harvard, Vancouver, ISO, and other styles
44

Knauth, Thomas. "Energy Efficient Cloud Computing: Techniques and Tools." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2015. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-164391.

Full text
Abstract:
Data centers hosting internet-scale services consume megawatts of power. Mainly for cost reasons, but also to appease environmental concerns, data center operators are interested in reducing their use of energy. This thesis investigates if and how hardware virtualization helps to improve the energy efficiency of modern cloud data centers. Our main motivation is to power off unused servers to save energy. The work encompasses three major parts: First, a simulation-driven analysis to quantify the benefits of known reservation times in infrastructure clouds. Virtual machines with similar expiration times are co-located to increase the probability of powering down unused physical hosts. Second, we propose and prototype a system to deliver truly on-demand cloud services. Idle virtual machines are suspended to free resources and as a first step towards powering off the physical server. Third, a novel block-level data synchronization tool enables fast and efficient state replication. Frequent state synchronization is necessary to prevent data unavailability: powering down a server disables access to the locally attached disks and any data stored on them. The techniques effectively reduce the overall number of required servers, either through optimized scheduling or by suspending idle virtual machines. Fewer live servers translate into proportional energy savings, as the unused servers need no longer be powered.
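As a generic sketch of block-level synchronization (this is not the thesis's tool or its replication protocol; the block size, hashing scheme and file handling are illustrative assumptions), the following Python snippet copies only those fixed-size blocks whose hashes differ between a source file and an already existing target file:

import hashlib, os
BLOCK_SIZE = 4096  # illustrative block size in bytes
def block_hashes(path):
    # Hash every fixed-size block of the file.
    hashes = []
    with open(path, "rb") as f:
        while block := f.read(BLOCK_SIZE):
            hashes.append(hashlib.sha256(block).digest())
    return hashes
def sync(source, target):
    # Copy only the blocks whose hashes differ; assumes `target` already exists.
    src_hashes, dst_hashes = block_hashes(source), block_hashes(target)
    with open(source, "rb") as src, open(target, "r+b") as dst:
        for i, h in enumerate(src_hashes):
            if i >= len(dst_hashes) or h != dst_hashes[i]:
                src.seek(i * BLOCK_SIZE)
                dst.seek(i * BLOCK_SIZE)
                dst.write(src.read(BLOCK_SIZE))
        dst.truncate(os.path.getsize(source))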
APA, Harvard, Vancouver, ISO, and other styles
45

Cassidy, John R. "Providing effective productivity tools : computing for the physically-challenged." Virtual Press, 1991. http://liblink.bsu.edu/uhtbin/catkey/834524.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Miller, Andrew D. "Social tools for everyday adolescent health." Diss., Georgia Institute of Technology, 2014. http://hdl.handle.net/1853/52238.

Full text
Abstract:
In order to support people's everyday health and wellness goals, health practitioners and organizations are embracing a more holistic approach to medicine---supporting patients both as individuals and as members of their families and communities, and meeting people where they are: at home, work, and school. This 'everyday' approach to health has been enabled by new technologies, both dedicated (devices and services designed specifically for health sensing and feedback) and multipurpose (such as smartphones and broadband-connected computers). Our physical relationship with computing has also become more intimate, and personal health devices can now track and report an unprecedented amount of information about our bodies, following their users around to an extent no doctor, coach or dietitian ever could. But we still have much to learn about how pervasive health devices can actually help promote the adoption of new health practices in daily life. Once they're 'in the wild,' such devices interact with their users, but also with the physical, social and political worlds in which those users live. These external factors---such as the walkability of a person's neighborhood or the social acceptability of exercise and fitness activities---play a significant role in people's ability to change their health behaviors and sustain that change. Specifically, social theories of behavior change suggest that peer support may be critical in changing health attitudes and behaviors. These theories---Social Support Theory, Social Cognitive Theory and Social Comparison Theory among them---offer both larger frameworks for understanding the social influences of health behavior change and specific mechanisms by which that behavior change could be supported through interpersonal interaction. However, we are only beginning to understand the role that pervasive health technologies can play in supporting and mediating social interaction to motivate people's exploration and adoption of healthy behaviors. In this dissertation I seek to better understand how social computing technologies can help people help each other live healthier lives. I ground my research in a participant-led investigation of a specific population and condition: adolescents and obesity prevention. I want to understand how social behavior change theories from psychology and sociology apply to pervasive social health technology. Which mechanisms work and why? How does introducing a pervasive social health system into a community affect individuals' behaviors and attitudes towards their health? Finally, I want to contribute back to those theories, testing their effectiveness in novel technologically mediated situations. Adolescent obesity is a particularly salient domain in which to study these issues. In the last 30 years, adolescent obesity rates in the US alone have tripled, and although they have leveled off in recent years they remain elevated compared to historical norms. Habits formed during adolescence can have lifelong effects, and health promotion research shows that even the simple act of walking more each day has lasting benefits. Everyday health and fitness research in HCI has generally focused on social comparison and "gamified" competition. This is especially true in studies focused on adolescents and teens.
However, both theory from social psychology and evidence from the health promotion community suggest that these direct egocentric models of behavior change may be limited in scope: they may only work for certain kinds of people, and their effects may be short-lived once the competitive framework is removed. I see an opportunity for a different approach: social tools for everyday adolescent health. These systems, embedded in existing school and community practices, can leverage scalable, non-competitive social interaction to catalyze positive perceptions of physical activity and social support for fitness, while remaining grounded in the local environment. Over the last several years I have completed a series of field engagements with middle school students in the Atlanta area. I have focused on students in a majority-minority low-income community in the Atlanta metropolitan area facing above-average adult obesity levels, and I have involved the students as informants throughout the design process. In this dissertation, I report findings based on a series of participatory design-based formative explorations; the iterative design of a pedometer-based pervasive health system to test these theories in practice; and the deployment of this system---StepStream---in three configurations: a prototype deployment, a 'self-tracking' deployment, and a 'social' deployment. In this dissertation, I test the following thesis: A school-based social fitness approach to everyday adolescent health can positively influence offline health behaviors in real-world settings. Furthermore, a noncompetitive social fitness system can perform comparably in attitude and behavior change to more competitive or direct-comparison systems, especially for those most in need of behavior change. I make the following contributions: (1) The identification of tensions and priorities for the design of everyday health systems for adolescents; (2) A design overview of StepStream, a social tool for everyday adolescent health; (3) A description of StepStream's deployment from a socio-technical perspective, describing the intervention as a school-based pervasive computing system; (4) An empirical study of a noncompetitive awareness system for physical activity; (5) A comparison of this system in two configurations in two different middle schools; (6) An analysis of observational learning and collective efficacy in a pervasive health system.
APA, Harvard, Vancouver, ISO, and other styles
47

Alshammari, Dhahi. "Evaluation of cloud computing modelling tools : simulators and predictive models." Thesis, University of Glasgow, 2018. http://theses.gla.ac.uk/41050/.

Full text
Abstract:
Experimenting with novel algorithms and configurations for the automatic management of Cloud Computing infrastructures is expensive and time consuming on real systems. Cloud computing delivers the benefits of using virtualisation techniques in data centers instead of physical servers for customers. However, it is still complex for researchers to test and run their experiments on a data centre because of the cost of repeating the experiments. To address this, various tools are available, including simulators, emulators, mathematical models, statistical models and benchmarking. Researchers therefore use different methods to avoid the difficulty of conducting Cloud Computing research on actual large data centre infrastructure, but it remains difficult to choose the best tool to evaluate the proposed research. This research focuses on investigating the level of accuracy of existing, well-known simulators in the field of cloud computing. Simulation tools are generally developed for particular experiments, so there is little assurance that using them with different workloads will be reliable. Moreover, a predictive model based on a data set from a realistic data centre is delivered as an alternative to simulators, since their accuracy is not sufficient. This work therefore addresses the problem of investigating the accuracy of different modelling tools by developing and validating a procedure based on the performance of a target micro data centre. Key insights and contributions are: involving three alternative models of a real Cloud Computing infrastructure to show the level of accuracy of the selected simulation tools; developing and validating a predictive model based on a Raspberry Pi small-scale data centre; and showing that predictive models based on Linear Regression and Artificial Neural Networks, trained on a data set drawn from a Raspberry Pi Cloud infrastructure, provide better accuracy.
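To illustrate the kind of predictive model the thesis develops as an alternative to simulators, here is a minimal scikit-learn sketch (the feature names and numbers are invented for illustration; this is not the author's model or dataset) that fits a linear regression predicting a response-time metric from workload parameters:

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split
# Hypothetical measurements: (concurrent requests, payload size in KB) -> response time in ms.
X = np.array([[10, 4], [20, 4], [40, 8], [80, 8], [160, 16], [320, 16]])
y = np.array([12.0, 19.0, 35.0, 66.0, 140.0, 270.0])
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=0)
model = LinearRegression().fit(X_train, y_train)
predictions = model.predict(X_test)
# Report how far the predictions are from the measured values.
print("mean absolute error:", mean_absolute_error(y_test, predictions))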
APA, Harvard, Vancouver, ISO, and other styles
48

Forsyth, Jason Brinkley. "Exploring Electronic Storyboards as Interdisciplinary Design Tools for Pervasive Computing." Diss., Virginia Tech, 2015. http://hdl.handle.net/10919/73538.

Full text
Abstract:
Pervasive computing proposes a new paradigm for human-computer interaction. By embedding computation, sensing, and networking into our daily environments, new computing systems can be developed that become helpful, supportive, and invisible elements of our lives. This tight proximity between the human and computational worlds poses challenges for the design of these systems - what disciplines should be involved in their design and what tools and processes should they follow? We address these issues by advocating for interdisciplinary design of pervasive computing systems. Based upon our experiences teaching courses in interactive architecture, product design, and physical computing, and through surveys of existing literature, we examine the challenges faced by interdisciplinary teams when developing pervasive computing systems. We find that teams lack accessible prototyping tools to express their design ideas across domains. To address this issue we propose a new software-based design tool called electronic storyboards. We implement electronic storyboards by developing a domain-specific modeling language in the Eclipse Graphical Editor Framework. The key insight of electronic storyboards is to balance the tension between the ambiguity in drawn storyboards and the requirements of implementing computing systems. We implement a set of user-applied tags, perform layout analysis on the storyboard, and utilize natural language processing to extract behavioral information from the storyboard in the form of a timed automaton. This behavioral information is then transformed into design artifacts such as state charts, textual descriptions, and source code. To evaluate the potential impact of electronic storyboards on interdisciplinary design teams, we developed a user study based around 'boundary objects'. These objects are frequently used within computer-supported collaborative work to examine how objects mediate interactions between individuals. Teams of computing and non-computing participants were asked to storyboard pervasive computing systems, and their storyboards were evaluated using a prototype electronic storyboarding tool. The study examines how teams use traditional storyboarding, tagging, tool queries, and generated artifacts to express design ideas and iterate upon their designs. From this study we develop new recommendations for future tools in architecture and fashion design based upon electronic storyboarding principles. Overall, this study contributes to the expanding knowledge base of pervasive computing design tools. As an emerging discipline, standardized tools and platforms have yet to be developed. Electronic storyboards offer a solution for describing pervasive computing systems across application domains and in a manner accessible to multiple disciplines.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
49

Frier, Jason Ross. "Genetic Algorithms as a Viable Method of Obtaining Branch Coverage." UNF Digital Commons, 2017. http://digitalcommons.unf.edu/etd/722.

Full text
Abstract:
Finding a way to automate the generation of test data is a crucial aspect of software testing. Testing comprises 50% of all software development costs [Korel90]. Finding a way to automate testing would greatly reduce the cost and labor involved in the task of software testing. One of the ways to automate software testing is to automate the generation of test data inputs. For example, in statement coverage, creating test cases that cover all of the conditions required when testing a program would be costly and time-consuming if undertaken manually. Therefore, a way must be found to automate the creation of test data inputs that satisfy all test requirements for a given test. One such way of automating test data generation is the use of genetic algorithms. Genetic algorithms create generations of test inputs and then choose the fittest test inputs, that is, those test inputs that are most likely to satisfy the test requirement, as the inputs that will be passed on to the next generation. In this way, the solution to the test requirement problem can be found in an evolutionary fashion. Current research suggests that comparison of genetic algorithms with random test input generation produces varied results. While results of these studies show promise for the future use of genetic algorithms as an answer to the issue of discovering test inputs that will satisfy branch coverage, what is needed is additional experimental research that will validate the performance of genetic algorithms in a test environment. This thesis makes use of the EvoSuite plugin tool, a software plugin for the IntelliJ IDEA Integrated Development Environment that uses a genetic algorithm as its main component. The EvoSuite tool is run against 22 Java classes; it automatically generates unit tests and executes them while simultaneously measuring the branch coverage of those tests against the Java classes under test. The results of this thesis's experimental research are that, just as the literature indicates, the EvoSuite tool performed with varied results. In particular, Fraser's study of the EvoSuite tool as an Eclipse plugin was accurate in depicting how the EvoSuite tool would come to perform as an IntelliJ plugin, namely that the EvoSuite tool would perform poorly for a large number of the classes tested.
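To make the evolutionary mechanism concrete, the following toy genetic algorithm in Python (not EvoSuite, which evolves whole JUnit test suites for Java classes; the target branch and fitness function are invented for illustration) evolves an integer input until it satisfies a particular branch condition:

import random
def branch_distance(x):
    # Fitness for the hypothetical target branch `if x == 4242:`; smaller is better.
    return abs(x - 4242)
def evolve(pop_size=50, generations=200):
    population = [random.randint(0, 10000) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=branch_distance)
        if branch_distance(population[0]) == 0:
            return population[0]                 # branch covered
        parents = population[:pop_size // 2]     # select the fittest half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = (a + b) // 2                 # simple arithmetic crossover
            if random.random() < 0.2:            # occasional mutation
                child += random.randint(-100, 100)
            children.append(child)
        population = parents + children
    return min(population, key=branch_distance)
print("input covering the branch:", evolve())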
APA, Harvard, Vancouver, ISO, and other styles
50

Snyder, Brett W. "Tools and Techniques for Evaluating the Reliability of Cloud Computing Systems." University of Toledo / OhioLINK, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=toledo1371685877.

Full text
APA, Harvard, Vancouver, ISO, and other styles