Follow this link to see other types of publications on the topic: Veriflex.

Theses on the topic "Veriflex"

Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles

Consult the top 50 theses for your research on the topic "Veriflex".

Next to each source in the list of references there is an "Add to bibliography" button. Press this button, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse theses on a wide variety of disciplines and organize your bibliography correctly.

1

Wildmoser, Martin. "Verified proof carrying code". [S.l.] : [s.n.], 2006. http://deposit.ddb.de/cgi-bin/dokserv?idn=980401208.

Full text
2

Klein, Gerwin. "Verified Java bytecode verification". [S.l.] : [s.n.], 2003. http://deposit.ddb.de/cgi-bin/dokserv?idn=967128749.

Full text
3

Bujila, Razvan and Johan Kuru. "OMSI Test Suite verifier development". Thesis, Linköpings universitet, Institutionen för teknik och naturvetenskap, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-96266.

Full text
Abstract
The purpose of the Open Mobile Service Interface (OMSI) is to simplify the device management process for mobile devices from different manufacturers with a single PC application, instead of using one unique application for every manufacturer. The different OMSI use cases include device management for application vendors, points of sale, repair centers or self-service. This will lead to a higher service level for end users, faster repair times, better control over service transactions, an open market for compatible applications and easy plug-and-play installation. Manufacturers are currently in the process of developing their own specific modules, and there is an increasing need for test and verification software to certify OMSI conformance. In order for phone manufacturers to be able to efficiently verify that their OMSI modules and catalogs support and comply with the OMSI standard, there is a need for automated module tests and manual catalog tests. Development of such tests is the main purpose of this Master's thesis work. The implementation of the different verification processes has been divided into different sub-projects to create a more structured view of the OMSI Test Suite project and easier management. The first part of the thesis work deals with the module verification process, the second part with the client verification process, while the third and final part deals with the catalog verification process. The thesis work has been performed in project form, where the development of the project plan was part of the thesis work. The final version of the Module Interface Verifier was implemented in C#, in a Visual Studio .NET 2003 environment. The software functioned as expected, both against a sample module and against Sony Ericsson's and Nokia's respective modules. The Client Interface Verifier was developed in a C++ environment and functioned according to the given specifications. The Catalog Interface Verifier was developed in a C# environment, built on an already existing part of the OMSI Implementation Framework.
4

Spiwack, Arnaud. "Verified computing in homological algebra". Palaiseau, Ecole polytechnique, 2011. http://pastel.archives-ouvertes.fr/docs/00/60/58/36/PDF/thesis.spiwack.pdf.

Full text
Abstract
The object of this thesis is to study the ability of the Coq system to mix proofs and programs in practice. Our approach consists in implementing part of the program Kenzo, a computer algebra tool for homological algebra, under some constraints: we want to be able to read the program as a proof with computational content, these proofs must compute efficiently, and we try to avoid duplication of proofs or parts thereof. We show, first, how the requirement of efficiency leads to revising some aspects of traditional mathematics. We propose a suitable categorical abstraction, both for clarity and to avoid duplication. This abstraction, though different from what is customary in mathematics, allows us to formulate the constructs of homological algebra in a style much like that of Kenzo. We then propose modifications to the Coq program: a first one to make proofs more convenient, by allowing the use of finer-grained tactics, which are often necessary when dependent types are common; a second one to leverage the arithmetic abilities of the processor to compute more efficiently on integers. Finally, we propose some leads to improve both the sharing and the clarity of the proofs. Unfortunately, they push the system beyond its limits. Hence, we show that Coq is not always up to its promises and that theoretical work will be necessary to understand how these limits can be relaxed.
5

Boehm, Peter. "Incremental modelling for verified communication architectures". Thesis, University of Oxford, 2011. http://ora.ox.ac.uk/objects/uuid:ec6c9e06-7395-4af4-b961-b2ed837fda89.

Full text
Abstract
Modern computer systems are advancing from multi-core to many-core designs and System-on-chips (SoC) are becoming increasingly complex while integrating a great variety of components, thus constituting complex distributed systems. Such architectures rely on extremely complex communication protocols to exchange data with required performance. Arguing formally about the correctness of communication is an acknowledged verification challenge. This thesis presents a generic framework that formalises the idea of incremental modelling and step-wise verification to tackle this challenge: to control the overall complexity, features are added incrementally to a simple initial model and the complexity of each feature is encapsulated into an independent modelling step. Two main strategies reduce the verification effort. First, models are constructed with verification support in mind and the verification process is spread over the modelling process. Second, generic correctness results for framework components allow the verification to be reduced to discharging local assumptions when a component is instantiated. Models in the framework are based on abstract state machines formalised in higher order logic using the Isabelle theorem prover. Two case studies show the utility and breadth of the approach: the ARM AMBA Advanced High-performance Bus protocol, an arbiter-based master-slave bus protocol, represents the family of SoC protocols; the PCI Express protocol, an off-chip point-to-point protocol, illustrates the application of the framework to sophisticated, performance-related features of current and future on-chip protocols. The presented methodology provides an alternative to the traditional monolithic and post-hoc verification approach.
6

Katz, Theodore (Theodore Robert). "Verified compilation of abstract network policies". Thesis, Massachusetts Institute of Technology, 2019. https://hdl.handle.net/1721.1/123032.

Full text
Abstract
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 67-68).
Configuring large networks can be very complex. A network administrator typically has a set of high-level policies in mind when creating a network configuration, but implementing the configuration onto existing hardware often requires specifying many low-level details. As a result, configuring a network is currently a very error-prone process, and misconfigurations resulting in network outages and security vulnerabilities occur frequently in practice. We present a formally verified compiler from high-level network policies to low-level executable routing rules, to simplify the process of correctly configuring networks and enforcing network policies.
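The thesis targets real routing rules; purely as a hypothetical illustration of the high-level-policy-to-low-level-rule idea (the policy triples and rule format below are invented, not taken from the thesis), such a compiler can be sketched in a few lines:

```python
# Hypothetical sketch: compiling reachability policies into priority-ordered
# match/action rules with a default-deny fallback.

def compile_policies(policies):
    """Turn ("allow"|"deny", src_subnet, dst_subnet) triples into
    low-level match/action rules; earlier policies get higher priority."""
    rules = []
    for priority, (action, src, dst) in enumerate(policies):
        rules.append({
            "priority": priority,
            "match": {"ip_src": src, "ip_dst": dst},
            "action": "forward" if action == "allow" else "drop",
        })
    rules.append({"priority": len(policies),
                  "match": {},            # wildcard: matches everything else
                  "action": "drop"})      # default deny
    return rules

print(compile_policies([("allow", "10.0.1.0/24", "10.0.2.0/24"),
                        ("deny",  "10.0.0.0/8",  "10.0.2.0/24")]))
```

Proving that such a translation preserves the meaning of every policy, for all inputs, is exactly the kind of guarantee the verified compiler provides.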
7

Adcock, Bruce M. "Working Towards the Verified Software Process". The Ohio State University, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=osu1293463269.

Full text
8

Perna, Juan Ignacio. "A verified compiler for Handel-C". Thesis, University of York, 2010. http://etheses.whiterose.ac.uk/585/.

Full text
Abstract
The recent popularity of Field Programmable Gate Array (FPGA) technology has made the synthesis of Hardware Description Language (HDL) programs into FPGAs a very attractive topic for research. In particular, the correctness of the synthesis of an FPGA programming file from a source HDL program has gained significant relevance in the context of safety- or mission-critical systems. The results presented here are part of a research project aiming at producing a verified compiler for the Handel-C language. Handel-C is a high-level HDL based on the syntax of the C language, extended with constructs to deal with parallel behaviour and process communications based on CSP. Given the complexity of designing a provably correct compiler for a language like Handel-C, we have adopted the algebraic approach to compilation, as it offers an elegant solution to this problem. The idea behind algebraic compilation is to create a sound reasoning framework in which a formal model of the source Handel-C program can be embedded and refined into a formal abstraction of the target hardware. As the algebraic rules used to compile the program are proven to preserve the semantics, the correctness of the entire compilation process (i.e., semantic equivalence between source and target programs) can be argued by construction, considering each programming construct in isolation, rather than trying to assert the correctness of the compilation in a single step. Regarding hardware synthesis, the algebraic approach has already been applied to subsets of Occam and Verilog. Our work builds on some ideas from these works but focuses on the more complex timing model imposed by Handel-C. Moreover, our work covers features like shared variables, multi-way communications and priorities which, to our knowledge, have never been addressed within the framework of algebraic compilation. Finally, one characteristic of the algebraic approach is that the basic reduction laws in the reasoning framework are postulated as axioms. As an invalid axiom would allow us to prove invalid results (up to the extent of being able to prove a false theorem), we are also concerned about the consistency of the basic postulates in our theory. We addressed this by providing denotational semantics for Handel-C and its reasoning extensions in the context of the Unifying Theories of Programming (UTP). Our UTP denotational semantics not only provided a model for our theory (hence proving its consistency) but also allowed us to prove all the axioms in the compilation framework.
9

Owny, Hassan Badry Mohamed el. "Verified solution of parametric interval linear systems". [S.l.] : [s.n.], 2007. http://deposit.ddb.de/cgi-bin/dokserv?idn=985111623.

Full text
10

Thaler, Justin R. "Practical Verified Computation with Streaming Interactive Proofs". Thesis, Harvard University, 2013. http://dissertations.umi.com/gsas.harvard:11086.

Full text
Abstract
As the cloud computing paradigm has gained prominence, the need for verifiable computation has grown urgent. Protocols for verifiable computation enable a weak client to outsource difficult computations to a powerful, but untrusted, server. These protocols provide the client with a (probabilistic) guarantee that the server performed the requested computations correctly, without requiring the client to perform the computations herself.
Engineering and Applied Sciences
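The abstract describes the general outsourcing setting rather than a specific protocol; a classical toy instance of such probabilistic checking (Freivalds' algorithm for outsourced matrix multiplication, not the streaming interactive proofs of the thesis) looks like this:

```python
import random

def freivalds_check(A, B, C, rounds=20):
    """Probabilistically verify that C == A @ B.
    Each round costs O(n^2) versus O(n^3) to recompute the product;
    a wrong C survives each round with probability at most 1/2."""
    n = len(A)
    for _ in range(rounds):
        r = [random.randint(0, 1) for _ in range(n)]
        Br = [sum(B[i][j] * r[j] for j in range(n)) for i in range(n)]
        ABr = [sum(A[i][j] * Br[j] for j in range(n)) for i in range(n)]
        Cr = [sum(C[i][j] * r[j] for j in range(n)) for i in range(n)]
        if ABr != Cr:
            return False   # definitely wrong
    return True            # correct with high probability

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
good = [[19, 22], [43, 50]]
bad = [[19, 22], [43, 51]]
print(freivalds_check(A, B, good), freivalds_check(A, B, bad))  # True False
```

The weak client never multiplies the matrices itself; it only performs cheap matrix-vector products, which is the essence of verifiable outsourced computation.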
11

Dalvandi, Mohammad Sadegh. "Developing verified sequential programs with Event-B". Thesis, University of Southampton, 2018. https://eprints.soton.ac.uk/422225/.

Full text
Abstract
The constructive approach to software correctness aims at formal modelling of the intended behaviour and structure of a system at different levels of abstraction and verifying properties of the models. The target of the analytical approach is to verify properties of the final program code. A high-level look at these two approaches suggests that the constructive and analytical approaches should complement each other well. The aim of this thesis is to build a link between Event-B (constructive approach) and Dafny (analytical approach) for developing verified sequential programs. The first contribution of this thesis is a tool-supported method for transforming Event-B models to simple Dafny code contracts (in the form of method pre- and post-conditions). Transformation of Event-B formal models to Dafny method declarations and code contracts is enabled by a set of transformation rules. Using this set of transformation rules, one can generate code contracts from Event-B models but not implementations. The generated code contracts must be seen as an interface that can be implemented. If there is an implementation that satisfies the generated contracts, then it is considered to be a correct implementation of the abstract Event-B model. A tool for automatic transformation of Event-B models to simple Dafny code contracts is presented. The second contribution of this thesis is an approach for the derivation of algorithmic structure in Event-B refinement. To facilitate this, we augment Event-B with a scheduling language that allows the modeller to explicitly define the control flow between Event-B events at each refinement level. The scheduling language supports both non-deterministic (choices and iterations) and deterministic (conditionals and loops) control structures and treats Event-B events as its atoms. We provide a set of schedule refinement rules for refining an abstract scheduling language to a concrete program structure. We also provide a set of rules allowing the elimination of event guards at the concrete level. The final contribution of this thesis is a method for transforming scheduled Event-B models to Dafny code and contracts. We formulate the transformation of a scheduled Event-B model to Dafny program constructs and show how the actions of an atomic event can be sequentialised in the final program. We introduce an approach for the generation of Dafny contracts in the form of assertions in order to verify the correctness of the sequentialisation.
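As a loose illustration of the first contribution, turning an Event-B-like event into a Dafny-style contract, here is a hypothetical sketch; the rule (guards become preconditions, assignments become old/new postconditions) and all names are invented here and are not the thesis's actual transformation rules:

```python
def event_to_dafny(name, params, guards, actions):
    """Template a Dafny-style method contract from an Event-B-like event.
    Guards map to 'requires' clauses; actions (v := e) map to 'ensures'
    clauses relating pre- and post-state. Illustrative only."""
    requires = "\n".join(f"  requires {g}" for g in guards)
    ensures = "\n".join(f"  ensures {v} == {e}" for v, e in actions)
    args = ", ".join(f"{p}: int" for p in params)
    return f"method {name}({args})\n  modifies this\n{requires}\n{ensures}"

print(event_to_dafny(
    "Withdraw", ["amount"],
    guards=["amount > 0", "amount <= balance"],
    actions=[("balance", "old(balance) - amount")]))
```

Any implementation satisfying the generated contract then counts as a correct refinement of the abstract event, which is the link the thesis establishes.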
12

Ramseyer, Jennifer. "Implementing a verified FTP client and server". Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/105966.

Full text
Abstract
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2016.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 54-56).
I present my implementation of an FTP (File Transfer Protocol) system with GRASShopper. GRASShopper is a program verifier that ensures that programs are memory safe. I wrote an FTP client and server in SPL, the GRASShopper programming language. SPL integrates the program logic's pre- and post-conditions, along with loop invariants, into the language, so that programs that compile in GRASShopper are proven correct. Because of that, my client and server are guaranteed to be secure and correct. I am supervised by Professor Martin Rinard and Dr. Damien Zufferey, a post-doctoral researcher in Professor Rinard's laboratory.
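The style of specification described here, pre-/post-conditions plus loop invariants woven into the code, can be loosely mimicked with runtime assertions; a minimal sketch (not SPL syntax, and dynamically checked rather than statically proved as GRASShopper does):

```python
def sum_upto(n: int) -> int:
    assert n >= 0                           # precondition
    total, i = 0, 0
    while i < n:
        assert total == i * (i - 1) // 2    # loop invariant, checked each pass
        total += i
        i += 1
    assert total == n * (n - 1) // 2        # postcondition
    return total

print(sum_upto(10))  # 45
```

A verifier like GRASShopper discharges such conditions once, symbolically, for all inputs, instead of checking them on each run.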
13

Jourdan, Jacques-Henri. "Verasco : a Formally Verified C Static Analyzer". Sorbonne Paris Cité, 2016. http://www.theses.fr/2016USPCC021.

Full text
Abstract
In order to develop safer software for critical applications, some static analyzers aim at establishing, with mathematical certainty, the absence of some classes of bugs in the input program. A possible limit to this approach is the possibility of a soundness bug in the static analyzer itself, which would nullify the guarantees it is supposed to deliver. In this thesis, we propose to establish formal guarantees on the static analyzer itself: we present the design, implementation and proof of soundness using Coq of Verasco, a formally verified static analyzer based on abstract interpretation handling most of the ISO C99 language (except recursion and dynamic memory allocation), including IEEE754 floating-point arithmetic. Verasco aims at establishing the absence of erroneous behavior of the given programs. It enjoys a modular, extendable architecture with several abstract domains and well-specified interfaces. We present the abstract iterator of Verasco, its handling of bounded machine arithmetic, its interval abstract domain, its symbolic abstract domain and its abstract domain of octagons. Verasco led to the development of new techniques for implementing data structures with sharing in Coq.
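Verasco's domains are implemented and proved sound in Coq; as a plain, unverified illustration of what an interval abstract domain provides (abstract transformers, a join at control-flow merges, and widening to force termination of fixpoint iteration), consider this minimal sketch:

```python
from dataclasses import dataclass
import math

@dataclass(frozen=True)
class Interval:
    lo: float   # may be -inf
    hi: float   # may be +inf

    def add(self, other):
        # abstract transformer for '+': sound enclosure of all concrete sums
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def join(self, other):
        # least upper bound, used where two control-flow paths merge
        return Interval(min(self.lo, other.lo), max(self.hi, other.hi))

    def widen(self, other):
        # jump unstable bounds to infinity so loop iteration terminates
        return Interval(self.lo if other.lo >= self.lo else -math.inf,
                        self.hi if other.hi <= self.hi else math.inf)

x, y = Interval(0, 10), Interval(-5, 5)
print(x.add(y), x.join(y), x.widen(x.add(y)))
```

In the verified setting, each of these operations comes with a Coq proof that the returned interval really contains every concrete value it claims to.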
14

Ibraev, Suiunbek. "A new parallel method for verified global optimization". [S.l.] : [s.n.], 2001. http://deposit.ddb.de/cgi-bin/dokserv?idn=963452304.

Full text
15

Green, Alexander S. "Towards a formally verified functional quantum programming language". Thesis, University of Nottingham, 2010. http://eprints.nottingham.ac.uk/11457/.

Full text
Abstract
This thesis looks at the development of a framework for a functional quantum programming language. The framework is first developed in Haskell, looking at how a monadic structure can be used to explicitly deal with the side-effects inherent in the measurement of quantum systems, and goes on to look at how a dependently-typed reimplementation in Agda gives us the basis for a formally verified quantum programming language. The two implementations are not in themselves fully developed quantum programming languages, as they are embedded in their respective parent languages, but are a major step towards the development of a full formally verified, functional quantum programming language. Dubbed the “Quantum IO Monad”, this framework is designed following a structural approach as given by a categorical model of quantum computation.
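The thesis works with Haskell and Agda monads; as a very loose Python analogue of the key idea, measurement as an explicit effectful step that collapses the state and returns a classical value, consider:

```python
import math
import random

class Qubit:
    """Toy single qubit with real amplitudes; starts in |0>."""
    def __init__(self):
        self.amp0, self.amp1 = 1.0, 0.0

    def hadamard(self):
        a0, a1 = self.amp0, self.amp1
        s = 1 / math.sqrt(2)
        self.amp0, self.amp1 = s * (a0 + a1), s * (a0 - a1)
        return self

    def measure(self):
        """The effectful step: collapse the state and return a classical
        bit, analogous to running an action in the Quantum IO Monad."""
        p1 = self.amp1 ** 2
        bit = 1 if random.random() < p1 else 0
        self.amp0, self.amp1 = (0.0, 1.0) if bit else (1.0, 0.0)
        return bit

print(Qubit().hadamard().measure())   # fair coin flip: 0 or 1
```

The monadic framework makes this side effect explicit in the types, which is what allows the Agda reimplementation to reason about quantum programs formally.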
16

Zhang, Nan. "Generating verified access control policies through model-checking". Thesis, University of Birmingham, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.433707.

Full text
17

Konradi, Alexander V. "Performance optimization of the VDFS verified file system". Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/113179.

Full text
Abstract
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2017
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 51-54).
Formal verification of software has become a powerful tool for creating software systems and proving their correctness. While such systems provide strong guarantees about their behavior, they frequently exhibit poor performance relative to their unverified counterparts. Verified file systems are no exception, and their poor performance limits their utility. These limitations, however, are not intrinsic to verification techniques, but are the result of designing for proofs, not performance. This thesis proposes a design for large files and in-memory caches that is amenable both to a high-performance implementation and to proofs of correctness. It then describes their usage in VDFS, a verified high-performance file system with deferred durability guarantees. The evaluation of VDFS' performance shows that these additions measurably improve performance over previous verified file systems, and make VDFS competitive with unverified file system implementations. This thesis contributes implementation techniques for large files and in-memory caches that can be applied to increase the performance of verified systems.
18

Mason, George. "Safe reinforcement learning using formally verified abstract policies". Thesis, University of York, 2018. http://etheses.whiterose.ac.uk/22450/.

Full text
Abstract
Reinforcement learning (RL) is an artificial intelligence technique for finding optimal solutions for sequential decision-making problems modelled as Markov decision processes (MDPs). Objectives are represented as numerical rewards in the model, where positive values represent achievements and negative values represent failures. An autonomous agent explores the model to locate rewards, with the goal of learning behaviour which will accumulate the largest possible reward. Despite RL successes in applications ranging from robotics and planning systems to sensing, it has so far had little appeal in mission- and safety-critical systems, where unpredictable agent actions could lead to mission failure, risks to humans, itself or other systems, or violations of legal requirements. This is due to the difficulty of encoding non-trivial requirements of agent behaviour through rewards alone. This thesis introduces assured reinforcement learning (ARL), a safe RL approach that restricts agent actions, during and after learning. This restriction is based on formally verified policies synthesised for a high-level, abstract MDP that models the safety-relevant aspects of the RL problem. The resulting actions form overall solutions whose properties satisfy strict safety and optimality requirements. Next, ARL with knowledge revision is introduced, allowing ARL to still be used if the initial knowledge for generating action constraints proves to be incorrect. Additionally, two case studies are introduced to test the efficacy of ARL: the first is an adaptation of the benchmark flag-collection navigation task and the second is an assisted-living planning system. Finally, an architecture for runtime ARL is proposed to allow ARL to be utilised in real-time systems. ARL is empirically evaluated and is shown to successfully satisfy strict safety and optimality requirements; furthermore, with knowledge revision and action reuse, it can be successfully applied in environments where initial information may prove incomplete or incorrect.
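The core mechanism, restricting the learner to actions permitted by a verified abstract policy, can be sketched as an action mask in tabular Q-learning; the safe-action table below is assumed given (synthesising and formally verifying it is the thesis's contribution) and everything else is generic:

```python
import random
from collections import defaultdict

# Assumed given: state -> actions permitted by the verified abstract policy.
SAFE = {0: {"right"}, 1: {"right", "down"}, 2: {"down"}}
ACTIONS = ["left", "right", "down"]

Q = defaultdict(float)   # Q-values, keyed by (state, action)

def choose(state, eps=0.1):
    """Epsilon-greedy choice restricted to the safe actions only."""
    allowed = [a for a in ACTIONS if a in SAFE[state]]
    if random.random() < eps:
        return random.choice(allowed)
    return max(allowed, key=lambda a: Q[(state, a)])

def update(s, a, r, s2, alpha=0.5, gamma=0.9):
    """Standard Q-learning update, with the max also taken over safe actions."""
    best_next = max(Q[(s2, a2)] for a2 in SAFE[s2])
    Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])

a = choose(0)
update(0, a, -1.0, 1)
print(a, Q[(0, a)])
```

Because unsafe actions are never sampled, safety holds during exploration as well as after convergence, which is the property reward shaping alone cannot guarantee.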
19

Cline, Jacquelyn Fern. "The implementation of a subset data dictionary verifier". Thesis, Kansas State University, 1985. http://hdl.handle.net/2097/9829.

Full text
20

Gross, Jason S. "An extensible framework for synthesizing efficient, verified parsers". Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/101581.

Full text
Abstract
Thesis: S.M., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2015.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 73-75).
Parsers have a long history in computer science. This thesis proposes a novel approach to synthesizing efficient, verified parsers by refinement, and presents a demonstration of this approach in the Fiat framework by synthesizing a parser for arithmetic expressions. The benefits of this framework may include more flexibility in the parsers that can be described, more control over the low-level details when necessary for performance, and automatic or mostly automatic correctness proofs.
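For contrast with synthesis by refinement, a conventional hand-written (and unverified) recursive-descent parser for the same kind of arithmetic-expression grammar fits in a few lines:

```python
import re

def parse(src):
    tokens = re.findall(r"\d+|[-+*/()]", src)
    pos = 0

    def peek():
        return tokens[pos] if pos < len(tokens) else None

    def eat(tok=None):
        nonlocal pos
        t = tokens[pos]
        assert tok is None or t == tok, f"expected {tok}, got {t}"
        pos += 1
        return t

    def expr():                       # expr := term (('+'|'-') term)*
        v = term()
        while peek() in ("+", "-"):
            v = v + term() if eat() == "+" else v - term()
        return v

    def term():                       # term := atom (('*'|'/') atom)*
        v = atom()
        while peek() in ("*", "/"):
            v = v * atom() if eat() == "*" else v / atom()
        return v

    def atom():                       # atom := number | '(' expr ')'
        if peek() == "(":
            eat("(")
            v = expr()
            eat(")")
            return v
        return int(eat())

    v = expr()
    assert peek() is None, "trailing input"
    return v

print(parse("1+2*(3+4)"))             # 15
```

The Fiat approach starts instead from a declarative grammar and derives an equivalent, efficient parser together with a machine-checked proof; the sketch above is exactly the kind of artifact whose correctness would otherwise rest on testing.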
21

Kolberg, Mariana Luderitz. "Parallel self-verified solver for dense linear systems". Pontifícia Universidade Católica do Rio Grande do Sul, 2009. http://hdl.handle.net/10923/1600.

Full text
Abstract
This thesis presents a free, fast, reliable and accurate solver for point and interval dense linear systems. The idea was to implement a solver for dense linear systems using a verified method, interval arithmetic and directed roundings based on MPI communication primitives associated with optimized libraries, aiming to provide both self-verification and speed-up at the same time. A first parallel implementation was developed using the C-XSC library. However, the C-XSC parallel method did not achieve the expected overall performance, since the solver was not 100% parallelized due to its implementation properties (special variables and optimal scalar product). C-XSC did not seem to be the most efficient tool for time-critical applications; consequently, we proposed and implemented a new sequential verified solver for dense linear systems for point and interval input data, using both infimum-supremum and midpoint-radius arithmetic, based on highly optimized libraries (BLAS/LAPACK). Performance tests showed that the midpoint-radius algorithm needs approximately the same time to solve a linear system with point or interval input data, while the infimum-supremum algorithm needs much more time for interval data. Considering that, midpoint-radius arithmetic was the natural choice for the next step of this work: the parallel implementation. We then developed a new parallel verified solver for point and interval dense linear systems using midpoint-radius arithmetic, directed roundings and optimized libraries (PBLAS/ScaLAPACK). The performance results showed that it was possible to achieve very good speed-ups over a wide range of processor numbers for large matrix dimensions, for both point and interval input data. In order to overcome the memory limitation imposed by the generation of the whole matrix in one processor, we decided to generate sub-matrices of the input matrix individually on each available node, allowing a better use of the global memory. These modifications made it possible to solve dense systems of dimension up to 100 000. In addition, in order to investigate the portability of the proposed solution, tests were performed during this thesis on 3 different clusters in Germany (ALiCEnext, XC1 and IC1) with distinct configurations, presenting significant results and indicating that the parallel solver scales well even for very large dense systems over many processors. Further investigations were done in two directions: a study of the use of dedicated threads to speed up the solver for dense linear systems on shared memory, especially dual-core processors, and the use of the ideas presented in this thesis to speed up the C-XSC library.
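Midpoint-radius arithmetic represents an interval as a centre and a radius; a rough sketch of the basic enclosure rules in exact arithmetic (a verified solver must additionally bound floating-point rounding errors with directed rounding, which is omitted here):

```python
from dataclasses import dataclass

@dataclass
class MR:
    mid: float
    rad: float

    def __add__(self, o):
        # <m1,r1> + <m2,r2> = <m1+m2, r1+r2>
        return MR(self.mid + o.mid, self.rad + o.rad)

    def __mul__(self, o):
        # standard enclosure: <m1*m2, |m1|r2 + r1|m2| + r1*r2>
        return MR(self.mid * o.mid,
                  abs(self.mid) * o.rad + self.rad * abs(o.mid) + self.rad * o.rad)

    def contains(self, x):
        return abs(x - self.mid) <= self.rad

a, b = MR(2.0, 0.5), MR(3.0, 0.5)
print(a * b, (a * b).contains(2.5 * 3.5))   # the endpoint product 8.75 is enclosed
```

One practical advantage noted in the thesis is that these rules reduce to ordinary matrix operations on midpoints and radii, so they map directly onto optimized BLAS/PBLAS kernels.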
22

Anderson, Kai. "Use of Verified Twitter Accounts During Crisis Events". DigitalCommons@USU, 2018. https://digitalcommons.usu.edu/etd/7075.

Full text
Abstract
This thesis reports on the use of verified Twitter accounts during crisis events. Twitter is a social media platform that allows users to broadcast and exchange public text messages, and it can be used as a communication tool during crisis events. Verified Twitter accounts are those accounts that Twitter has investigated and found to be genuinely maintained by the claimed owner. Celebrities, public officials, and other well-known persons or companies often seek this account status. The owners of these accounts are likely to provide more accurate or relevant information during a crisis event because they represent a brand, whether themselves or an organization. To study the role verified Twitter accounts play in a crisis event, information was collected from Twitter's API (Application Programming Interface) from February 28, 2018 through March 3, 2018 during a powerful storm on the East Coast of the United States called a Nor'easter. Through data collection and analysis, this thesis describes how verified Twitter accounts communicated during a crisis event. Three exploratory questions were proposed to better understand the use of verified Twitter accounts: Who are the verified Twitter users that tweet about a crisis event? What types of information do verified Twitter users tweet about a crisis event? When do verified Twitter users tweet about a crisis event? Results show that verified Twitter accounts create more original messages, share more informative messages, and spread less spam than their non-verified counterparts.
23

Harrison, Paul Michael. "Experimentally verified reduced models of neocortical pyramidal cells". Thesis, University of Warwick, 2014. http://wrap.warwick.ac.uk/69126/.

Full text
Abstract
Reduced neuron models are essential tools in computational neuroscience to aid understanding from the single cell to the network level. In this thesis I use these models to address two key challenges: introducing experimentally verified heterogeneity into neocortical network models, and furthering understanding of post-spike refractory mechanisms. Neocortical network models are increasingly including cell-class diversity. However, within these classes significant heterogeneity is displayed, an aspect often neglected in modelling studies due to the lack of empirical constraints on the variance and covariance of neuronal parameters. To address this I quantified the response of pyramidal cells in neocortical layers 2/3-5 to square-pulse and naturalistic current stimuli. I used standard and dynamic I-V protocols to measure electrophysiological parameters, a byproduct of which is the straightforward extraction of reduced neuron models. I examined the between- and within-class heterogeneity, culminating in an algorithm to generate populations of exponential integrate-and-fire (EIF) neurons adhering to the empirical marginal distributions and covariance structure. This provides a novel tool for investigating heterogeneity in neocortical network models. Spike threshold is dynamic and, on spike initiation, displays a jump and subsequent exponential decay back to baseline. I examine extensions to the EIF model that include these dynamics, finding that a simple renewal-process model captures the cell's response well. It has been previously noted that a two-variable EIF model describing the voltage and threshold dynamics can be reduced to a single-variable system when the membrane and threshold time constants are similar. I examine the response properties of networks of these models by taking a perturbative approach to solving the corresponding Fokker-Planck equation, finding the results in agreement with simulations over the physiological range of the membrane-to-threshold time constant ratio. Finally, I found that the observed threshold dynamics are not fully described by the inclusion of slow sodium-channel inactivation.
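The EIF model mentioned here has a standard form, C dV/dt = -gL(V - EL) + gL ΔT exp((V - VT)/ΔT) + I, with a reset after each spike; a minimal Euler simulation using generic textbook-style parameters (not the fitted values from the thesis):

```python
import math

def simulate_eif(I=0.8e-9, T=0.3, dt=1e-4):
    # Generic illustrative parameters, not fitted to any recorded cell.
    C, gL = 200e-12, 10e-9               # capacitance (F), leak conductance (S)
    EL, VT, DT = -70e-3, -50e-3, 2e-3    # leak reversal, soft threshold, slope (V)
    Vreset, Vcut = -60e-3, -30e-3        # post-spike reset, numerical cutoff (V)
    V, spikes = EL, []
    for step in range(int(T / dt)):
        dVdt = (-gL * (V - EL) + gL * DT * math.exp((V - VT) / DT) + I) / C
        V += dt * dVdt
        if V > Vcut:                     # exponential blow-up stands in for the spike
            spikes.append(step * dt)
            V = Vreset
    return spikes

spikes = simulate_eif()
print(f"{len(spikes)} spikes, first at {spikes[0] * 1000:.1f} ms")
```

The heterogeneity work in the thesis amounts to drawing parameter sets like (C, gL, EL, VT, DT) from the empirically measured joint distribution rather than using one fixed vector per cell class.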
24

Belgiovine, Mauro. "Advanced industrial OCR using Autoencoders". Master's thesis, Alma Mater Studiorum - Università di Bologna, 2017. http://amslaurea.unibo.it/13807/.

Full text
Abstract
This thesis describes the work carried out during a six-month internship at Datalogic ADC. The goal of the work was to use a specific type of neural network, called an Autoencoder, for purposes related to character recognition or validation in an industrial OCR system. First, a character-image classifier based on a Denoising Autoencoder was created; subsequently, a method was studied for using the Autoencoder as a second-level classifier, to better distinguish false activations from correct ones when a generic classifier is uncertain. Both architectures were evaluated on real datasets from Datalogic customers, and the experimental results obtained are presented in this thesis.
25

Syrko, Ariel. "Development and evaluation of a framework for semi-automated formalization of automotive requirements". Thesis, Blekinge Tekniska Högskola, Institutionen för tillämpad signalbehandling, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-11644.

Full text
Abstract
The quantity and intricacy of features implemented in vehicles have expanded rapidly over the past few years. The vision of the autonomous vehicle is no longer a dream or a science-fiction movie, but a coming reality. In order to reach better quality and high safety, advanced verification techniques are required. Simulink Design Verifier is a model-checking tool based on formal verification, which can be used effectively to solve problems concerning error detection and testing at earlier stages of a project. The transformation of requirements written in traditional form into Simulink Design Verifier objectives can be time consuming, as well as requiring knowledge of the system model and the verification tools. In order to reduce time consumption and to guide a user through the system model and the verification tool, a semi-automated framework has been developed. An implementation of restricted English grammar patterns as Simulink objects helps describe the patterns to engineers and reduces time consumption. The developed framework is flexible and intuitive, and hence could be a solution for other branches of industry, but further tests and verification would be required. This thesis highlights the whole process of transforming system requirements written in natural language into Simulink Design Verifier objectives. The Fuel Level Display System model, currently used by almost all Scania vehicles, is analysed. Limitations and errors encountered during the development process, such as the flexibility of Simulink Design Verifier in capturing requirements, the behaviour of the patterns, and the ambiguity of system requirements, are analysed and described in this thesis.
26

Bauer, David Allen. "Preserving privacy with user-controlled sharing of verified information". Diss., Atlanta, Ga. : Georgia Institute of Technology, 2009. http://hdl.handle.net/1853/31676.

Full text
Abstract
Thesis (Ph.D)--Electrical and Computer Engineering, Georgia Institute of Technology, 2010.
Committee Chair: Blough, Douglas; Committee Member: Ahamad, Mustaque; Committee Member: Liu, Ling; Committee Member: Riley, George; Committee Member: Yalamanchili, Sudha. Part of the SMARTech Electronic Thesis and Dissertation Collection.
27

Doyon, Stéphane. "On the security of Java, the Java bytecode verifier". Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape7/PQDD_0004/MQ41890.pdf.

Full text
28

Larsson, Erik and Carl Svensson. "El Gamal Mix-Nets and Implementation of a Verifier". Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-142608.

Full text
Abstract
A mix-net is a cryptographic protocol based on public-key cryptography which enables untraceable communication through a collection of nodes. One important application is electronic voting, where it enables the construction of systems which satisfy many voting security requirements, including verifiability of correct execution. Verificatum is an implementation of a mix-net by Douglas Wikström. This report concerns the implementation of a verifier and an evaluation of the implementation manual for the Verificatum mix-net. The purpose of the document is to enable third parties to convince themselves that the mix-net has behaved correctly without revealing any secret information. This implementation is a simple version of the verifier using the document and some test vectors generated by the mix-net. The document contains all the necessary information, but there are still some possibilities for further clarification in order to make it comprehensible to a larger audience.
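The re-encryption step at the heart of an El Gamal mix-net can be shown with toy, insecure parameters (Verificatum's actual group sizes, shuffling and zero-knowledge proofs of correct shuffle are far more involved):

```python
import random

# Tiny, insecure demo group: order-q subgroup mod the safe prime p = 2q + 1.
p, q = 467, 233
g = 4                                   # generator of the order-q subgroup

sk = random.randrange(1, q)
h = pow(g, sk, p)                       # public key

def enc(m, r):
    return (pow(g, r, p), m * pow(h, r, p) % p)

def reenc(c, s):
    """Re-randomise a ciphertext without knowing m or sk:
    (a, b) -> (a*g^s, b*h^s) encrypts the same plaintext."""
    a, b = c
    return (a * pow(g, s, p) % p, b * pow(h, s, p) % p)

def dec(c):
    a, b = c
    return b * pow(a, q - sk, p) % p    # a^(q-sk) = a^(-sk) in the subgroup

m = pow(g, 5, p)                        # message encoded into the subgroup
c1 = enc(m, random.randrange(1, q))
c2 = reenc(c1, random.randrange(1, q))  # what each mix node does, plus shuffling
print(c1 != c2, dec(c2) == m)           # ciphertext changed, plaintext preserved
```

A verifier's job is to check, from published commitments and proofs alone, that each node's output is a permutation of re-encryptions of its input, without learning the permutation or the randomness.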
29

Baird, Mark E. "Towards a verified mechanistic model of plankton population dynamics". Thesis, University of Warwick, 1999. http://wrap.warwick.ac.uk/1123/.

Full text
Abstract
Plankton are a significant component of the biogeochemical cycles that impact on the global climate. Plankton ecosystems constitute around 40% of the annual global primary productivity, and the sinking of plankton to the deep ocean (the so-called biological pump) is the largest permanent loss of carbon from the coupled atmosphere-surface ocean-land system. The biological pump need only increase by 25% to cancel the anthropogenically released flux of CO2 into the atmosphere. Mechanistic models of atmosphere-ocean dynamics have proved to have superior predictive capabilities for climate phenomena, such as El Niño, than empirical models. Mechanistic models are based on fundamental laws describing the underlying processes controlling a particular system. Existing plankton population models are primarily empirical, raising doubts about their ability to forecast the behaviour of the plankton system, especially in an altered global climate. This thesis works towards a mechanistic model of plankton population dynamics based primarily on physical laws, and using laboratory-determined parameters. The processes modelled include: diffusion and convection to the cell surface, light capture by photosynthetic pigments, sinking, and encounter rates of predators and prey. The growth of phytoplankton cells is modelled by analogy to chemical kinetics. The equations describing each process are verified by comparison to existing laboratory experiments. Process-based model verification is proposed as a superior diagnostic tool for model validation compared with verification based on the changing state of the system over time. To increase our ability to undertake process-based verification, a model of stable isotope fractionation during phytoplankton growth is developed and tested. The developed model has been written to complement other process-based models of biogeochemical cycles. A suite of process-based biogeochemical models, coupled to an atmosphere-ocean circulation model, will have superior predictive capabilities compared with present global climate models.
30

Kriener, Jael Elisabeth. "Towards a verified determinacy analysis for Prolog including cut". Thesis, University of Kent, 2014. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.655653.

Full text
Abstract
The question of determinacy is constantly on the mind of a good Prolog programmer. To write correct, efficient programs, it is extremely important to understand how many different answers a goal will compute and whether it will compute the same answer more than once. This thesis presents a backward determinacy analysis that derives conditions on the parameters of a predicate which guarantee that a call will be deterministic. The virtue of a backward analysis as compared to a forward checking of determinacy of a whole predicate lies in its generality. It can determine that some classes of calls to a predicate are deterministic, even though the predicate is not deterministic in general. The backward analysis presented here advances the state of the art in its treatment of cut; it takes into account the fact that cut is used to make programs more deterministic, by allowing determinacy conditions to be weakened by the occurrence of a cut. This backward analysis is an instance of a style of program analysis based on semantic reasoning and abstract interpretation, which can benefit from formal verification. Indeed, there has been much work in recent years on verifying such analyses of imperative and functional programs. However, in the area of logic program analysis, the amount of work on formal verification in semantic analysis has been modest. This thesis aims at filling this gap by applying recent advances in verification to Prolog analysis. To this end, Chapter 2 presents Coq formalisations of the mathematics underlying fixpoint semantics in Prolog. Chapter 3 presents formalisations in Coq of different styles of semantics for cut-free Prolog, and of relative correctness proofs between them. It thus demonstrates the applicability of verification in general, and Coq in particular, to the task of proving semantic analyses of Prolog correct. These formalisations form the basis for the formal definition and correctness argument of the backward analysis. To argue correctness of the backward determinacy analysis, Chapter 4 presents a denotational semantics for Prolog with cut, again formally defined in Coq. It uses techniques of normalisation and stratification to treat the cut in a uniform, contextual fashion. The result is a semantics which is amenable to abstraction, and which allows reasoning about cut to be disentangled from reasoning about divergence, which previous denotational semantics do not facilitate. This semantics is intended to form the basis of a verified proof of semantic correctness of the determinacy analysis. Finally, Chapter 6 addresses the problem of computational efficiency, and specifically an operator called 'mutual exclusion', which is at the heart of any determinacy inference and, initially, constitutes a computational bottleneck. The insight in improving the efficiency of computing mutual exclusion is that it is closely related to Craig interpolation. Interpolants, and in particular 'good' interpolants, are notoriously hard to compute. In the context of determinacy analysis, 'good' means 'weak' or 'less constraining'; furthermore, in this context it is reasonable to work with a size abstraction that produces linear constraints, which disambiguate separate answers to a goal. Chapter 6 shows how to apply a highly optimised linear solver to efficiently compute very weak (though not always weakest) interpolants between sets of linear constraints. It thus lays the foundation for an efficient implementation of the analysis, by showing how mutual exclusion can be formulated in terms of interpolants.
31

Lu, Weiyun. "Formally Verified Code Obfuscation in the Coq Proof Assistant". Thesis, Université d'Ottawa / University of Ottawa, 2019. http://hdl.handle.net/10393/39994.

Full text
Abstract
Code obfuscation is a software security technique where transformations are applied to source and/or machine code to make them more difficult to analyze and understand, to deter reverse-engineering and tampering. However, in many commercial tools, such as Irdeto's Cloakware product, it is not clear why the end user should believe that the programs that come out the other end are still "the same program"! In this thesis, we apply techniques of formal specification and verification, using the Coq Proof Assistant and IMP (a simple imperative language within it), to formulate what it means for a program's semantics to be preserved by an obfuscating transformation, and give formal machine-checked proofs that these properties hold. We describe our work on opaque predicate and control flow flattening transformations. Along the way, we also employ Hoare logic as an alternative to state equivalence, as well as augment the IMP language with Switch statements. We also define a lower-level flowchart language to wrap around IMP for modelling certain flattening transformations, treating blocks of code as objects in their own right. We then discuss related work in the literature on formal verification of data obfuscation and layout obfuscation transformations in IMP, and conclude by discussing CompCert, a formally verified C compiler in Coq, along with work that has been done on obfuscation there, and muse on the possibility of implementing formal methods in the next generation of real-world obfuscation tools.
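Control-flow flattening, one of the transformations verified in the thesis, rewrites structured control flow into a dispatch loop over numbered blocks; a before/after sketch on a toy function (in Python here, whereas the thesis performs the transformation on IMP inside Coq, with a proof that semantics are preserved):

```python
def gcd_plain(a, b):
    while b != 0:
        a, b = b, a % b
    return a

def gcd_flattened(a, b):
    # The same function with its control flow flattened into a state
    # machine: every block returns to one central dispatcher, hiding
    # the original loop structure from static analysis.
    state = 0
    while True:
        if state == 0:            # block 0: loop test
            state = 1 if b != 0 else 2
        elif state == 1:          # block 1: loop body
            a, b = b, a % b
            state = 0
        else:                     # block 2: exit
            return a

assert gcd_plain(252, 105) == gcd_flattened(252, 105) == 21
```

The formal question the thesis answers is precisely that the flattened program computes the same results as the original on every input, stated and proved once and for all rather than tested case by case.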
32

Zinzindohoué-Marsaudon, Jean-Karim. "Secure, fast and verified cryptographic applications : a scalable approach". Thesis, Paris Sciences et Lettres (ComUE), 2018. http://www.theses.fr/2018PSLEE052/document.

Full text
Abstract
The security of Internet applications relies crucially on the secure design and robust implementation of cryptographic algorithms and protocols. This thesis presents a new, scalable and extensible approach for verifying state-of-the-art bignum algorithms, found in popular cryptographic implementations. Our code and proofs are written in F∗, a proof-oriented language which offers a very rich and expressive type system. The natural way of writing and verifying higher-order functional code in F∗ prioritizes code sharing and proof composition, but this results in low performance for cryptographic code. We propose a new language, Low∗, a fragment of F∗ which can be seen as a shallow embedding of C in F∗ and safely compiled to C code. Nonetheless, Low∗ retains the full expressiveness and verification power of the F∗ system, at the specification and proof level. We use Low∗ to implement cryptographic code, incorporating state-of-the-art optimizations from existing C libraries. We use F∗ to verify this code for functional correctness, memory safety and secret independence. We present HACL∗, a full-fledged and fully verified cryptographic library which boasts performance on par with, if not better than, the reference C code. Several algorithms from HACL∗ are now part of NSS, Mozilla's cryptographic library, notably used in the Firefox web browser and the Red Hat operating system. Finally, we apply our techniques to miTLS, a verified implementation of the Transport Layer Security protocol. We show how they extend to cryptographic proofs, state-machine implementations and message-parsing verification.
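"Secret independence" means that branches and memory accesses never depend on secret data; the textbook example is constant-time comparison of MACs, sketched below (illustrative only: HACL∗ enforces the property by typing in Low∗, and Python gives no real timing guarantees):

```python
def ct_equal(x: bytes, y: bytes) -> bool:
    """Compare two byte strings without an early exit, so the running
    time does not reveal the position of the first mismatching byte."""
    if len(x) != len(y):        # lengths are usually public information
        return False
    diff = 0
    for a, b in zip(x, y):
        diff |= a ^ b           # accumulate differences; never branch on secrets
    return diff == 0

# A naive x == y may return as soon as one byte differs, leaking timing.
print(ct_equal(b"secret-mac", b"secret-mac"),
      ct_equal(b"secret-mac", b"secret-mad"))
```

In the verified setting this discipline is not a coding convention but a property checked by the type system over all secret inputs.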
33

Schumacher, Kash Tucker. "What is the value of a Health Verified Program". Thesis, Kansas State University, 2011. http://hdl.handle.net/2097/11930.

Full text
Abstract
Master of Agribusiness
Department of Agricultural Economics
Ted C. Schroeder
The beef cattle industry is one of the last industries in production agriculture that is not heavily integrated. Therefore, each segment of the industry is constantly looking for opportunities to increase the value of its cattle. In recent years, one of those opportunities available to cow-calf producers was verification of certain production practices (i.e. Age and Source, Natural, and Non-Hormone Treated). The value flows from the consumer to the cow-calf producer: packers need these verified cattle to fill export contracts, and therefore they are willing to pay a premium for these types of cattle. The objective of the thesis was to determine the value of a Health Verified Program (HPV) to feedlot operators. HPV is not required to export beef like other verified programs, but it does verify the procedures that a group of calves has received from the previous owner. Since the feedlot is a deciding factor of value for HPV, feedlot managers from across the United States were asked not only what value they place on HPV but also other questions that could be beneficial to others involved in the beef cattle industry. Regression models were used along with a correlation analysis to determine value. There is value to a health verified program, along with other procedures that are available to cow-calf producers. Individual producers need to determine which verifications and procedures are economical and efficient for their individual operations, with all factors considered.
34

Setoaba, Mabule. "A comparative study between Altman Z-Score and verifier". Diss., University of Pretoria, 2017. http://hdl.handle.net/2263/59769.

Full text
Abstract
The identification of reliable early warning signs, encompassing qualitative and quantitative inputs, for business distress and failure prediction could reduce the incidence of business failure if companies take corrective action early enough as the signals of distress emerge. The concept of verifier determinants as early warning signs of business failure and distress, as introduced by Holtzhauzen & Pretorius (2013), has largely been theoretical and unexamined in terms of the methodology's ability to identify business distress. The performance of the model is tested against the well-established Altman Z-Score prediction model. This study tests the consistency of the classification of companies as failing, grey and non-failing by applying the Altman Z-Score model and the verifier determinants theory to a sample of 38 JSE-listed companies. Nineteen suspended companies were selected and matched with another 19 companies of similar size operating in the same industries. The consistency of the classifications was tested via a simple measure of percentage agreement using a cross-tabulation; a Cohen's kappa coefficient was then applied to test for agreement over and above agreement by chance. The study further applied a Spearman correlation coefficient to determine the level of association between the results produced by the two models. The findings of the study indicate a statistically significant association between the Altman Z-Score and the aggregate score of default as calculated through the application of verifier determinants theory. The study further identifies two verifier determinants, (i) late submission of financial information and (ii) underutilisation of assets, which have the strongest association with the Altman model and the overall aggregate score of default. We argue that these individual verifier determinants could be used as a proxy for the overall model to monitor the risk of company distress.
Mini Dissertation (MBA)--University of Pretoria, 2017.
Gordon Institute of Business Science (GIBS)
MBA
Unrestricted
Los estilos APA, Harvard, Vancouver, ISO, etc.
35

Visser, Schalk W. J. (Schalk Willem Jacobus). "Data capturing system using cellular phone, verified against propagation models". Thesis, Stellenbosch : University of Stellenbosch, 2004. http://hdl.handle.net/10019.1/16462.

Texto completo
Resumen
Thesis (MScIng)--University of Stellenbosch, 2004.
ENGLISH ABSTRACT: Data capturing equipment is an expensive part of testing the coverage of a deployed or planned wireless service. This thesis presents the development of a data capturing system that makes use of 1800 MHz GSM base stations as transmitters and a mobile phone connected to a laptop as the receiver. The measurements taken are then verified against known propagation models. AFRIKAANS ABSTRACT (translated): Data capturing equipment used to test the coverage of wireless systems is very expensive and difficult to obtain. This thesis describes the development of such a data capturing system that is much cheaper and easy to use. It makes use of a cellular phone and GPS connected to a laptop, which acts as the receiver. Cell C base stations are used as the transmitters. The measured data is then verified against existing radio-frequency propagation models.
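The abstract does not say which propagation models the measurements were checked against; one model routinely used for 1800 MHz GSM planning is COST-231 Hata, sketched below purely as an assumption for illustration:

```go
package main

import (
	"fmt"
	"math"
)

// cost231Hata returns the median path loss in dB for an urban macro-cell.
// f: frequency in MHz (the model is valid for 1500-2000 MHz, covering
// GSM 1800), hb: base-station antenna height in m, hm: mobile antenna
// height in m, d: distance in km, c: 0 dB for small/medium cities,
// 3 dB for metropolitan centres.
func cost231Hata(f, hb, hm, d, c float64) float64 {
	// Mobile-antenna correction term for small/medium cities.
	ahm := (1.1*math.Log10(f)-0.7)*hm - (1.56*math.Log10(f) - 0.8)
	return 46.3 + 33.9*math.Log10(f) - 13.82*math.Log10(hb) - ahm +
		(44.9-6.55*math.Log10(hb))*math.Log10(d) + c
}

func main() {
	// Example: 1800 MHz cell, 30 m mast, 1.5 m handset, 2 km distance.
	fmt.Printf("path loss: %.1f dB\n", cost231Hata(1800, 30, 1.5, 2, 0))
}
```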
Los estilos APA, Harvard, Vancouver, ISO, etc.
36

Steenson, Leo V. "Experimentally verified model predictive control of a hover-capable AUV". Thesis, University of Southampton, 2013. https://eprints.soton.ac.uk/355697/.

Texto completo
Resumen
This work presents the development of control systems that enable a hover-capable AUV to operate throughout a wide speed range. The Delphin2 AUV was built as part of this project and is used to experimentally verify the prototype control systems. This vehicle is over-actuated, with four through-body tunnel thrusters, four independently actuated control surfaces and a rear propeller. The large actuator set allows the Delphin2 to operate at low speeds, using the through-body tunnel thrusters, and at high speeds, using the rear propeller and control surfaces. There lies a region between low and high speed where neither the control surfaces nor the tunnel thrusters are operating optimally; to maintain depth stability, both actuator sets are required to operate simultaneously. The model predictive control (MPC) algorithm is used to control the vehicle, given its ability to handle multiple inputs and outputs along with system uncertainties. The basis of MPC is a mathematical model of the system to be controlled. Several experiments were conducted with the Delphin2 AUV to acquire the data necessary to develop this model: bollard pull tests were used to measure thruster performance, whilst wind-tunnel and open-water experiments provided a measure of the control surface, hull and propeller performance. Depth control is the primary focus of this thesis; however, pitch and surge control are also addressed. Three controllers of increasing complexity are developed in this work: a depth and pitch controller for low-speed operations, a depth and surge velocity controller for medium- to high-speed operation, and finally a depth and surge velocity controller for operation from low to high speeds. All three controllers are multi-input multi-output (MIMO) and use the MPC algorithm. Input constraints are imposed on both the absolute limits and the rate-of-change limits. Simulations are performed to aid in the design of each controller before it is implemented on the Delphin2 AUV and experimentally verified. The depth and pitch controller, developed for low-speed operation, uses the front and rear vertical thrusters as the system inputs. This case demonstrates the implementation of the MPC algorithm and studies the effects of the various tuning parameters. A model sensitivity study is performed, showing that the controller can handle modelling errors of up to ±30%. The controller is experimentally tested and shows excellent performance with zero steady-state errors, although there is an undesirably large overshoot of the depth demand. The simulation and experimental results match closely. The depth and surge controller uses the control surfaces and rear propeller as system inputs. Many of the forces and moments within this system are non-linear functions of the vehicle's surge velocity. Therefore the standard MPC algorithm, which utilises just one linearised model, would not be sufficient to capture the system dynamics of the vehicle throughout the full operational envelope. A time-variant MPC (TV-MPC) algorithm is developed and shown in simulation to have excellent performance. The controller did not perform as well when tested experimentally; however, depth regulation of ±0.3 m was achieved. This degradation in performance is due to inaccuracies in the estimation of the vehicle's surge velocity. The final controller is also a depth and surge velocity controller; however, it is tasked with maintaining stability throughout the full speed range of the vehicle. All of the system inputs used for depth control are utilised by this controller: the two vertical through-body tunnel thrusters, the horizontal control surfaces and the rear propeller. The design of the controller makes use of the TV-MPC algorithm. To improve system performance, a modification was made to the controller's cost function, used within the optimisation process, to penalise the use of the thrusters at high speeds. This enables the controller to use the thrusters at low speeds, when performing close-range inspections; then, as surge velocity increases and the thrusters are no longer required, they are switched off. Both simulation and experimental results show excellent performance, although when the thrusters switch off, the depth control is similar to that of the previous controller due to poor surge velocity estimation.
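A minimal sketch of the cost-function modification described above, assuming a quadratic stage cost whose thruster weight grows with surge velocity; the functional form, gains, and names are hypothetical illustrations of the penalty, not the controller's actual cost function:

```go
package main

import "fmt"

// thrusterWeight returns a surge-velocity-dependent penalty weight on
// thruster effort: cheap near hover, increasingly expensive as speed
// rises, so the optimiser naturally phases the thrusters out at speed.
// (Hypothetical form and gains; not the thesis's actual tuning.)
func thrusterWeight(u, base, gain float64) float64 {
	return base + gain*u*u
}

// stageCost is one horizon term of an illustrative MPC objective:
// quadratic depth-tracking error plus velocity-weighted thruster effort
// plus control-surface effort.
func stageCost(depthErr, thruster, fin, u float64) float64 {
	const qDepth, rFin = 10.0, 0.5 // illustrative weights
	rThr := thrusterWeight(u, 0.1, 5.0)
	return qDepth*depthErr*depthErr + rThr*thruster*thruster + rFin*fin*fin
}

func main() {
	// The same thruster demand costs more as surge velocity grows.
	for _, u := range []float64{0.0, 0.5, 1.0} {
		fmt.Printf("u = %.1f m/s: stage cost = %.3f\n",
			u, stageCost(0.2, 0.3, 0.1, u))
	}
}
```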
Los estilos APA, Harvard, Vancouver, ISO, etc.
37

Baklanova, Nadezhda. "Formally verified analysis of resource sharing conflicts in multithreaded Java". Toulouse 3, 2014. http://thesesups.ups-tlse.fr/2498/.

Texto completo
Resumen
Multithreaded real-time systems are widespread nowadays. Correctness is critical for real-time applications, but it is difficult to ensure by usual methods like testing; formal verification helps to find possible errors. One kind of error is resource sharing conflicts, which lead to data corruption. A common solution is exclusive locking, which can lead to unpredictable delays in execution or even deadlocks in the worst case. Program verification is often done by model checking. A popular model checking formalism for real-time programs is timed automata, which allows certain timing properties to be verified in a model of a program and a sequence of actions leading to an error to be found. Effective verification algorithms exist for timed automata and are implemented in widely used model checking tools. We have developed a tool for static analysis of multithreaded Java programs which finds possible resource sharing conflicts. Java programs are annotated with timing information, and a model of the program is built based on the annotations. The model is a system of timed automata which is verified by the Uppaal model checker, and possible resource sharing conflicts are found. A case study has been developed to illustrate the approach. The analysis is complete: whenever a resource sharing conflict can occur in a Java program, it is detected by our analysis. The abstract model may also output "false positive" warnings which do not correspond to a reachable configuration in the source Java program. In order to make sure that the abstraction of Java programs to timed automata is correct, we have formalised the translation in the Isabelle proof assistant and proved that it preserves the correspondence between a program and its model. For this, we have developed a formal semantics both of multithreaded Java with annotations and of timed automata. The proofs show that the model simulates the behaviour of the source Java program: each semantic step made in the Java code has a corresponding sequence of steps in the model which has the same effect on the state, i.e. variable values, time or locked objects. Verified translation code is generated from the formalised translation using the Isabelle code generator; our tool then uses this verified code to generate the model of a Java program.
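As a loose intuition for the conflicts being hunted, the sketch below flags accesses by different threads to the same resource whose annotated time windows may overlap; the actual analysis builds timed automata from the annotations and lets Uppaal do the exploration, so the types and integer windows here are illustrative assumptions only:

```go
package main

import "fmt"

// access is an annotated use of a shared resource: which thread touches
// which resource, and the abstract time window derived from the timing
// annotations.
type access struct {
	thread, resource string
	start, end       int
}

// overlap reports whether two time windows can coincide.
func overlap(a, b access) bool {
	return a.start < b.end && b.start < a.end
}

// conflicts returns pairs of accesses by different threads to the same
// resource whose annotated windows may overlap.
func conflicts(as []access) [][2]access {
	var out [][2]access
	for i := 0; i < len(as); i++ {
		for j := i + 1; j < len(as); j++ {
			if as[i].thread != as[j].thread &&
				as[i].resource == as[j].resource &&
				overlap(as[i], as[j]) {
				out = append(out, [2]access{as[i], as[j]})
			}
		}
	}
	return out
}

func main() {
	fmt.Println(conflicts([]access{
		{"t1", "buffer", 0, 5},
		{"t2", "buffer", 3, 8}, // overlaps t1's window: flagged
	}))
}
```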
Los estilos APA, Harvard, Vancouver, ISO, etc.
38

Rodriguez-Velez, Ayshka Elise. "Power Mobility Sensor Data Collection Verified through Standardized Pediatric Assessments". UNF Digital Commons, 2018. https://digitalcommons.unf.edu/etd/828.

Texto completo
Resumen
The collaboration between the School of Engineering and the Department of Physical Therapy at the University of North Florida has introduced the possibility of creating a new environment for pediatric physical therapy assessments. There are currently no methods for remotely monitoring children with impairments. However, with embedded sensor technology in the form of power mobility and accepted therapy assessment tools, remote monitoring can become a possibility. As a part of this work, a ride-on toy car was developed as a remote monitoring device and a case study with a child with a mobility impairment was used as a proof of concept. In this thesis, the background information on the project, the case study diagnosis and history, and the model used to develop this project are detailed.
Los estilos APA, Harvard, Vancouver, ISO, etc.
39

St-Martin, Michel. "A Verified Algorithm for Detecting Conflicts in XACML Access Control Rules". Thèse, Université d'Ottawa / University of Ottawa, 2012. http://hdl.handle.net/10393/20539.

Texto completo
Resumen
The goal of this thesis is to find provably correct methods for detecting conflicts between XACML rules. A conflict occurs when one rule permits a request and another denies that same request. As XACML deals with access control, we can help prevent unwanted access by verifying that it contains rules that do not have unintended conflicts. To help with this, we propose an algorithm to find these conflicts, then use the Coq Proof Assistant to prove the correctness of this algorithm. The algorithm takes a rule set specified in XACML and returns a list of pairs of indices denoting which rules conflict. It is then up to the policy writer to see if the conflicts are intended, or if they need modifying. Since we prove that this algorithm is sound and complete, we can be assured that the list we obtain is complete and only contains true conflicts.
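A minimal sketch of the pairwise check the abstract describes, under a radically simplified rule representation (targets as attribute-to-value-set maps, which is an assumption; real XACML targets are richer): two rules conflict when their effects differ and their targets can both match some request, and the output is a list of index pairs, as in the thesis.

```go
package main

import "fmt"

// rule is a radically simplified XACML rule: an effect plus a target,
// where the target maps attribute names to the set of accepted values.
type rule struct {
	effect string // "Permit" or "Deny"
	target map[string][]string
}

// intersects reports whether two targets can match a common request:
// for every attribute constrained by both rules, the value sets overlap.
func intersects(a, b map[string][]string) bool {
	for k, avs := range a {
		bvs, ok := b[k]
		if !ok {
			continue // attribute unconstrained by b: no restriction
		}
		shared := false
		for _, av := range avs {
			for _, bv := range bvs {
				if av == bv {
					shared = true
				}
			}
		}
		if !shared {
			return false
		}
	}
	return true
}

// conflicts returns index pairs of rules with opposite effects whose
// targets can both match the same request.
func conflicts(rs []rule) [][2]int {
	var out [][2]int
	for i := range rs {
		for j := i + 1; j < len(rs); j++ {
			if rs[i].effect != rs[j].effect && intersects(rs[i].target, rs[j].target) {
				out = append(out, [2]int{i, j})
			}
		}
	}
	return out
}

func main() {
	rs := []rule{
		{"Permit", map[string][]string{"role": {"doctor", "nurse"}}},
		{"Deny", map[string][]string{"role": {"nurse"}}},
	}
	fmt.Println(conflicts(rs)) // [[0 1]]: both can match role=nurse
}
```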
Los estilos APA, Harvard, Vancouver, ISO, etc.
40

Mavalankar, Vikram. "Extensions and an explanation module for the iRODS rule oriented verifier". Connect to a 24 p. preview or request complete full text in PDF format. Access restricted to UC campuses, 2008. http://wwwlib.umi.com/cr/ucsd/fullcit?p1447797.

Texto completo
Resumen
Thesis (M.S.)--University of California, San Diego, 2008.
Title from first page of PDF file (viewed February 5, 2008). Available via ProQuest Digital Dissertations. Includes bibliographical references (p. 65-66).
Los estilos APA, Harvard, Vancouver, ISO, etc.
41

Hölle, Stefan [Verfasser]. "Numerical MMATh: Verified essential algorithms for solving differential equations / Stefan Hölle". Konstanz : KOPS Universität Konstanz, 2019. http://d-nb.info/1203067925/34.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
42

Bailey, J. S. L. "Experimentally verified fluid loading models for slender horizontal cylinders in waves". Thesis, University of Sussex, 2000. http://sro.sussex.ac.uk/id/eprint/737/.

Texto completo
Resumen
This thesis reports on research work aimed at improving methods for predicting the fluid loading on fixed and compliant offshore structures in waves. In focusing on slender-member fluid-interaction models, the limitations and uncertainties associated with the widely used Morison equation are examined. An improved empirical model has been developed and tested extensively alongside the Morison equation, using real experimental data. This improved model gives a better representation of the frequency dependency of the fluid-loading coefficients; this is particularly important in compliant motion conditions, where the so-called relative velocity concept still needs to be verified under carefully controlled experimental conditions. The model is based entirely on the use of linear wave kinematics, thus simplifying calibration in irregular conditions and avoiding the need for a consistent non-linear wave theory (which is still lacking). By appropriate adaptation the improved model can also be extended to include amplitude dependency in the loading coefficients. The improved model has been developed through an analysis of experimental data. For this purpose the experimental work was focused on a horizontal cylinder, at model scale, located in a wave tank at the University of Sussex. The fluid loading experienced by a fixed cylinder, in both regular and irregular wave conditions, was measured and examined in detail. In addition, a comprehensive study of the loading on compliant cylinders, in both regular and irregular waves, was undertaken. Extensive use was made of appropriate parameter estimation techniques, with initial attention (using simulated data) given to their accuracy for use with noisy experimental measurements. The effects of subtle (but undesirable) tank characteristics were also carefully taken into account. The study shows that, for fixed horizontal cylinders, clear benefits can be identified in using the improved model, with frequency-dependent coefficients, over the frequency-dependent Morison equation. Moreover, the study shows that the relative velocity concept is more appropriate for use with the improved model than with the Morison model.
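For reference, the Morison equation examined by the thesis gives the in-line force per unit length on a slender cylinder of diameter D as an inertia term plus a drag term:

```latex
\[
F(t) = \rho\, C_m \frac{\pi D^2}{4}\, \dot{u}(t)
     + \tfrac{1}{2}\, \rho\, C_d\, D\, u(t)\,\lvert u(t)\rvert
\]
% u is the water-particle velocity normal to the cylinder axis, rho the
% water density, C_m the inertia coefficient and C_d the drag coefficient;
% the thesis's improved model makes C_m and C_d frequency dependent.
```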
Los estilos APA, Harvard, Vancouver, ISO, etc.
43

Ioannidis, Eleftherios Ioannis. "Extracting and optimizing low-level bytecode from high-level verified Coq". Thesis, Massachusetts Institute of Technology, 2019. https://hdl.handle.net/1721.1/121675.

Texto completo
Resumen
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 51-53).
This document is an MEng thesis presenting MCQC, a compiler for extracting verified systems programs to low-level assembly, with no runtime or garbage collection requirements and an emphasis on performance. MCQC targets the Gallina functional language used in the Coq proof assistant. MCQC translates pure and recursive functions into C++17, while compiling monadic effectful functions to imperative C++ system calls. With a series of memory and performance optimizations, MCQC combines verifiability with memory and runtime performance. By handling both effectful and pure functions, MCQC can generate executable code directly from Gallina and link it with trusted code, reducing the effort of implementing and executing verified systems.
by Eleftherios Ioannidis.
Los estilos APA, Harvard, Vancouver, ISO, etc.
44

Merten, Samuel A. "A Verified Program for the Enumeration of All Maximal Independent Sets". Ohio University / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1479829000576398.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
45

Holtzhauzen, G. T. D. (Gerhardus Theodoris Daniel). "Modeling business turnaround strategies using verifier determinants from early warning signs theory". Thesis, University of Pretoria, 2011. http://hdl.handle.net/2263/28693.

Texto completo
Resumen
The management dilemma emanates from the inadequate and weakly detailed turnaround models available for use by entrepreneurs and turnaround practitioners in South Africa. Adding to this problem, previous legislation did not provide any protection to the debtor during turnaround attempts; new debtor-friendly legislation comes into effect in 2011. This research aims to identify the verifiers for signs and causes of potential failure. The construct "verifier determinant" is theoretically defined and included in a practical turnaround framework. The primary objectives of the study are to:
  • Identify and theoretically define early warning sign “verifier determinants”
  • Design and include “verifier determinants” as an integral part of a turnaround plan that supports corrective action.
The secondary objectives of this study are to:
  • Research the current formal turnaround practices applied in the United States of America, Canada, Australia and Africa, and the informal practices evident in South Africa. These findings are aligned to include the changes in the applicable South African legislation.
  • Design and propose a framework for use by turnaround practitioners and entrepreneurs alike (conforming to new legislation).
  • Identify which “verifier determinants” will confirm the early warning and apply this outcome to the design of a reliable turnaround framework, acceptable to all creditors and financial institutions.
  • The final objective is to contribute to the South African entrepreneurial and turnaround body of knowledge, and to future formal studies in this academically under-represented field.
The effectiveness of a business turnaround depends on the chosen strategy. The literature review in this proposal deals with the following aspects: venture risk propensity, early warning signs and failure models, legal constraints and opportunities, and finally turnaround. Current formal turnaround routes are, due to various drawbacks and high costs, often not practical, and a more informal approach is favoured. Methodology:
  • Through comprehensive literature research, identify and theoretically define “verifier determinants” that confirm the early warning signs and causes. Apply in-depth interviews to identify the use of verifier determinants by specialist turnaround practitioners.
  • Confirm the actual use and value of the verifier determinants by experts and practitioners during turnarounds. Design and include “verifier determinants” as an integral part of a turnaround framework that supports rehabilitation of the business.
  • Compare the formal turnaround practices applied in other jurisdictions such as the United States of America, Canada, Australia and elsewhere in Africa.
  • Adapt the framework, cognisant of the requirements of Chapter 6 of the Companies Act, Act 71 of 2008, and recommend it for the formal and informal turnaround practices relevant in South Africa.
For this study, a leading commercial bank was selected as the organisation of choice, due to its accessibility of information, research data, and turnaround respondents. For selecting the case studies used for evaluation during interviews, the researcher relied on businesses that had already been subjected to BASEL II Accord categorisation criteria and had ex post facto histories. The study applied two research methods. An interview method was used to identify actual verifier determinants used in practice. The interrogation of the participants was done using the Repertory Grid method, thus forcing choices and explanations of interviewee reasoning. Participants were purposely selected to ensure representation within the identified risk categories. As a result, a comprehensive turnaround framework is compiled. The study aligns these findings with the new South African legislation and designs a turnaround framework for use by professional turnaround practitioners, entrepreneurs and affected persons alike. This study introduced a number of new constructs that can be used in a business turnaround context, namely:
  • business triage
  • verifier determinant
  • a turnaround framework, incorporating the constructs “business triage” and “verifier determinant”, with a timeline schedule for executing the rescue process
This study highlighted the importance of establishing the true value of a business in the early stages of the turnaround process. Verifiers can be used successfully to determine the extent of the problem (“depth of the rot”) and the difficulties involved, and to reduce the time required for analysis.
Thesis (PhD)--University of Pretoria, 2011.
Business Management
unrestricted
Los estilos APA, Harvard, Vancouver, ISO, etc.
46

Zrinzo, L. "MRI guided and MRI verified deep brain stimulation : accuracy, safety and efficacy". Thesis, University College London (University of London), 2011. http://discovery.ucl.ac.uk/1325642/.

Texto completo
Resumen
This thesis investigates a systematic approach to the use of MRI-guided and MRI-verified deep brain stimulation (DBS) in clinical practice. The concept of individual targeting of visualised brain structures without microelectrode recording (MER) was examined with respect to accuracy, safety and efficacy. Accurate MRI localisation of the pedunculopontine nucleus, an investigational new DBS target for parkinsonian gait disorders, is described, and proof of principle is confirmed in a cadaver study using MR microscopy and histological examination. The impact of surgical trajectory on stereotactic accuracy in routine clinical practice was examined at two centres using MER: trajectories involving the ventricle suffered from significantly greater targeting errors than those that did not (p<0.001), and multiple brain passes were more likely to be required to reach the intended target (p<0.01). Subcortical brain shift between pre- and postoperative stereotactic images was minimal after MRI-verified procedures without MER (136 procedures); the observed shift did not adversely affect targeting accuracy or clinical outcome. A simple calibration process improved mean targeting errors by 0.6 mm (p<0.001), to 0.9 ± 0.5 mm from the intended target point. A large patient series was compared to a systematic literature review to determine factors associated with haemorrhage; an image-guided and image-verified approach carried a significantly lower risk of haemorrhage and associated permanent deficit than other surgical methodologies (p=0.001). Another study confirmed that, when observing certain precautions, cranial MR images can be obtained at extremely low risk in patients with implanted DBS hardware. Outcome data from patients undergoing MRI-guided and MRI-verified surgery for Parkinson's disease and primary generalised dystonia compared favourably with reports from the literature. Mode of anaesthesia did not impact on surgical outcome. In conclusion, this thesis demonstrates that a meticulous approach to MRI-guided and MRI-verified DBS is safe and accurate, with clinical outcomes comparable to other techniques.
Los estilos APA, Harvard, Vancouver, ISO, etc.
47

Nguyen, Hong Diep. "Efficient algorithms for verified scientific computing : Numerical linear algebra using interval arithmetic". Phd thesis, Ecole normale supérieure de lyon - ENS LYON, 2011. http://tel.archives-ouvertes.fr/tel-00680352.

Texto completo
Resumen
Interval arithmetic is a means to compute verified results. However, a naive use of interval arithmetic does not provide accurate enclosures of the exact results, and interval computations can be time-consuming. We propose several accurate algorithms and efficient implementations for verified linear algebra using interval arithmetic. Two fundamental problems are addressed, namely the multiplication of interval matrices and the verification of a floating-point solution of a linear system. For the first problem, we propose two algorithms which offer new trade-offs between speed and accuracy. For the second problem, the verification of the solution of a linear system, our main contributions are twofold. First, we introduce a relaxation technique which drastically reduces the execution time of the algorithm. Second, we propose to use extended precision for a few, well-chosen parts of the computations, to gain accuracy without losing much in terms of execution time.
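A minimal sketch of the basic enclosure property interval arithmetic provides, with math.Nextafter standing in for the directed rounding a real verified library would obtain from the FPU rounding modes; the type and widening scheme are illustrative assumptions, not the thesis's algorithms:

```go
package main

import (
	"fmt"
	"math"
)

// interval is a closed interval [lo, hi] enclosing an unknown real.
type interval struct{ lo, hi float64 }

// down and up nudge one ulp toward -inf / +inf: a crude stand-in for
// the directed rounding a verified implementation would use.
func down(x float64) float64 { return math.Nextafter(x, math.Inf(-1)) }
func up(x float64) float64   { return math.Nextafter(x, math.Inf(1)) }

// mul returns an enclosure of the product of two intervals: the min and
// max over the four endpoint products, widened by one ulp on each side.
func mul(a, b interval) interval {
	ps := []float64{a.lo * b.lo, a.lo * b.hi, a.hi * b.lo, a.hi * b.hi}
	lo, hi := ps[0], ps[0]
	for _, p := range ps[1:] {
		lo, hi = math.Min(lo, p), math.Max(hi, p)
	}
	return interval{down(lo), up(hi)}
}

func main() {
	a := interval{1.0, 2.0}
	b := interval{-3.0, 0.5}
	fmt.Println(mul(a, b)) // encloses the exact range [-6, 1]
}
```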
Los estilos APA, Harvard, Vancouver, ISO, etc.
48

Leitner, Florian. "Evaluation of the Matlab Simulink Design Verifier versus the model checker SPIN". [S.l. : s.n.], 2008. http://nbn-resolving.de/urn:nbn:de:bsz:352-opus-61257.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
49

Dibley, James. "A development method for deriving reusable concurrent programs from verified CSP models". Thesis, Rhodes University, 2019. http://hdl.handle.net/10962/72329.

Texto completo
Resumen
This work proposes and demonstrates a novel method for software development that applies formal verification techniques to the design and implementation of concurrent programs. This method is supported by a new software tool, CSPIDER, which translates machine-readable Communicating Sequential Processes (CSP) models into encapsulated, reusable components coded in the Go programming language. In relation to existing CSP implementation techniques, this work is only the second to implement a translator, and it provides original support for some CSP language constructs and modelling approaches. The method is evaluated through three case studies: a concurrent sorting array, a trial-division prime number generator, and a component node for the Ricart-Agrawala distributed mutual exclusion algorithm. Each of these case studies presents the formal verification of safety and functional requirements through CSP model-checking, and it is shown that CSPIDER is capable of generating reusable implementations from each model. The Ricart-Agrawala case study demonstrates the application of the method to the design of a protocol component. The method maintains full compatibility with the primary CSP verification tool. Applying the CSPIDER tool requires minimal commitment to an explicitly defined modelling style and a very small set of pre-translation annotations, but all of these measures can be instated prior to verification. The Go code that CSPIDER produces requires no intervention before it may be used as a component within a larger development. The translator provides a traceable, structured implementation of the CSP model, automatically deriving formal parameters and a channel-based client interface from its interpretation of the CSP model. Each case study demonstrates the use of the translated component within a simple test development.
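To give a flavour of the target idiom, the sketch below hand-writes a channel-per-event Go pipeline for the trial-division case study; it is an illustration of CSP-style Go, not CSPIDER's actual generated code or interface:

```go
package main

import "fmt"

// filter is a process in the CSP style: a sieve stage that forwards only
// numbers not divisible by its own prime, communicating over channels.
// (Illustrative only; not CSPIDER's actual output format.)
func filter(prime int, in <-chan int, out chan<- int) {
	for n := range in {
		if n%prime != 0 {
			out <- n
		}
	}
	close(out)
}

func main() {
	// Trial-division prime generation as a pipeline of communicating
	// processes, echoing the thesis's second case study.
	nums := make(chan int)
	go func() {
		for i := 2; i <= 30; i++ {
			nums <- i
		}
		close(nums)
	}()
	in := nums
	for i := 0; i < 5; i++ {
		prime := <-in // first survivor of all earlier stages is prime
		fmt.Println(prime)
		out := make(chan int)
		go filter(prime, in, out)
		in = out
	}
}
```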
Los estilos APA, Harvard, Vancouver, ISO, etc.
50

Wagner, Gerd. "Vivid Agents: How They Deliberate, How they React, How They Are Verified". Universität Leipzig, 1996. https://ul.qucosa.de/id/qucosa%3A34510.

Texto completo
Resumen
We propose a model of an agent which is both logical and operational. Our model of vivid agents takes into account that agents need not only the ability to draw inferences but also to update their current knowledge state, to represent and to perform (and to simulate the execution of) actions in order to generate and execute plans, and to react and interact in response to perception and communication events. We illustrate our formalization of this basic functionality of an agent by means of examples. We also show how our model fits into the transition system semantics of concurrent reactive systems by identifying the five basic transitions of vivid agent systems: perception, reaction, planning, action, and replanning.
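As a loose sketch of the agent cycle the abstract describes, the Go program below threads the five transitions (perception, reaction, planning, action, replanning) through a toy state; every name and the plan contents are hypothetical illustrations, not the paper's formalism:

```go
package main

import "fmt"

// event and action are opaque stand-ins for perception/communication
// events and for the steps of a plan. (Illustrative types only.)
type event string
type action string

// agent holds a knowledge state, a pending-event queue and a current
// plan, mirroring the five transitions named in the abstract.
type agent struct {
	knowledge map[string]bool
	events    []event
	plan      []action
}

func (a *agent) perceive(e event) { a.events = append(a.events, e) } // perception

func (a *agent) step() {
	if len(a.events) > 0 { // reaction: consume an event, update knowledge
		e := a.events[0]
		a.events = a.events[1:]
		a.knowledge[string(e)] = true
		a.plan = nil // replanning: the world changed, discard the stale plan
	}
	if a.plan == nil { // planning: derive a new plan from current knowledge
		a.plan = []action{"move", "grasp"}
	}
	if len(a.plan) > 0 { // action: execute the next planned step
		fmt.Println("executing", a.plan[0])
		a.plan = a.plan[1:]
	}
}

func main() {
	ag := &agent{knowledge: map[string]bool{}}
	ag.perceive("obstacle")
	ag.step()
	ag.step()
}
```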
Los estilos APA, Harvard, Vancouver, ISO, etc.