
Dissertations / Theses on the topic 'Implementations'

Consult the top 50 dissertations / theses for your research on the topic 'Implementations.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

O'Rourke, Colleen Marie. "Efficient NTRU implementations." Link to electronic thesis, 2002. http://www.wpi.edu/Pubs/ETD/Available/etd-0430102-111906.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Lembo, Domenico. "Dealing with Inconsistency and Incompleteness in Data Integration." Doctoral thesis, La Sapienza, 2004. http://hdl.handle.net/11573/917064.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Khasian, Nooshin, and Sara Goodarzian. "OBDD–based Set Implementations." Thesis, Linnéuniversitetet, Institutionen för datavetenskap, fysik och matematik, DFM, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-21056.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Nemec, Matus <1992>. "Challenging RSA cryptosystem implementations." Doctoral thesis, Università Ca' Foscari Venezia, 2019. http://hdl.handle.net/10579/17849.

Full text
Abstract:
The aim of our research was to study security properties of real-world deployments of the RSA cryptosystem. First we analyze RSA key generation methods in cryptographic libraries. We show a practical application of biases in RSA keys for measuring the popularity of cryptographic libraries, and we develop a factorization method that breaks a proprietary key generation algorithm. Later we examine published implementation issues in the TLS protocol, such as RSA padding oracles, in the wider context of the Web ecosystem. Our work helps to demonstrate how RSA, a seemingly simple and intuitive cryptosystem, requires a lot of knowledge to be implemented correctly. Unlike RSA, elliptic curve cryptography (ECC) algorithms do not require padding, and parameters can be chosen such that random strings serve as keys. ECC is more resistant to bad user configurations and provides many other benefits. We conclude that practitioners should follow the example of TLS version 1.3 and stop using RSA in favor of ECC.
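To make the fingerprinting idea concrete, here is a minimal Python sketch (an editor's illustration only, not the thesis's classifier; the specific features and the 2048-bit assumption are invented for the example). Libraries leave statistical fingerprints in their moduli, so tabulating a few cheap features of N over a batch of keys can hint at which library produced them:

```python
# Illustrative sketch: bias-based fingerprinting of RSA moduli.
# The features below are simplified assumptions, not the thesis's feature set.
from collections import Counter

def modulus_features(n: int, bits: int = 2048):
    return (
        n >> (bits - 4),   # top 4 bits: some libraries force the MSBs
        n % 3,             # residues mod small primes can be biased
        n % 4,             # depends on how p and q are chosen mod 4
    )

def fingerprint(moduli):
    """Tabulate feature frequencies over a batch of keys from one source."""
    return Counter(modulus_features(n) for n in moduli)

# Usage: compare the histogram of an unknown batch against reference
# histograms collected from keys generated by known libraries.
```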
APA, Harvard, Vancouver, ISO, and other styles
5

Hiles, Charmelle Amanda. "Using experience from previous failed implementations to improve future lean implementation strategy." Thesis, Nelson Mandela Metropolitan University, 2015. http://hdl.handle.net/10948/11047.

Full text
Abstract:
The main objective of the study was to ascertain the presence of the identified critical success factors for lean sustainability in a chemical manufacturing company in Port Elizabeth. The critical success factors that formed part of the research included leadership involvement and commitment, management involvement and commitment, employee engagement and organisational culture. The methodology followed a positivistic approach. A questionnaire was utilized and the responses were analyzed using various statistical methods. Based on the results from the analysis, recommendations and conclusions could be drawn. The inferential results of the study indicated that all the critical success factors identified for this study were present within the organisation. However, a large percentage of respondents remained neutral across all the questions, which could indicate reasons why previous attempts at lean implementation failed. The recommendations provided were based on the findings of the study. An implementation strategy was identified and outlined. This strategy and the recommendations will assist in providing a sound platform for a sustainable lean initiative within the organisation.
APA, Harvard, Vancouver, ISO, and other styles
6

Wang, Xuan. "Verification of digital controller implementations." Diss., 2005. http://contentdm.lib.byu.edu/ETD/image/etd1073.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Hedenström, Felix. "Trial Division : Improvements and Implementations." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-211090.

Full text
Abstract:
Trial division is possibly the simplest algorithm for factoring numbers. The problem with trial division is that it is slow and wastes computational time on unnecessary tests of division. How can this simple algorithm be sped up while still being serial? How does this algorithm behave when parallelized? Can a superior serial and a parallel version be combined into an even more powerful algorithm? To answer these questions the basics of trial division were researched and improvements were suggested. These improvements were later implemented and tested by measuring the time it took to factorize a given number. A version using a list of primes and multiple threads turned out to be the fastest for numbers larger than 10^10, but was beaten when factoring lower numbers by its serial counterpart. A problem was detected that caused the parallel versions to have long allocation times, which slowed them down, but this did not hinder them much.
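For readers unfamiliar with the algorithm, here is a minimal serial sketch of the prime-list variant in Python (an editor's illustration; the thesis's implementations and measurements are its own):

```python
import math

def primes_up_to(limit):
    """Sieve of Eratosthenes: the prime list used to skip useless trials."""
    sieve = bytearray([1]) * (limit + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, math.isqrt(limit) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(range(p * p, limit + 1, p)))
    return [i for i, flag in enumerate(sieve) if flag]

def factorize(n):
    """Serial trial division over precomputed primes up to sqrt(n)."""
    factors = []
    for p in primes_up_to(math.isqrt(n)):
        while n % p == 0:
            factors.append(p)
            n //= p
        if n == 1:
            break
    if n > 1:              # whatever is left is itself a prime factor
        factors.append(n)
    return factors

print(factorize(600851475143))   # -> [71, 839, 1471, 6857]
```

The parallel version described in the abstract would split the prime list across threads; the allocation-time issue it mentions is exactly the kind of overhead such a split introduces.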
APA, Harvard, Vancouver, ISO, and other styles
8

Valencia-Palomo, Guillermo. "Efficient implementations of predictive control." Thesis, University of Sheffield, 2010. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.537995.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Yanco, Holly A. (Holly Ann). "Robot communication : issues and implementations." Thesis, Massachusetts Institute of Technology, 1994. http://hdl.handle.net/1721.1/37729.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Pinto de Mendonça, José Rogério 1963. "Business impacts of CRM implementations." Thesis, Massachusetts Institute of Technology, 2002. http://hdl.handle.net/1721.1/8467.

Full text
Abstract:
Thesis (S.M.M.O.T.)--Massachusetts Institute of Technology, Sloan School of Management, Management of Technology Program, 2002.
Includes bibliographical references (leaves 75-76).
This thesis aims at examining CRM implementations and at obtaining theoretical and practical evidence regarding three questions: ** What patterns emerge in successful CRM implementations, and what general factors prevent organizations from achieving expected results in such projects; ** What business benefits and impacts (e.g., return on investment, customer value, and redesign of business processes) are typically observed in CRM initiatives; ** How companies rearrange their organizational structures to maximize the benefits of CRM. To accomplish these goals the author conducted a review of available literature, and then interviewed members of 3 companies that implemented CRM and 2 system integrators with comprehensive experience in implementation of CRM. This practical experience was intended to confirm the findings obtained through the literature review. The 3 companies researched are market leaders in the Financial Service Industry in Latin America. Consistently, the System Integrators interviewed actively serve the same industry. The major findings of this work are the following: ** Technology components as well as vendor selection are secondary as key success factors; ** Companies usually do not reorganize themselves due to CRM implementations; structural models seem to be much more dependent on intrinsic cultural aspects; ** Observed business benefits have a high degree of variation, depending much on the situation before the implementation - all researched cases were considered to be successful. Although the sample analyzed is not sufficient to establish generalizations, due to its size and to the impossibility of obtaining reliable numeric or quantitative data, we report our results and interpret them as a contribution to the growing body of evidence. Most of the conclusions are consistent with the literature review findings, with the exception of the observed absence of 'business cases' in the analyzed companies. The literature claims that elaboration of detailed business cases is critical, whereas in the analyzed companies a less rigorous, but nevertheless detailed, planning was sufficient to ensure success. Apart from the limitation of the size of the researched sample, due to the relative newness of the theme, part of the literature reviewed was composed of white papers published by CRM vendors, management consulting firms, and independent research and advisory companies. The research suggests that such implementations have important and lasting effects on the business. It also indicates that the magnitude of the business impacts is intrinsically dependent on the realities of particular companies, and cannot be generalized even within the specific financial services sector. Most of the conclusions are based on qualitative analysis, since the number of cases and the complexity and variability of the implementations prevent statistically sound analysis. It would be valuable if this research could be extended to other industry sectors in Latin America, or alternatively to encompass financial service companies from other regions.
by José Rogério Pinto de Mendonça.
S.M.M.O.T.
APA, Harvard, Vancouver, ISO, and other styles
11

Bharioke, Arjun. "Neural implementations of sensory computations." Thesis, University of Cambridge, 2013. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.648146.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Jansen, Roelof. "Evaluation of Doherty Amplifier Implementations." Thesis, Stellenbosch : Stellenbosch University, 2008. http://hdl.handle.net/10019.1/20445.

Full text
Abstract:
Thesis (MScIng)--Stellenbosch University, 2008.
ENGLISH ABSTRACT: Modern communication systems demand efficient, linear power amplifiers. The amplifiers are often operated at backed-off power levels, at which linear amplifiers such as class B amplifiers are particularly inefficient. The Doherty amplifier provides an improvement as it increases efficiency at backed-off power levels. A Doherty amplifier consists of two amplifiers, a carrier amplifier and a peaking amplifier, whose outputs are combined in a novel way. Implementation of the Doherty amplifier with transistors is not ideal. One of the main problems is the insufficient current production of the peaking amplifier at peak envelope power (PEP) if it is implemented as a class C amplifier. A suggested solution to this problem is a bias adaption system that controls the peaking amplifier gate voltage dynamically depending on the input power levels. The design and evaluation of such an adaptive Doherty amplifier is the main goal of this thesis. A classical Doherty amplifier with equal power division and an uneven Doherty amplifier with unequal power division between the carrier and peaking amplifiers are also evaluated and compared with the adaptive Doherty amplifier. The amplifiers are designed using a 10 W LDMOS FET device, the MRF282. The adaptive Doherty amplifier and the uneven Doherty amplifier show significant improvements in efficiency and output power over the even Doherty amplifier. At PEP the adaptive Doherty delivers 42.4 dBm at 39.75 % power added efficiency (PAE), the uneven Doherty amplifier 41.9 dBm at 40.75 % PAE and the even Doherty amplifier 40.8 dBm at 38.6 % PAE. At 3 dB backed-off input power the adaptive Doherty amplifier has an efficiency of 34.3 %, compared to 34.9 % for the uneven Doherty amplifier and 29.75 % for the even Doherty amplifier.
APA, Harvard, Vancouver, ISO, and other styles
13

Zhang, Haotian. "Smart grid technologies and implementations." Thesis, City University London, 2014. http://openaccess.city.ac.uk/5918/.

Full text
Abstract:
Smart grids have been advocated in both developing and developed countries for many years to deal with large energy deficits and air pollution. However, most of the literature discusses specific technologies and implementations; few works give a clear picture of smart grid implementation at a macro scale: what the main considerations for smart grid implementation are, how to examine power system operation with communication network deployment, how to determine the optimal technology scheme under economic and political constraints, and so on. Governments and related institutions are keen to evaluate the cost and benefit of new technologies or mechanisms in a scientific way rather than making decisions blindly. A Decision Support System, an interactive computer-based information system that supports decision making in planning, management and operations when evaluating technologies, is an essential tool to provide decision makers with sound scientific evidence. The objective of the thesis is to identify the data and information processing technologies and mechanisms which will enable the further development of decision support systems that can be used to evaluate the indices for smart grid technology investment in the future. First of all, the thesis introduces the smart grid and its features and technologies in order to clarify the benefits that can be obtained from smart grid deployment in many aspects such as economics, environment, reliability, efficiency, security and safety. Besides, it is necessary to understand power system business and operation scenarios which may affect the communication network model. This thesis, for the first time, gives detailed requirements for smart grid simulation according to power system business and operation. In addition, state-of-the-art monitoring and communication systems involved in smart grids for better demand side management are reviewed in order to find out their impacts on power systems. The methods and algorithms applied to smart grid monitoring and communication technologies for smart grids are summarized, and the monitoring systems are compared with each other to identify the merits and drawbacks of each type of monitoring system. In a smart grid environment, large volumes of data need to be processed and useful information must be extracted for further operation of power systems. Machine learning is a useful tool for data mining and prediction. An artificial neural network (ANN) for load forecasting in a large power system is proposed in this thesis, and different learning methods, back-propagation, Quasi-Newton and Levenberg-Marquardt, are compared with each other to seek the best result in load forecasting. Bad load forecasting may lead to a mismatch between demand and generation, which could cause blackouts in power systems. Load shedding schemes are a powerful defence against power system collapse, keeping the grid intact to the maximum extent. Lessons learned from the India blackout in July 2012 are analyzed and recommendations on preventing the grid from blackout are given in this work. Also, a new load shedding scheme for an isolated system is proposed in this thesis to take full advantage of information sharing and communication network deployment in the smart grid.
Lastly, the new trend of decision support systems (DSS) for smart grid implementation is summarized, and reliability indices and stability scenarios for cost-benefit analysis are brought under DSS consideration. Many countries and organizations are setting renewable penetration goals when planning their contribution to reducing greenhouse gas emissions over the next 10 or 20 years. For instance, the UK government expects 27% of energy to be produced from renewables EU-wide by 2030. Some simulations have been carried out to demonstrate the physical insight of power system operation with renewable energy integration and to study the non-dispatchable energy source penetration level. Meanwhile, issues of power system reliability which may affect consumers need to be taken into account. The reliability index of centralized wind generation and that of distributed wind generation are compared with each other from an investment perspective.
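As an illustration of the load-forecasting comparison (assumed synthetic data and model, not the thesis's): scikit-learn ships plain gradient-descent back-propagation ('sgd') and a quasi-Newton solver ('lbfgs'), though not Levenberg-Marquardt, so a rough version of the experiment might look like this:

```python
# Illustrative sketch: comparing training methods for an ANN load forecaster.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
t = np.arange(2000)
load = 100 + 20 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 2, t.size)  # fake daily cycle

# Predict the next hour's load from the previous 24 hours.
X = np.array([load[i : i + 24] for i in range(len(load) - 24)])
y = load[24:]
X_train, X_test, y_train, y_test = X[:1500], X[1500:], y[:1500], y[1500:]

for solver in ("sgd", "lbfgs"):     # back-propagation SGD vs quasi-Newton
    model = MLPRegressor(hidden_layer_sizes=(32,), solver=solver,
                         max_iter=2000, random_state=0)
    model.fit(X_train, y_train)
    print(solver, "R^2 =", round(model.score(X_test, y_test), 3))
```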
APA, Harvard, Vancouver, ISO, and other styles
14

Michel, Claude. "Modèles et implémentations d'interprètes réflexifs." Nice, 1997. http://www.theses.fr/1997NICE5117.

Full text
Abstract:
In this thesis, we define and implement a reflection model for an actor-language interpreter. Our model follows a pragmatic approach to reflection in which practical aspects and efficiency are primary criteria. Our work builds on a thorough study of reflection and on a continuous dialogue with an experienced user. The study of reflection explores the models and implementations of reflective interpreters for functional languages. These were the first to benefit from the concept of reflection and constitute a natural platform for studying linguistic mechanisms. They allowed us to clarify the role played by the implementation of the causal connection in global reflection. This kind of reflection extends the possibilities offered by reflection by freeing the programmer from the limitations imposed by the representation. Partial evaluation, a technique presented as ideal for solving the efficiency problem of reflective interpreters, is studied in detail. The implementation of a reflective interpreter that uses it highlighted the benefits, but also the practical difficulties, of this technique. Our main objective materializes in the reflective interpreter lacte/r. The reflection model of lacte/r was defined on the basis of the study of reflection and of a founding principle, the locality principle. Applying this principle led to the introduction of operations that dynamically control the number of interpretation levels within towers of meta-actors, and to restricting the impact of reflection to those entities that are explicitly reified. The model was then refined and rationalized on the basis of the criticisms and proposals of an experienced user. lacte/r is characterized by its simplicity, its broad field of application, and an efficiency compatible with regular use.
APA, Harvard, Vancouver, ISO, and other styles
15

Yoo, Daniel. "Alchemy -- Transmuting base specifications into implementations." Worcester, Mass. : Worcester Polytechnic Institute, 2008. http://www.wpi.edu/Pubs/ETD/Available/etd-022609-151429/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Barlindhaug, Lars Feiring. "Developing Software Quality in KBE Implementations." Thesis, Norges teknisk-naturvitenskapelige universitet, Institutt for produktutvikling og materialer, 2012. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-18550.

Full text
Abstract:
The report shows to what extent test-driven development (TDD) and continuous integration (CI) can be used on KBE models and how a unit testing framework for KBE models can be developed. Test-driven development and continuous integration have changed the way software is tested. Software testing was often a separate process at the end of a project; it is now worked on during the entire development period. TDD and CI rely on unit tests, which divide the code into the smallest possible units and test each of them independently. This master's thesis asks how these practices can be used for testing knowledge based engineering (KBE) models. A unit testing framework for the Adaptive Modeling Language (AML), AUnit, has been developed. It is explained in detail, and an introductory guide to using AUnit for testing KBE models in AML is included. AUnit was used to perform TDD and CI on different KBE models, both creating new models and testing existing ones. Testing KBE models differs to a large degree from testing regular object-oriented software. Different approaches to unit testing and TDD were applied to several KBE models. It was concluded that the basic attributes in KBE models cannot be unit tested in a sensible way; this includes adding any superclasses and simple parameters like height and width. Without including these attributes, unit testing cannot fully be performed on KBE models using AUnit. However, the models can benefit greatly from having unit tests for the logic in the model, which is where the most severe bugs will be. When the attributes are implemented in the model, test-driven development can be performed on the models. Automatic continuous integration has been performed on a KBE model and the basic principles of CI have been accounted for. CI for KBE models does not differ much from other software projects, so its focus is reduced.
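The idea of unit-testing the model's logic rather than its basic attributes can be sketched generically (Python's unittest as a stand-in here; the thesis's actual framework is AUnit for AML, and the Beam example is invented):

```python
# Illustrative only: testing a derived attribute of a model object,
# in the spirit of AUnit (which targets AML models, not Python).
import unittest

class Beam:
    def __init__(self, length, width, height, density):
        self.length, self.width, self.height, self.density = length, width, height, density

    @property
    def mass(self):                     # derived logic: the part worth testing
        return self.length * self.width * self.height * self.density

class BeamTest(unittest.TestCase):
    def test_mass_is_volume_times_density(self):
        self.assertAlmostEqual(Beam(2.0, 0.1, 0.2, 7850).mass, 314.0)

if __name__ == "__main__":
    unittest.main()
```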
APA, Harvard, Vancouver, ISO, and other styles
17

Dhingra, Neha. "Analysis of ORM Based JPA Implementations." Thesis, Université d'Ottawa / University of Ottawa, 2017. http://hdl.handle.net/10393/36524.

Full text
Abstract:
Large scale application design and development involve some critical decisions, and one of the most important issues that affect software application design and development is the technology stack used to develop an extensive system. In a JPA API, response time is often a measure of how quickly an interactive system responds to user input. Persistence frameworks such as Object Relational Mapping (ORM) are applied to manage communications between an object model and data model components and are vital for such systems. Hibernate is considered the most effective ORM framework due to its advanced features, and it is the de-facto standard for Java Persistence API (JPA)-based data persistence frameworks. This thesis comprises a review of the most widely used JPA providers, particularly frameworks that provide JPA support such as Hibernate JPA, EclipseLink, OpenJPA and DataNucleus JPA. In current Java programming, APIs based on persistence and performance are integral aspects of an application. Performance analysis of the above four JPA implementations is based on the ORM framework that contributed most significantly to discovering the challenges and verified the programming considerations in the language. For a large-scale enterprise, working on JPA is always tedious due to the potential pressures and overloads of the implementations, as well as the comprehensive guarantee that is required while adopting the technology. A JPA implementation continually persists data into the database at runtime, managing persistence processes through interfaces and classes that often need optimization to provide performance-oriented results at heavy loads. Therefore, in this thesis a detailed feature analysis was performed before the performance analysis. To enhance the comparison of the persistence frameworks, an extended experiment with a cloud database using Database-as-a-Service (DBaaS) versus physical persistence was performed, using a comparative approach for all four JPA implementations. Different SQL queries on cloud versus physical persistence for JPA applications were measured using CPU, GC, and threads (live and daemon). Finally, a statistical analysis was performed using the Pearson correlation coefficient and a steady/start-up phase.
APA, Harvard, Vancouver, ISO, and other styles
18

Hedén, Mattias. "SCTP - An analysis of proposed implementations." Thesis, Högskolan i Skövde, Institutionen för kommunikation och information, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-5216.

Full text
Abstract:
There are several weaknesses in the popular transport protocol TCP (Transmission Control Protocol). A possible replacement for TCP would be the newer protocol SCTP (Stream Control Transmission Protocol). This thesis presents three different proposed implementations of SCTP: HTTP over SCTP, online games over SCTP and IP mobility over SCTP. The proposed implementations are analyzed, based on relevant literature, and recommendations are issued on the importance of moving forward with them. The result of the thesis is that HTTP over SCTP is recommended. SCTP features such as multi-streaming, multi-homing and the four-way handshake address the inherent weaknesses of using TCP for HTTP traffic. IP mobility over SCTP is also recommended since it results in lower delay in the handover process compared to MIPv6 (Mobile IPv6). Online games over SCTP, however, are not recommended since the existing implementations of SCTP result in poor latency for the kind of traffic online games produce.
APA, Harvard, Vancouver, ISO, and other styles
19

De, Wulf Martin. "From timed models to timed implementations." Doctoral thesis, Universite Libre de Bruxelles, 2006. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/210797.

Full text
Abstract:

Computer Science is currently facing a grand challenge: finding good design practices for embedded systems. Embedded systems are essentially computers interacting with some physical process. You could find one in a braking system or in a nuclear power plant, for example. They present several design difficulties: first, they are reactive systems, interacting indefinitely with their environment. Second, they must satisfy real-time constraints specifying when they should respond, and not only how. Finally, their environment is often deeply continuous, presenting complex dynamics. The formal models of choice for specifying such systems are timed and hybrid automata, for which model checking is pretty well studied.

In a first part of this thesis, we study a complete design approach, including verification and code generation, for timed automata. We define a new semantics for timed automata, the AASAP semantics, that preserves the decidability properties for model checking and at the same time is implementable. Our notion of implementability is completely novel, and relies on the simulation of a semantics that is obviously implementable on a real platform. We wrote tools for the analysis and code generation and exemplify them on a case study about the well-known Philips Audio Control Protocol.

In a second part of this thesis, we study the problem of controller synthesis for an environment specified as a hybrid automaton. We give a new solution for discrete controllers having only imperfect information about the state of the system. In the process, we defined a new algorithm, based on the monotonicity of the controllable predecessors operator, for efficiently finding a controller, and we show some promising applications on a classical problem: the universality test for finite automata.
Doctorat en sciences, Spécialisation Informatique

APA, Harvard, Vancouver, ISO, and other styles
20

Lai, Pei Ling. "Neural implementations of canonical correlation analysis." Thesis, University of the West of Scotland, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.311771.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Nonthaleerak, Preeprem. "Strengthening Six Sigma for service implementations." Thesis, Lancaster University, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.427555.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Singh, Rajinder Jit. "VLSI implementations for wave digital filtering." Thesis, Queen's University Belfast, 1993. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.359124.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Roche, Brendan. "Modelling hardware implementations of neural networks." Thesis, University of Ulster, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.284751.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Yoo, Daniel. "Alchemy: Transmuting Base Specifications into Implementations." Digital WPI, 2009. https://digitalcommons.wpi.edu/etd-theses/168.

Full text
Abstract:
Alloy specifications are used to define lightweight models of systems. We present Alchemy, which compiles Alloy specifications into implementations that execute against persistent databases. Alchemy translates a subset of Alloy predicates into imperative update operations, and it converts facts into database integrity constraints that it maintains automatically in the face of these imperative actions. In addition to presenting the semantics and an algorithm for this compilation, we present the tool and outline its application to a non-trivial specification. We also discuss lessons learned about the relationship between Alloy specifications and imperative implementations.
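The compilation idea, predicates becoming update operations and facts becoming integrity constraints checked around them, can be sketched as follows (a hypothetical Python analogue for illustration, not Alchemy's actual Alloy-to-database output):

```python
# Illustrative sketch: a fact such as "every entry has a known owner" becomes
# an integrity check re-validated after every generated update operation.
def maintains(invariant):
    def wrap(update):
        def guarded(db, *args):
            update(db, *args)
            assert invariant(db), "integrity constraint violated"
        return guarded
    return wrap

def every_entry_has_owner(db):
    return all(e["owner"] in db["users"] for e in db["entries"])

@maintains(every_entry_has_owner)
def add_entry(db, owner, text):    # imperative operation derived from a predicate
    db["entries"].append({"owner": owner, "text": text})

db = {"users": {"alice"}, "entries": []}
add_entry(db, "alice", "ok")       # passes the constraint
# add_entry(db, "bob", "no")       # would raise: bob is not a known user
```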
APA, Harvard, Vancouver, ISO, and other styles
25

Farmani, Mohammad. "Threshold Implementations of the Present Cipher." Digital WPI, 2017. https://digitalcommons.wpi.edu/etd-theses/1024.

Full text
Abstract:
"The process of securing data has always been a challenge since it is related to the safety of people and society. Nowadays, there are many cryptographic algorithms developed to solve security problems. However, some applications have constraints which make it difficult to achieve high levels of security. Light weight cryptography aims to address this issue while trying to maintain low costs. Side-channel attacks have changed the way of cryptography significantly. In this kind of attacks, the attacker has physical access to the crypto-system and can extract the sensitive data by monitoring and measuring the side-channels such as power consumption, electromagnetic emanation, timing information, sound, etc. These attacks are based on the relationship between side-channels and secret data. Therefore, there need to be countermeasures to eliminate or reduce side channel leaks or to break the relationship between side-channels and secret data to protect the crypto systems against side-channel attacks. In this work, we explore the practicality of Threshold Implementation (TI) with only two shares for a smaller design that needs less randomness but is still leakage resistant. We demonstrate the first two-share Threshold Implementations of light-weight block cipher Present. Based on implementation results, two-share TI has a lower area overhead and better throughput when compared with a first-order resistant three-share scheme. Leakage analysis of the developed implementations reveals that two-share TI can retain perfect first-order resistance. However, the analysis also exposes a strong second-order leakage. "
APA, Harvard, Vancouver, ISO, and other styles
26

Zayes, Pedro A. (Pedro Angel) 1975. "Analyzing the behavior of TCP implementations." Thesis, Massachusetts Institute of Technology, 1998. http://hdl.handle.net/1721.1/47627.

Full text
Abstract:
Thesis (S.B. and M.Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1998.
Includes bibliographical references (p. 72-73).
by Pedro A. Zayes.
S.B.and M.Eng.
APA, Harvard, Vancouver, ISO, and other styles
27

Weinstein, Yaakov Shmuel 1974. "Experimental implementations of quantum information processing." Thesis, Massachusetts Institute of Technology, 2000. http://hdl.handle.net/1721.1/88834.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Dobson, Jonathan M. "ASIC implementations of the Viterbi Algorithm." Thesis, University of Edinburgh, 1999. http://hdl.handle.net/1842/13669.

Full text
Abstract:
The Viterbi Algorithm is a popular method for decoding convolutional codes, receiving signals in the presence of intersymbol interference, and channel equalization. In 1981 the European Telecommunications Administration (CEPT) created the Groupe Special Mobile (GSM) Committee to devise a unified pan-European digital mobile telephone standard. The proposed GSM receiver structure brings together Viterbi decoding and equalization. This thesis presents three VLSI designs of the Viterbi Algorithm with specific attention paid to the use of such modules within a GSM receiver. The first design uses a technique known as redundant number systems to produce a high-speed decoder. The second design uses complementary pass-transistor logic to produce a low-power channel equalizer. The third design is a low-area serial equalizer. In describing the three designs, redundant number systems and complementary pass-transistor logic are examined. It is shown that while redundant number systems can offer significant speed advantages over two's complement binary, there are other representations that can perform equally well, if not better. It will also be shown that complementary pass-transistor logic can offer a small improvement for VLSI circuits in terms of power consumption.
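For reference, a textbook hard-decision Viterbi decoder for the classic rate-1/2, constraint-length-3 code with generators 7 and 5 (octal) looks like the Python sketch below; this is an editor's illustration of the algorithm itself, not one of the thesis's VLSI designs:

```python
G = [0b111, 0b101]                     # generator polynomials (7, 5 octal)

def encode(bits):
    state = 0                          # two previous input bits
    out = []
    for b in bits:
        reg = (b << 2) | state
        out += [bin(reg & g).count("1") & 1 for g in G]
        state = reg >> 1
    return out

def viterbi(received):
    INF = float("inf")
    metric = [0] + [INF] * 3           # path metric per state, start in state 0
    paths = [[] for _ in range(4)]
    for i in range(0, len(received), 2):
        r = received[i : i + 2]
        new_metric, new_paths = [INF] * 4, [None] * 4
        for state in range(4):
            if metric[state] == INF:
                continue
            for b in (0, 1):           # try both input bits (branches)
                reg = (b << 2) | state
                expect = [bin(reg & g).count("1") & 1 for g in G]
                m = metric[state] + sum(e != x for e, x in zip(expect, r))
                nxt = reg >> 1
                if m < new_metric[nxt]:        # keep the survivor path
                    new_metric[nxt] = m
                    new_paths[nxt] = paths[state] + [b]
        metric, paths = new_metric, new_paths
    return paths[metric.index(min(metric))]

msg = [1, 0, 1, 1, 0, 0, 1]
rx = encode(msg + [0, 0])              # two tail bits flush the encoder
rx[3] ^= 1                             # inject one channel bit error
assert viterbi(rx)[:7] == msg          # the single error is corrected
```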
APA, Harvard, Vancouver, ISO, and other styles
29

De Castro, Leo (Leo Ramón Nathan). "Practical homomorphic encryption implementations & applications." Thesis, Massachusetts Institute of Technology, 2020. https://hdl.handle.net/1721.1/129883.

Full text
Abstract:
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, February, 2020
Cataloged from student-submitted PDF of thesis.
Includes bibliographical references (pages 67-69).
Homomorphic encryption is an exciting technology that enables computations to be performed over encrypted data. While initial constructions were impractical, recent works have achieved the efficiency necessary for many practical applications. In this thesis, we present a new library for homomorphic encryption and two applications built on this library. The first application is a fast oblivious linear evaluation protocol, a fundamental building block for secure computation. The second is a secure data aggregation platform used to study cyber risk.
by Leo de Castro.
M. Eng.
M.Eng. Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science
APA, Harvard, Vancouver, ISO, and other styles
30

Herrera-Marti, David A. "Implementations of fault-tolerant quantum devices." Thesis, Imperial College London, 2012. http://hdl.handle.net/10044/1/10209.

Full text
Abstract:
Accurate control and addressability of quantum devices will come with the promise of improvement in a wide variety of theoretical and applied fields, such as chemistry, condensed matter physics, theoretical computer science, foundational physics, communications, metrology and others. Decoherence of quantum states and the loss of quantum systems have adverse effects and deter a satisfactory usage of quantum devices. This is the main problem to be overcome, which is the goal of quantum fault tolerance. In this thesis we present a series of works that contribute to some of the fields mentioned above, in the direction of fighting decoherence and loss. These works fall into two categories: on the one hand, we looked at computer architectures which can be used to combat errors, using techniques of quantum error correcting codes. In a first project we found decoherence and loss probability thresholds below which quantum computing is provably possible. We assumed a very particular error model tailored specially to quantum dots as single photon sources and linear optics. Subsequently we looked at the problem of loss, both heralded and unheralded, and devised some ways to fight it. The framework under which this work was done was used to develop theory which is currently being tested in a quantum optics experimental group and will be reported in an article later this year. On the other hand, we studied how the error probability can be reduced at the physical level, thanks exclusively to the properties of the system in which information is stored, as opposed to making use of quantum codes. We looked at a particular superconducting circuit, which is potentially very well protected against some types of decoherence. In particular, we observed that the interaction with the environment becomes weaker for certain values of the circuit's external parameters.
APA, Harvard, Vancouver, ISO, and other styles
31

Hanna, Youssef. "Verifying sensor network security protocol implementations." [Ames, Iowa : Iowa State University], 2008.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
32

Benatar, Jonathan G. "Fem implementations of magnetostrictive-based applications." College Park, Md. : University of Maryland, 2005. http://hdl.handle.net/1903/3273.

Full text
Abstract:
Thesis (M.S.) -- University of Maryland, College Park, 2005.
Thesis research directed by: Dept. of Aerospace Engineering. Title from t.p. of PDF. Includes bibliographical references. Published by UMI Dissertation Services, Ann Arbor, Mich. Also available in paper.
APA, Harvard, Vancouver, ISO, and other styles
33

Rodrigues dos Santos d'Amorim, Fernanda. "Modularity analysis of use case implementations." Universidade Federal de Pernambuco, 2010. https://repositorio.ufpe.br/handle/123456789/2409.

Full text
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico
Currently, component-based architecture is the most widely used approach in the development of complex software; its main goal is the assignment of application requirements to components. One of the most widespread techniques for requirements specification is the use of Use Cases. In general, component-based software architectures result in implementations where the code related to a single use case is scattered and tangled across several components of the system, characterizing a crosscutting concern. This occurs because traditional techniques, such as Object Orientation (OO), do not offer mechanisms capable of modularizing this type of concern. Recently, new modularization techniques such as aspects, mixins and virtual classes have been proposed to try to solve this problem. These techniques can be used to group the code related to a single use case into a new modularization unit. This work analyzes, qualitatively and quantitatively, the impact caused by this type of use case modularization. We explore two Aspect-Oriented (AO) techniques: (i) Use Cases as Aspects, where we use AspectJ constructs to isolate all code related to the implementation of a use case in an aspect; and (ii) Use Cases as Pluggable Collaborations, where we use CaesarJ constructs to modularize use case implementations through a hierarchical composition of collaborations. We carried out two case studies comparing the AO implementations of use cases with their OO implementation. In the evaluation process we extracted traditional and contemporary metrics, including cohesion, coupling and separation of concerns, and analyzed modularity in terms of software quality attributes such as pluggability, traceability and support for parallel development. Our results indicate that modularity is a relative concept and its analysis depends on factors beyond the target system, the metrics and the technique applied.
APA, Harvard, Vancouver, ISO, and other styles
34

Eliasi, Behnam, and Arian Javdan. "Comparison of blockchain e-wallet implementations." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-258932.

Full text
Abstract:
With the rise of blockchain technology and cryptocurrency, secure e-wallets also become more important. But what makes an e-wallet secure? In this report, we compare different aspects of e-wallets to see which alternatives are secure and convenient enough to be used. This report contains comparative analyses of different implementations for e-wallets. The problem area is divided into three smaller areas: key storage, authentication, and recovery. Each area has defined criteria for what is considered good qualities in that respective area. The results show that for key storage, the best options are Android's keystore/iOS' secure enclave, offline storage, or a hybrid hot/cold storage. For authentication, the best alternatives proved to be BankID and local authentication through the phone's OS. Good recovery alternatives include recovery seeds that recover the whole e-wallet, or using multiple keys for both signing and recovery. The proof of concept made for this project uses three different storage methods, with the authentication methods for each one, and with the possibility of recovery in case a key should be lost. The storage methods used are offline storage through QR-codes, online storage with Firebase, and local storage with Android keystore or Secure enclave. Authentication is done with Facebook/Google sign-in or local authentication.
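The "multiple keys for both signing and recovery" alternative can be illustrated with a 2-of-3 threshold sketch (an editor's illustration using Shamir-style sharing over a prime field; the report's proof of concept is its own design):

```python
# Illustrative 2-of-3 threshold scheme: any two shares reconstruct the
# secret key; a single lost or stolen share reveals nothing on its own.
import secrets

P = 2**127 - 1                                   # a Mersenne prime field

def split(secret, n=3):
    a = secrets.randbelow(P)                     # random line through the secret
    return [(x, (secret + a * x) % P) for x in range(1, n + 1)]

def recover(share_i, share_j):
    (x1, y1), (x2, y2) = share_i, share_j
    # Lagrange interpolation at x = 0 for a degree-1 polynomial
    a = ((y2 - y1) * pow(x2 - x1, -1, P)) % P    # recovered slope
    return (y1 - a * x1) % P

key = secrets.randbelow(P)
s1, s2, s3 = split(key)
assert recover(s1, s3) == key and recover(s2, s1) == key
```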
APA, Harvard, Vancouver, ISO, and other styles
35

Giunti, Marco <1973>. "Secure implementations of typed channel abstractions." Doctoral thesis, Università Ca' Foscari Venezia, 2007. http://hdl.handle.net/10579/226.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Kimmel, Richard A. "Experimentation methodology for evaluating operational INFOCON implementations." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2001. http://handle.dtic.mil/100.2/ADA392768.

Full text
Abstract:
Thesis (M.S. in Systems Technology) Naval Postgraduate School, June 2001.
Thesis advisors, William G. Kemple, Shelley P. Gallup. Includes bibliographical references (p. 105-106). Also Available online.
APA, Harvard, Vancouver, ISO, and other styles
37

Giambiagi, Pablo. "Secrecy for mobile implementations of security protocols." Licentiate thesis, KTH, Microelectronics and Information Technology, IMIT, 2001. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-1341.

Full text
Abstract:

Mobile code technology offers interesting possibilities to the practitioner, but also raises strong concerns about security. One aspect of security is secrecy, the preservation of confidential information. This thesis investigates the modelling, specification and verification of secrecy in mobile applications which access and transmit confidential information through a possibly compromised medium (e.g. the Internet). These applications can be expected to communicate secret information using a security protocol, a mechanism to guarantee that the transmitted data does not reach unauthorized entities.

The central idea is therefore to relate the secrecy properties of the application to those of the protocol it implements, through the definition of a "confidential protocol implementation" relation. The argument takes an indirect form, showing that a confidential implementation transmits secret data only in the ways indicated by the protocol. We define the implementation relation using labelled transition semantics, bisimulations and relabelling functions. To justify its technical definition, we relate this property to a notion of noninterference for nondeterministic systems derived from Cohen's definition of Selective Independency. We also provide simple and local conditions that greatly simplify its verification, and report on our experiments on an architecture showing how the proposed formulations could be used in practice to enforce secrecy of mobile code.

APA, Harvard, Vancouver, ISO, and other styles
38

Larsen, Fredrik Lied. "Conformance testing of Data Exchange Set implementations." Thesis, Norwegian University of Science and Technology, Department of Computer and Information Science, 2005. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-9258.

Full text
Abstract:

Product information exchange has been described by a number of standards. The "Standard for the Exchange of Product model data" (STEP) is published by ISO as an international standard to cover this exchange. "Product Life Cycle Support" (PLCS) is a standard developed as an extension to STEP, covering the complete life cycle information needs for products. PLCS uses Data Exchange Sets (DEXs) to exchange information. A DEX is a subset of the PLCS structure applicable for product information exchange. A DEX is specified in a separate document from the PLCS standard, and is published under OASIS. The development of DEXs is ongoing and changing; nine DEXs have been identified and are being developed within the Organization for the Advancement of Structured Information Standards (OASIS). Each of the nine DEXs covers a specific business concept. Implementations based on the DEX specifications are necessary in order to send and receive populated DEXs with product information. The implementations add contents to a DEX structure in a neutral file format which can be exchanged. Interoperability between senders and receivers of DEXs cannot be guaranteed; however, conformance testing of implementations can help increase the chances of interoperability. Conformance testing is the process of testing an implementation against a set of requirements stated in the specification or standard used to develop the implementation. Conformance testing is performed by sending inputs to the implementation and observing the output. The output is then analysed with respect to the expected output. STEP dedicates a whole section of the standard to conformance testing of STEP implementations. This section describes how implementations of STEP shall be tested and analysed. PLCS is an extension of STEP, and DEXs are subsets of PLCS, so STEP conformance testing is used as a basis for DEX conformance testing. A testing methodology based on STEP conformance testing and the DEX specifications is developed. The testing methodology explains how conformance testing can be performed on DEX implementations, exemplified with a test example on a specific DEX; the basic loop is sketched below. The thesis develops a proposed set of test methods for conformance testing of DEX adapter implementations. Conformance testing of export adapters tests the adapter's ability to populate and output a correct DEX according to the applicable DEX specification. Conformance testing of the import adapter verifies that the content of the populated input DEX is retrievable in the receiving computer system. A specific DEX, "Identify a part and its constituent parts", is finally used as an example of how to test a specific DEX specification. Test cases are derived from a set of test requirements identified from the DEX specification, and testing of these requirements is explained explicitly.
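The input/observe/compare loop of conformance testing can be sketched generically (hypothetical data shapes and requirement IDs, not the thesis's tooling):

```python
# Illustrative harness: drive an adapter with inputs and compare the observed
# output against a check derived from a requirement in the DEX specification.
def run_conformance(adapter, test_cases):
    report = {}
    for req_id, input_data, check in test_cases:
        output = adapter(input_data)
        report[req_id] = "pass" if check(output) else "fail"
    return report

# Example: an export adapter must emit a part together with its constituents.
cases = [
    ("REQ-01", {"part": "P1", "children": ["P2"]},
     lambda dex: "P1" in dex and "P2" in dex),
]
print(run_conformance(lambda d: [d["part"], *d["children"]], cases))
```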

APA, Harvard, Vancouver, ISO, and other styles
39

Bush, Charles D. "Teacher Perceptions About New Evaluation Model Implementations." Thesis, Northcentral University, 2017. http://pqdtopen.proquest.com/#viewpdf?dispub=10622533.

Full text
Abstract:

The challenge of designing and implementing teacher evaluation reform throughout the U.S. has been represented by different policies, teacher evaluation components, and difficulties with implementation. The purpose of this qualitative embedded single case study was to explore teacher perceptions about new evaluation model implementations and how new model implementations impact the relationships between teachers and administration. The main unit of analysis was teachers at one school experiencing the implementation of new evaluation reform. The sub-units were the experience levels of teachers, specifically New Teachers, Mid-Career Teachers, and Seasoned Teachers. Findings in this research demonstrated a protectiveness of the low-income school in which the participants work, and a lack of trust in the state's understanding of the needs of a low-performing school. The findings indicated teachers perceive that the lack of local control or input into the development or implementation of a new evaluation tool may create feelings of mistrust and suspicions of ulterior motives. Results also emerged suggesting that teachers perceive a new teacher evaluation model may add stress to the site, provide tools for feedback and accountability, and possibly negatively impact relationships with students. Finally, the findings indicated striking differences in the perceptions of teachers with different levels of teaching experience. Teachers of all experience levels perceived similar, positive relationships between teachers and administrators. However, perceptions of the current evaluation tool were markedly different based on years of experience. New Teachers and Mid-Career Teachers stressed a desire to receive feedback and the need for feedback to improve their practice. Conversely, Seasoned Teachers stated a clear lack of need or desire for feedback. Additionally, all experience-level groups perceived that there may be some level of added stress during the implementation of a new evaluation tool. Seasoned Teachers and Mid-Career Teachers perceived the possibility of a new tool as a negative event, while New Teachers viewed it as an opportunity for accountability and alignment.

APA, Harvard, Vancouver, ISO, and other styles
40

Zeffer, Håkan. "Hardware–Software Tradeoffs in Shared-Memory Implementations." Licentiate thesis, Uppsala universitet, Avdelningen för datorteknik, 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-86369.

Full text
Abstract:
Shared-memory architectures represent a class of parallel computer systems commonly used in the commercial and technical market. While shared-memory servers typically come in a large variety of configurations and sizes, advances in semiconductor technology have set the trend towards multiple cores per die and multiple threads per core. Software-based distributed shared-memory proposals were given much attention in the 90s, but their promise of short time to market and low cost could not make up for their unstable performance. Hence, these systems seldom made it to the market. However, with the trend towards chip multiprocessors, multiple hardware threads per core and the increased cost of connecting multiple chips together to form large-scale machines, software coherence in one form or another might be a good intra-chip coherence solution. This thesis shows that data locality, software flexibility and minimal processor support for read and write coherence traps can offer good performance, while removing the hard limit on scalability. Our aggressive fine-grained software-only distributed shared-memory system exploits key application properties, such as locality and sharing patterns, to outperform a hardware-only machine on some benchmarks. On average, the software system is 11 percent slower than the hardware system when run on identical node and interconnect hardware. A detailed full-system simulation study of dual-core CMPs, with multiple hardware threads per core and minimal processor support for coherence traps, shows a system that is on average one percent slower than its hardware-only counterpart when some flexibility is taken into account. Finally, a functional full-system simulation study of an adaptive coherence-batching scheme shows that the number of coherence misses can be reduced by up to 60 percent and bandwidth consumption by up to 22 percent for both commercial and scientific applications.
APA, Harvard, Vancouver, ISO, and other styles
41

Ketcha, Ngassam Ernest. "Towards cache optimization in finite automata implementations." Thesis, Pretoria : [s.n.], 2007. http://upetd.up.ac.za/thesis/available/etd-07212007-120525.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Wang, Xiao-an. "Trellis based decoders and neural network implementations." Diss., Georgia Institute of Technology, 1996. http://hdl.handle.net/1853/13730.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

O'Shea, Nicholas. "Verification and validation of security protocol implementations." Thesis, University of Edinburgh, 2010. http://hdl.handle.net/1842/4753.

Full text
Abstract:
Security protocols are important and widely used because they enable secure communication to take place over insecure networks. Over the years numerous formal methods have been developed to assist protocol designers by analysing models of these protocols to determine their security properties. Beyond the design stage however, developers rarely employ formal methods when implementing security protocols. This may result in implementation flaws often leading to security breaches. This dissertation contributes to the study of security protocol analysis by advancing the emerging field of implementation analysis. Two tools are presented which together translate between Java and the LySa process calculus. Elyjah translates Java implementations into formal models in LySa. In contrast, Hajyle generates Java implementations from LySa models. These tools and the accompanying LySa verification tool perform rapid static analysis and have been integrated into the Eclipse Development Environment. The speed of the static analysis allows these tools to be used at compile-time without disrupting a developer’s workflow. This allows us to position this work in the domain of practical software tools supporting working developers. As many of these developers may be unfamiliar with modelling security protocols a suite of tools for the LySa process calculus is also provided. These tools are designed to make LySa models easier to understand and manipulate. Additional tools are provided for performance modelling of security protocols. These allow both the designer and the implementor to predict and analyse the overall time taken for a protocol run to complete. Elyjah was among the very first tools to provide a method of translating between implementation and formal model, and the first to use either Java for the implementation language or LySa for the modelling language. To the best of our knowledge, the combination of Elyjah and Hajyle represents the first and so far only system which provides translation from both code to model and back again.
APA, Harvard, Vancouver, ISO, and other styles
44

Triki, Ahlem. "Distributed Implementations of Timed Component-based Systems." Thesis, Université Grenoble Alpes (ComUE), 2015. http://www.theses.fr/2015GRENM014/document.

Full text
Abstract:
Correct distributed implementation of real-time systems has always been a challenging task. The coordination of components executing on a distributed platform has to be ensured by complex communication protocols taking into account their timing constraints. In this thesis, we propose a rigorous design flow starting from a high-level model of application software in BIP (Behavior, Interaction, Priority) and leading to a distributed implementation. The design flow involves the use of model transformations while preserving the functional properties of the original BIP models. A BIP model consists of a set of components synchronizing through multiparty interactions and priorities. Our method transforms high-level BIP models into Send/Receive models that operate using asynchronous message passing. The obtained models are directly implementable on a given platform. We present three solutions for obtaining Send/Receive BIP models. In the first solution, we propose Send/Receive models with a centralized scheduler that implements interactions and priorities. Atomic components of the original models are transformed into Send/Receive components that communicate with the centralized scheduler via Send/Receive interactions. The centralized scheduler is required to schedule interactions under conditions defined by partial-state models, which are a high-level representation of the parallel execution of BIP models. In the second solution, we propose to decentralize the scheduler. The obtained Send/Receive models are structured in three layers: (1) Send/Receive atomic components, (2) a set of schedulers, each handling a subset of interactions, and (3) a set of components implementing a conflict resolution protocol. With the above solutions, we assume that the obtained Send/Receive models are implemented on platforms that provide fast communications (e.g., multi-process platforms), so that interaction scheduling corresponds exactly to execution in the components. In the third solution, we propose Send/Receive models that execute correctly even if communications are not fast enough. This solution is based on the fact that the schedulers plan interaction execution and notify components in advance. In order to plan the interactions correctly, we show that the schedulers are required to observe additional components beyond the ones participating in the interactions. We also present a method to optimize the number of observed components, based on the use of static analysis techniques. From a given Send/Receive model, we generate a distributed implementation where Send/Receive interactions are implemented by TCP sockets. Experimental results on non-trivial examples and case studies show the efficiency of our design flow.
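As a rough illustration of the Send/Receive layer these transformations target, the C sketch below shows a scheduler committing an interaction and notifying a participant over a TCP socket, carrying the planned execution date used by the third solution when communication latencies are not negligible. The message layout and all names are assumptions made for illustration, not the actual BIP runtime API.

    #include <stdint.h>
    #include <sys/socket.h>

    typedef struct {
        uint32_t interaction_id;   /* which multiparty interaction fired */
        uint64_t exec_time_us;     /* planned execution date (third solution) */
    } sched_msg_t;

    /* Scheduler side: commit an interaction and notify one participant. */
    static int notify_component(int sock, uint32_t id, uint64_t when_us)
    {
        sched_msg_t m = { .interaction_id = id, .exec_time_us = when_us };
        return send(sock, &m, sizeof m, 0) == (ssize_t)sizeof m ? 0 : -1;
    }

    /* Component side: block until the scheduler announces the next step. */
    static int await_notification(int sock, sched_msg_t *out)
    {
        return recv(sock, out, sizeof *out, MSG_WAITALL) == (ssize_t)sizeof *out
               ? 0 : -1;
    }

Because the scheduler decides and notifies in advance, a component receiving such a message can wait until exec_time_us before firing, masking the network latency between decision and execution.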
APA, Harvard, Vancouver, ISO, and other styles
45

Zorluoglu, Habib Izzet, Turker Cakir, and Fahrettin Tezcan. "Offset implementations for Turkey's International Defense Acquisitions." Monterey, California: Naval Postgraduate School, 2008. http://hdl.handle.net/10945/10326.

Full text
Abstract:
MBA Professional Report
"Offsets" is the umbrella term for a broad range of industrial and commercial "compensatory" practices. Specifically, offset agreements in the defense environment are increasing globally as a percentage of exports. Developed countries with established defense industries use offsets to channel work or technology to their domestic defense companies. Countries with newly industrialized economies are utilizing both military and commercial related offsets that involve the transfer of technology and know-how. Overall, offsets are definitely not new, and occur under a variety of names. In the defense industry it is now an accepted practice among both sellers and purchasers, and is likely to remain so for the indefinite future. This research will discuss defense offsets within the context of international trade and global arms trade. This discussion will draw upon the existing body of theory and practice on offsets (as identified in the literature review) to provide a basic understanding of offsets within the wider international trade context. The offset policies of selected countries will be analyzed prior to exploring the development of Turkish offset policy. Additionally sample defense acquisition programs will be examined as case studies to explain the incentives within Turkish offsets and to suggest future offset policies
APA, Harvard, Vancouver, ISO, and other styles
46

Ravindran, Somasundaram. "Aspects of practical implementations of PRAM algorithms." Thesis, University of Warwick, 1993. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.386838.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Sharpe, Jeremy Edward. "Expanding the synthesis of distributed memory implementations." Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/106006.

Full text
Abstract:
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2015.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (page 63).
In this thesis, I expanded the programming model implemented by the Sketch language to supplement its distributed memory parallelism with shared memory parallelism using the popular fork-join model. The primary contribution of this thesis is the means by which the code is assured to be free of race conditions. Sketch uses constraint satisfaction analysis to ensure it synthesizes code that functions properly for all inputs, and I demonstrate how assertions can be generated and inserted into this analysis to guarantee freedom from race conditions. The expanded programming model is then evaluated using test cases to ensure correct operation and benchmarks to examine overall performance.
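The generated race-freedom assertions can be pictured as disjointness checks over the write sets of parallel branches. The fragment below is a hedged illustration in C rather than Sketch syntax: the condition i != j is exactly the kind of fact the constraint solver must establish for all inputs before the fork-join code is accepted.

    #include <assert.h>

    /* Two conceptual fork-join branches write a[i] and a[j]; the
     * inserted assertion encodes the absence of a write-write race. */
    void parallel_update(int *a, int i, int j, int vi, int vj)
    {
        assert(i != j && "potential write-write race on a[]");
        a[i] = vi;   /* branch 1 of the fork */
        a[j] = vj;   /* branch 2 of the fork */
        /* implicit join */
    }

In the synthesis setting, the assertion is discharged symbolically for every input rather than checked at run time, which is what guarantees freedom from races instead of merely detecting them.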
by Jeremy Edward Sharpe.
M. Eng.
APA, Harvard, Vancouver, ISO, and other styles
48

Hasan, Sami Kadhim. "FPGA implementations for parallel multidimensional filtering algorithms." Thesis, University of Newcastle upon Tyne, 2013. http://hdl.handle.net/10443/2159.

Full text
Abstract:
One-dimensional and multidimensional raw data collections introduce noise and artifacts, which need to be removed by an automated filtering system before further machine analysis. The need for automating wide-ranging filtering applications necessitates the design of generic filtering architectures, together with the development of multidimensional and extensive convolution operators. Consequently, the aim of this thesis is to investigate the problem of automated construction of a generic parallel filtering system. Serving this goal, performance-efficient FPGA implementation architectures are developed to realize parallel one- and multidimensional filtering algorithms. The proposed generic architectures provide a mechanism for fast FPGA prototyping of high-performance computations to obtain efficiently implemented performance indices of area, speed, dynamic power, throughput and computation rate as a complete package. These parallel filtering algorithms and their automated generic architectures tackle the major bottlenecks and limitations of existing multiprocessor systems in wordlength, input data segmentation, boundary conditions and inter-processor communication, in order to support high-data-throughput real-time applications on low-power architectures using a Xilinx Virtex-6 FPGA board. For the one-dimensional raw-signal filtering case, the mathematical model and architectural development of the generalized parallel 1-D filtering algorithms are presented using the 1-D block filtering method. Five generic architectures are implemented on a Virtex-6 ML605 board, evaluated and compared. A complete set of results on area, speed, power, throughput and computation rate is obtained and discussed as performance indices for the 1-D convolution architectures. A successful application of parallel 1-D cross-correlation is demonstrated. For the two-dimensional greyscale/colour image-processing case, new parallel 2-D/3-D filtering algorithms are presented and mathematically modelled using input decimation and output image reconstruction by interpolation. Ten generic architectures are implemented on the Virtex-6 ML605 board, evaluated and compared. Key results on area, speed, power, throughput and computation rate are obtained and discussed as performance indices for the 2-D convolution architectures. 2-D image reconfigurable processors are developed and implemented using single, dual and quad MAC FIR units. 3-D colour image processors are devised to act as 3-D colour filtering engines. A parallel 2-D cross-correlator engine is successfully developed as a parallel 2-D matched-filtering algorithm for locating any MRI slice within an MRI data stack library. Twelve 3-D MRI filtering operators are plugged in and adapted for biomedical imaging, including 3-D edge operators and 3-D noise-smoothing operators. Since three-dimensional greyscale/colour volumetric image applications are computationally intensive, a new parallel 3-D/4-D filtering algorithm is presented and mathematically modelled using volumetric data segmentation by decimation and output reconstruction by interpolation, after simultaneously and independently performing 3-D filtering. Eight generic architectures are developed and implemented on the Virtex-6 board, including 3-D spatial and FFT convolution architectures. Fourteen 3-D MRI filtering operators are plugged in and adapted for this particular biomedical imaging application, including 3-D edge operators and 3-D noise-smoothing operators. Three successful applications are presented: 4-D colour MRI (fMRI) filtering processors, a k-space MRI volume-data filter, and a 3-D cross-correlator.
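The 1-D block filtering method underlying these architectures can be sketched in software: the input is segmented, each segment is convolved independently, and TAPS-1 samples of history resolve the boundary conditions between segments. The C fragment below is an illustrative model of the idea under assumed sizes and names, not the FPGA architecture itself.

    #define TAPS 8   /* assumed FIR filter length */

    /* Convolve output samples [start, start + len) with an 8-tap FIR h.
     * The caller zero-pads TAPS-1 samples before x[0] and passes a
     * pointer just past the padding, so x[n - k] is always valid. */
    static void filter_block(const float *x, float *y,
                             long start, long len, const float h[TAPS])
    {
        for (long n = start; n < start + len; n++) {
            float acc = 0.0f;
            for (long k = 0; k < TAPS; k++)
                acc += h[k] * x[n - k];   /* crosses into the previous
                                             segment near block boundaries */
            y[n] = acc;
        }
    }

Blocks read, but never write, each other's samples, so every (start, len) segment can be assigned to its own filtering unit and run in parallel, which is the property the generic architectures exploit; the 2-D and 3-D decimation/interpolation schemes generalize the same segmentation to higher dimensions.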
APA, Harvard, Vancouver, ISO, and other styles
49

Mitchell, Kevin Nicholas Peter. "Implementations of process synchronisation, and their analysis." Thesis, University of Edinburgh, 1985. http://hdl.handle.net/1842/15411.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Yun, Paul M. "Parallel Bus Implementations in Satellite Communications Systems." International Foundation for Telemetering, 1988. http://hdl.handle.net/10150/615247.

Full text
Abstract:
International Telemetering Conference Proceedings / October 17-20, 1988 / Riviera Hotel, Las Vegas, Nevada
As the volume of linkages in satellite communications systems increases, the parallel bus between the various processors of the satellite becomes a bottleneck for transferring commands and data. The remedies to this problem are trivial in ground stations; however, the problem imposes severe restrictions on parallel bus implementation in satellite communications systems. The most severe restriction is the minimization of wire connections in the physical layer, to reduce weight, size and power consumption and to maximize reliability. Another restriction is the flexibility required in the link layer to accommodate the different characteristics of command and data messages. In this paper, implementations that overcome these restrictions in both the physical and link layers of the parallel bus will be discussed.
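One way to picture the link-layer flexibility argued for above is a single self-describing frame that carries either commands or data over the same narrow set of physical lines. The C layout below is purely an assumed illustration of that idea, not the format used in the paper.

    #include <stdint.h>

    /* Assumed 16-bit-wide bus: each transfer moves one 16-bit word, so
     * removing wires trades bus width for additional transfer cycles. */
    typedef struct {
        uint8_t  type;          /* 0 = command, 1 = data; one frame format
                                   adapts to both message characteristics */
        uint8_t  length;        /* payload length in 16-bit bus words */
        uint16_t payload[32];   /* sliced word by word onto the bus lines */
    } bus_frame_t;

A receiver dispatches on the header field instead of dedicated control wires, which keeps the physical layer minimal (weight, size, power) while leaving the link layer free to adapt to different message types.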
APA, Harvard, Vancouver, ISO, and other styles