Dissertations / Theses on the topic 'Sécurité informatique'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the top 50 dissertations / theses for your research on the topic 'Sécurité informatique.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.
Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.
Dacier, Marc. "Vers une évaluation quantitative de la sécurité informatique." Phd thesis, Institut National Polytechnique de Toulouse - INPT, 1994. http://tel.archives-ouvertes.fr/tel-00012022.
The formal models developed for the study of computer security do not offer the desired mathematical framework. The author shows that they adopt a worst-case assumption about user behaviour that is incompatible with realistic modelling. After showing, using the take-grant model, how to do away with this assumption, the author defines a new model, the privilege graph, which handles certain protection problems more effectively. He illustrates its use in the context of Unix systems.
Finally, the author proposes to evaluate security by computing the time and effort an intruder needs to violate the protection objectives. He shows how to define a mathematical framework capable of representing the system in order to obtain such measures. To this end, the privilege graph is transformed into a stochastic Petri net and its marking graph is derived. The measures are computed on this latter structure and their mathematical properties are proved. The author illustrates the usefulness of the model with results from a prototype developed to study the operational security of a Unix system.
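The measure Dacier proposes (the mean effort an intruder must spend to reach a state violating the protection objectives) can be sketched for an acyclic privilege graph without building the full marking graph. The graph, its rates and the state names below are invented for illustration; the thesis itself computes the measures on the marking graph of a stochastic Petri net.

```python
# Sketch: mean effort-to-security-failure on an acyclic privilege graph.
# Each edge (u -> v) carries a rate: the inverse of the mean effort needed
# to escalate from privilege set u to privilege set v. Assuming the attacker
# attempts all outgoing escalations in parallel (exponential efforts), the
# expected time to reach the target from node u satisfies:
#   E[u] = 1/L + sum_v (rate(u,v)/L) * E[v],  with L = total outgoing rate.

def mean_effort(graph, node, target, memo=None):
    """Expected effort to reach `target` from `node` in an acyclic graph."""
    if memo is None:
        memo = {}
    if node == target:
        return 0.0
    if node in memo:
        return memo[node]
    edges = graph[node]  # list of (successor, rate) pairs
    total_rate = sum(rate for _, rate in edges)
    expected = 1.0 / total_rate  # mean time until the first escalation fires
    for succ, rate in edges:
        expected += (rate / total_rate) * mean_effort(graph, succ, target, memo)
    memo[node] = expected
    return expected

# Hypothetical privilege graph: 'user' can attack 'admin' directly (hard)
# or via 'operator' (two easier steps).
graph = {
    "user": [("operator", 1.0), ("admin", 1.0)],
    "operator": [("admin", 2.0)],
    "admin": [],
}
print(mean_effort(graph, "user", "admin"))  # 0.75 with these rates
```

Raising an edge's rate (making an escalation easier) lowers the mean effort, which is the intuition behind using such a measure to compare configurations of an operational system.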
Sadde, Gérald. "Sécurité logicielle des systèmes informatiques : aspects pénaux et civils." Montpellier 1, 2003. http://www.theses.fr/2003MON10019.
Serme, Gabriel. "Modularisation de la sécurité informatique dans les systèmes distribués." Thesis, Paris, ENST, 2013. http://www.theses.fr/2013ENST0063/document.
Addressing security in the software development lifecycle is still an open issue today, especially in distributed software. Addressing security concerns requires specific know-how, which means that security experts must collaborate with application programmers to develop secure software. Object-oriented and component-based development is commonly used to support collaborative development and to improve scalability and maintenance in software engineering. Unfortunately, those programming styles do not lend themselves well to supporting collaborative development activities in this context, as security is a cross-cutting concern that breaks object or component modules. In this thesis we investigated several modularization techniques that address these issues. We first introduce the use of aspect-oriented programming to support secure programming in a more automated fashion and to minimize the number of vulnerabilities introduced in applications at the development phase. Our approach especially focuses on the injection of security checks to protect from vulnerabilities like input manipulation. We then discuss how to automate the enforcement of security policies programmatically and modularly. We first focus on access control policies in web services, whose enforcement is achieved through the instrumentation of the orchestration mechanism. We then address the enforcement of privacy protection policies through the expert-assisted weaving of privacy filters into software. We finally propose a new type of aspect-oriented pointcut capturing the information flow in distributed software to unify the implementation of our different security modularization techniques.
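The first technique mentioned, weaving input-validation checks into business code through aspects, can be roughly imitated in plain Python with a decorator playing the role of pointcut-plus-advice. The pattern and the function names are illustrative only, not taken from the thesis, which works with compile-time aspect weaving.

```python
import re
from functools import wraps

# A very small "aspect": advice that checks every string argument against
# an injection pattern before the business logic runs. Real AOP weaves this
# at compile or load time over a pointcut; here a decorator plays that role.
SQL_META = re.compile(r"(--|;|')")

def reject_sql_metacharacters(func):
    @wraps(func)
    def advice(*args, **kwargs):
        for value in list(args) + list(kwargs.values()):
            if isinstance(value, str) and SQL_META.search(value):
                raise ValueError(f"suspicious input rejected: {value!r}")
        return func(*args, **kwargs)  # proceed() in AOP terms
    return advice

@reject_sql_metacharacters
def find_user(name):
    # Stand-in for a database lookup built from the input.
    return f"SELECT * FROM users WHERE name = '{name}'"

print(find_user("alice"))             # passes the check
try:
    find_user("alice' OR '1'='1")     # classic injection attempt
except ValueError as err:
    print(err)                        # the advice blocks the call
```

The appeal of the aspect-oriented version over this sketch is that the check is declared once and applied to every matching join point, instead of being repeated decorator by decorator.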
Vache Marconato, Géraldine. "Evaluation quantitative de la sécurité informatique : approche par les vulnérabilités." Phd thesis, INSA de Toulouse, 2009. http://tel.archives-ouvertes.fr/tel-00462530.
This thesis presents a new approach for the quantitative security evaluation of computer systems. The main objective of this work is to define and evaluate several quantitative measures. These measures are probabilistic and aim at quantifying the influence of the environment on the security of the computer system, considering vulnerabilities. Initially, we identified the three factors that have a strong influence on the system state: 1) the vulnerability life cycle, 2) the attacker's behaviour and 3) the administrator's behaviour. We studied these three factors and their interdependencies and distinguished two main scenarios based on the nature of the vulnerability discovery, i.e. malicious or non-malicious. This step allowed us to identify the different states of the system with respect to the vulnerability exploitation process and to define measures relating to the states of the system: vulnerable, exposed, compromised, patched and secure. To evaluate these measures, we modelled the process of system compromise by vulnerability exploitation. Afterwards, we characterized the vulnerability life cycle events quantitatively, using real data from a vulnerability database, in order to assign realistic values to the parameters of the models. The simulation of these models enabled us to obtain the values of the measures we had defined. Finally, we studied how to extend the modelling to consider several vulnerabilities. This approach thus allows the evaluation of measures quantifying the influence of several factors on the security of the system.
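The kind of measure evaluated by simulating such models can be illustrated with a toy case: once an exploit exists, the system ends up compromised if exploitation occurs before patching. With exponentially distributed delays this race has a closed form, which a small Monte Carlo run reproduces. The rates below are invented; the thesis uses far richer models whose parameters are fitted to real vulnerability data.

```python
import random

def p_compromised(exploit_rate, patch_rate, runs=20000, seed=1):
    """Estimate P(exploitation happens before patching) by simulation.
    Analytically this is exploit_rate / (exploit_rate + patch_rate)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(runs):
        t_exploit = rng.expovariate(exploit_rate)  # attacker's delay
        t_patch = rng.expovariate(patch_rate)      # administrator's delay
        if t_exploit < t_patch:
            hits += 1
    return hits / runs

estimate = p_compromised(exploit_rate=2.0, patch_rate=3.0)
print(estimate)  # close to the analytic value 2 / (2 + 3) = 0.4
```

The same simulation skeleton extends naturally to more states (vulnerable, exposed, compromised, patched, secure) by drawing a delay per enabled transition and following the earliest one.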
Guyeux, Christophe. "Désordre des itérations chaotiques et leur utilité en sécurité informatique." Besançon, 2010. http://www.theses.fr/2010BESA2019.
For the first time, the divergence and disorder properties of "chaotic iterations", a tool taken from discrete mathematics, are studied. After using discrete mathematics to deduce situations of non-convergence, these iterations are modeled as a dynamical system and studied topologically within the framework of the mathematical theory of chaos. We prove that the adjective "chaotic" is well chosen: these iterations are chaotic according to the definitions of Devaney, Li-Yorke, expansivity, topological entropy, the Lyapunov exponent, and so on. These properties have been established for a topology different from the order topology, so the consequences of this choice are discussed. We show that these chaotic iterations can be computed without any loss of properties, and that it is possible to circumvent the problem of the finiteness of computers to obtain programs that are proven chaotic in the sense of Devaney, etc. The procedure proposed in this document is followed to generate a digital watermarking algorithm and a hash function, which are chaotic in the strongest possible sense. In each case, the advantage of being chaotic as defined in the mathematical theory of chaos is justified, the properties to check are chosen depending on the objectives to reach, and the programs are evaluated. A novel notion of security for steganography is introduced, to address the lack of tools for estimating the strength of an information hiding scheme against certain types of attacks. Finally, two solutions to the problem of secure data aggregation in wireless sensor networks are proposed.
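The "chaotic iterations" studied here update, at each step, only a strategy-chosen subset of the cells of a Boolean vector, the other cells keeping their value. A minimal sketch with the vectorial negation as iteration function follows; the initial state and the strategy are arbitrary examples, not ones used in the thesis.

```python
def chaotic_iterations(x, strategy, f):
    """Discrete 'chaotic iterations': at step n, only the cells listed in
    strategy[n] are updated to their value under f; all other cells keep
    their previous value. Returns the whole trajectory of states."""
    x = list(x)
    states = [tuple(x)]
    for cells in strategy:
        fx = f(x)                # image of the whole state under f...
        for i in cells:
            x[i] = fx[i]         # ...but only the selected cells are updated
        states.append(tuple(x))
    return states

# Iteration function used as the running example: vectorial negation.
negation = lambda x: [1 - b for b in x]

# Arbitrary initial state and strategy (sequence of sets of updated cells).
trajectory = chaotic_iterations([0, 1, 0], [{0}, {2}, {0, 1}], negation)
print(trajectory)
# [(0, 1, 0), (1, 1, 0), (1, 1, 1), (0, 0, 1)]
```

The disorder the thesis formalizes comes from the strategy: feeding an unpredictable cell-selection sequence into this scheme is what the derived watermarking and hash constructions exploit.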
Antakly, Dimitri. "Apprentissage et vérification statistique pour la sécurité." Thesis, Nantes, 2020. http://www.theses.fr/2020NANT4015.
The main objective of this thesis is to combine the advantages of probabilistic graphical model learning and formal verification in order to build a novel strategy for security assessments. The second objective is to assess the security of a given system by verifying whether it satisfies given properties and, if not, how far it is from satisfying them. We are interested in performing formal verification of this system based on event sequences collected from its execution. Consequently, we propose a model-based approach where a Recursive Timescale Graphical Event Model (RTGEM), learned from the event streams, is considered to be representative of the underlying system. This model is then used to check a security property. If the property is not verified, we propose a search methodology to find another close model that satisfies it. We discuss and justify the different techniques we use in our approach and we adapt a distance measure between Graphical Event Models. The distance between the learned "fittest" model and the proximal secure model found gives an insight into how far our real system is from verifying the given property. For the sake of completeness, we propose a series of experiments on synthetic data providing experimental evidence that we can attain the desired goals.
Cormier, Alexandre. "Modélisation et sécurité des réseaux." Thesis, Université Laval, 2007. http://www.theses.ulaval.ca/2007/25012/25012.pdf.
Ben Ghorbel, Meriam. "Administration décentralisée des politiques de sécurité." Télécom Bretagne, 2009. http://www.theses.fr/2009TELB0101.
Agosti, Pascal. "La signature : de la sécurité juridique à la sécurité technique." Montpellier 1, 2003. http://www.theses.fr/2003MON10012.
Clavier, Christophe. "De la sécurité physique des crypto-systèmes embarqués." Versailles-St Quentin en Yvelines, 2007. http://www.theses.fr/2007VERS0028.
In a world full of threats, the development of widespread digital applications has led to the need for a practical device containing cryptographic functions that provides for everyday needs: secure transactions, confidentiality of communications, identification of the subject or authentication for access to a particular service. Among the cryptographic embedded devices ensuring these functionalities, smart cards are certainly the most widely used. Their portability (a wallet may easily contain a dozen) and their ability to protect their data and programs against intruders make them the ideal "bunker" for key storage and for the execution of cryptographic functions during mobile usage requiring a high level of security. Whilst the design of mathematically robust (or even provably secure, in some models) cryptographic schemes is an obvious requirement, it is apparently insufficient in the light of the first physical attacks, published in 1996. Taking advantage of weaknesses related to the basic implementation of security routines, these threats include side-channel analysis, which obtains information about the internal state of the process, and the exploitation of induced faults, allowing certain cryptanalyses to be performed which otherwise would not have been possible. This thesis presents a series of research works covering the physical security of embedded cryptosystems. Two parts of this document are dedicated to the description of some attacks and to a study of the efficiency of conceivable countermeasures. A third part deals with that particular and still mainly unexplored area which considers the applicability of physical attacks when the cryptographic function is, partly or totally, unknown to the adversary.
Mendy, Norbert Lucien. "Les attaques et la sécurité des systèmes informatiques." Paris 8, 2006. http://www.theses.fr/2006PA082735.
Hacking activities appeared around 1980 with the first personal computers and have not stopped developing since. In the beginning, this practice was primarily individual and playful. Now it mainly consists of the activities of groups with very diverse motivations. Today, due to the development of electronic means of communication, data security concerns a wider public. This thesis first examines, from a technical and sociological point of view, attacks and defense mechanisms, and then proposes a new concept of security that is not solely centered on technical solutions but also takes into consideration the social dimension of the problem.
Bascou, Jean-Jacques. "Contribution à la sécurité des systèmes : une méthodologie d'authentification adaptative." Toulouse 3, 1996. http://www.theses.fr/1996TOU30253.
Full textAissaoui, Mehrez Hassane. "Sécurité pour les réseaux du futur : gestion sécurisée des identités." Electronic Thesis or Diss., Paris 6, 2015. http://www.theses.fr/2015PA066606.
Today, the Internet is radically changing our habits, especially with the massive influx of nomadic techniques, the Internet of Things, the growing use of grid computing, wireless networks and the emergence of new approaches in recent years. In particular, the virtualization of computing infrastructures, which made it possible to define a new model called Cloud Computing, introducing a fairly clear break with traditional models, can be perceived as a preparatory stage towards the Internet of the future. The implementation of these approaches allows, in a different way, the mutualization and organization of the computer system. It allows physical infrastructures to be dematerialized and applications to be hosted in remote containers. Therefore, the global architecture of the Internet has to evolve. It will rely strongly on these new approaches, in particular Cloud Computing and virtualization. However, no system is infallible, especially when resources are distributed and mutualized. These approaches raise a number of problems and directly involve security issues, which remain one of the main barriers to the adoption of these technologies. Like any new technology, Cloud Computing and virtualization create new risks, which are grafted onto the traditional threats of outsourcing: management of privilege separation, identity and access management, robustness of the virtualization software, virtual machine isolation, personal data protection, reversibility, privacy... The traditional Internet architecture cannot provide adequate solutions to the challenges raised by these new approaches: mobility, flexibility, security requirements, reliability and robustness.
Thus, a research project (SecFuNet: Security For Future Networks) was validated by the European Commission to provide some answers, to establish a state of the art of these security mechanisms and to carry out a comprehensive study of orchestration and integration techniques based on protection components within an overall security architecture.
Hourdin, Vincent. "Contexte et sécurité dans les intergiciels d'informatique ambiante." Nice, 2010. http://www.theses.fr/2010NICE4076.
In ubiquitous computing, context is key. Computer applications are extending their interactions with the environment: new inputs and outputs are used, such as sensors and other mobile devices interacting with the physical environment. Middleware, created in distributed computing to hide the complexity of lower layers, is then loaded with new concerns, such as taking the context into account, adaptation of applications, or security. A layered middleware representation of these concerns cannot express all their interdependencies. In pervasive computing, distribution is required to obtain contextual information, but it is also necessary to take the context into account in distribution, for example to restrict interactions between entities to a defined context. In addition, the asynchronous interactions used in those new environments require special attention when taking the context into account. Similarly, security is involved both in the middleware layers of distribution and in context-sensitivity. In this thesis we present a model taking context into account both in security and in distribution. Access control must evolve to incorporate dynamic and reactive authorization, based on information related to the environment or simply on the authentication information of entities. Contextual information evolves with its own dynamics, independent of applications. It is also necessary to detect context changes in order to re-evaluate the authorization. We experiment with this context-awareness, targeting interaction control, in the experimental framework WComp, derived from the SLCA/AA (Service Lightweight Component Architecture / Aspects of Assembly) model. SLCA allows the creation of dynamic middleware and applications for which functional decomposition is not translated into layers but into an interleaving of functionalities. Aspects of assembly are a mechanism for compositional adaptation of assemblies of components.
We use them to express our non-functional concerns and to compose them with existing applications in a deterministic and reactive manner. For this purpose, we introduce context-aware interaction control rules. The middleware thus allows us to adapt, according to context, our non-functional concerns and the behavior of the application.
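Context-aware interaction control of the kind described above can be caricatured as authorizations whose decision is re-evaluated against fresh contextual information on every request. The rule structure and the context keys below are invented for illustration and are not WComp's actual API.

```python
# Minimal sketch of context-dependent access control: each rule pairs an
# action with a predicate over the current context, and the decision is
# re-evaluated with fresh context on every request (reactive checking).

rules = [
    ("read_sensor", lambda ctx: True),                       # always allowed
    ("open_door",   lambda ctx: ctx["location"] == "home"),  # contextual
    ("view_camera", lambda ctx: ctx["authenticated"]
                                and ctx["hour"] in range(8, 20)),
]

def authorize(action, context):
    """Grant `action` iff some rule for that action holds in the context."""
    return any(pred(context) for name, pred in rules if name == action)

ctx = {"location": "home", "authenticated": True, "hour": 21}
print(authorize("open_door", ctx))    # True: the user is at home
print(authorize("view_camera", ctx))  # False: outside the 8h-20h window

ctx["hour"] = 9       # a context change triggers a new evaluation
print(authorize("view_camera", ctx))  # True now
```

The point of the thesis' model is precisely that such re-evaluation cannot be bolted on as a separate layer: context changes must be detected and propagated to the authorization decision reactively.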
Habib, Lionel. "Formalisations et comparaisons de politiques et de systèmes de sécurité." Paris 6, 2011. http://www.theses.fr/2011PA066146.
Lacombe, Eric. "Sécurité des noyaux de systèmes d'exploitation." Phd thesis, INSA de Toulouse, 2009. http://tel.archives-ouvertes.fr/tel-00462534.
Pascual, Nathalie. "Horloges de synchronisation pour systèmes haute sécurité." Montpellier 2, 1992. http://www.theses.fr/1992MON20145.
Boisseau, Alexandre. "Abstractions pour la vérification de propriétés de sécurité de protocoles cryptographiques." Cachan, Ecole normale supérieure, 2003. https://theses.hal.science/tel-01199555.
Since the development of computer networks and electronic communications, it has become important for the public to use secure electronic communications. Cryptographic considerations are part of the answer to the problem, and cryptographic protocols describe how to integrate cryptography into actual communications. However, even if the encryption algorithms are robust, attacks may still remain possible due to logical flaws in the protocols, and formal verification can be used to avoid such flaws. In this thesis, we use abstraction techniques to formally prove various types of properties: secrecy and authentication properties, fairness properties and anonymity.
Laganier, Julien. "Architecture de sécurité décentralisée basée sur l'identification cryptographique." Lyon, École normale supérieure (sciences), 2005. http://www.theses.fr/2005ENSL0354.
This thesis studies the problem of securing large-scale and dynamic communication, execution and storage infrastructures. The majority of existing security solutions rely on the existence of a global public key infrastructure. The deployment of such a global infrastructure is problematic at the technical, administrative and political levels. In order to free solutions from this constraint, we propose a decentralized security approach based on cryptographic identifiers (CBID, CGA, HBA and HIP) and delegation (SPKI certificates). We show that this security approach better fits the intrinsically decentralized nature of large-scale, shared and open systems like the Internet or grid computing. To validate the approach, we instantiate it into several security solutions for existing protocols using the IP security (IPsec) and host identity (HIP) protocols. In particular, security solutions for the IPv6 (Internet Protocol version 6) network layer and its ND (Neighbor Discovery) component, as well as for the virtualization of the execution, communication and storage infrastructure of grid computing (Supernet, HIPernet and Internet Backplane Protocol), are presented and analysed.
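The common idea behind identifiers such as CBID and CGA is to derive the identifier from a hash of the owner's public key, so the binding between identifier and key can be verified without any global PKI. A minimal sketch follows; the key encoding and the flat 64-bit truncation are simplifications (real CGAs, per RFC 3972, additionally hash a modifier, the subnet prefix and a collision count).

```python
import hashlib

def crypto_identifier(public_key: bytes, bits: int = 64) -> str:
    """Derive an identifier by truncating a hash of the public key.
    Anyone can check the binding by re-hashing the presented key."""
    digest = hashlib.sha256(public_key).digest()
    return digest[: bits // 8].hex()

def verify_binding(identifier: str, public_key: bytes, bits: int = 64) -> bool:
    """Verify that `identifier` was derived from `public_key`."""
    return crypto_identifier(public_key, bits) == identifier

alice_key = b"-----hypothetical public key bytes-----"
ident = crypto_identifier(alice_key)
print(ident, verify_binding(ident, alice_key))   # binding verifies
print(verify_binding(ident, b"some other key"))  # False: wrong key
```

A peer that proves possession of the matching private key (by signing a challenge) thereby proves ownership of the identifier itself, which is what makes the approach self-certifying and PKI-free.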
Delpech, Vincent. "Dématérialisation et sécurité des transactions." Bordeaux 4, 1996. http://www.theses.fr/1996BOR40029.
Under the influence of new communication technologies, the written support of the contract is gradually disappearing from the drafting of such documents. In the meantime, the exchange of consent finds a new, dematerialized form on an electronic medium. But dematerialization does not mean the suppression of all signs of existence. Because of the need to ensure the security of transactions, notably in relation to the rules of evidence, the legal means, both statutory and contractual, of legitimating computer information must be clarified. This research is necessary, since these new techniques are reliable and adapted to the speed of today's economic relationships. These techniques provide identification of the parties, authentication of the contents of the contract, and a guarantee against any attack on their intellectual integrity. These elements justify an autonomous acknowledgement of new forms of writing, but also a different reading of the traditional concept of what a written support is.
Ouedraogo, Wendpanga Francis. "Gestionnaire contextualisé de sécurité pour des « Process 2.0 »." Thesis, Lyon, INSA, 2013. http://www.theses.fr/2013ISAL0132/document.
To fit the competitive and globalized economic environment, companies, and especially SMEs/SMIs, are more and more involved in collaborative strategies, requiring organizational adaptation to fit these openness constraints and increase agility (i.e. the ability to adapt and fit structural changes). While Web 2.0 allows data sharing (images, knowledge, CVs, micro-blogging, etc.) and while SOA aims at increasing service reuse and service interoperability, no process-sharing strategies have been developed. To overcome this limit, we propose to share processes as well, setting up a "Process 2.0" framework that allows sharing activities. This supports agile collaborative process enactment by searching for and composing services depending on the required business organization and the service semantics. Coupled with the cloud computing deployment opportunity, this strategy will lead to a stronger coupling of the Business, SaaS and PaaS levels. However, this challenges the management of security constraints in a dynamic environment. The development of security policies is usually based on a systematic risk analysis, reducing the risks by adopting appropriate countermeasures. These approaches are complex and consequently difficult for end users to implement. Moreover, risks are assessed in a "closed" and static environment, so these methods do not fit the dynamic business service composition approach, as services can be composed and run in different business contexts (including the functionalities provided by each service, the organization (who does what?), the coordination between these services and also the kind of data (strategic or not...) that are used and exchanged) and runtime environments (public vs. private platforms...). By analyzing this contextual information, we can define security constraints specific to each business service, specify the convenient security policies and implement appropriate countermeasures.
In addition, it is also necessary to be able to propagate the security policies throughout the process to ensure consistency and overall security during process execution. To address these issues, we propose to study the definition of security policies by coupling Model Driven Security and pattern-based engineering approaches to generate and deploy convenient security policies and protection means depending on the (possibly untrusted) runtime environment. To this end, we propose a set of security patterns which meet the business- and platform-related security needs used to set the security policies. The selection and implementation of these security policies is achieved thanks to context-based patterns. Simple to understand by non-specialists, these patterns are used by the model transformation process to generate the policies in a Models@Runtime strategy, so that security services are selected and orchestrated at runtime to provide a constant quality of protection (independent of the deployment).
Ould-Slimane, Hakima. "Réécriture de programmes pour une application effective des politiques de sécurité." Thesis, Université Laval, 2011. http://www.theses.ulaval.ca/2011/28026/28026.pdf.
During the last decades, we have witnessed a massive automation of society at all levels. Unfortunately, this technological revolution came with its burden of disadvantages. Indeed, a new generation of criminals emerged and is benefiting from the continuous progress of information technologies to carry out more illegal activities. Thus, to protect computer systems, it has become crucial to rigorously define security policies and to provide the effective mechanisms required to enforce them. Usually, the main objective of a security mechanism is to control the executions of a piece of software and ensure that it will never violate the enforced security policy. However, the majority of security mechanisms are based on ad hoc methods and thus are not effective. In addition, they are unreliable, since there is no evidence of their ability to enforce security policies. Therefore, there is a need to develop novel security mechanisms that allow enforcing security policies in a formal, correct and accurate way. In this context, our thesis targets the formal characterization of effective security policy enforcement based on program rewriting. By "effective" enforcement we mean preventing all the "bad" behaviors of a program while keeping all its "good" behaviors. In addition, effective enforcement should not compromise the semantics of controlled programs. We have chosen program rewriting because of its great power compared to other security mechanisms, which are either permissive or too restrictive. The main contributions of this thesis are the following:
– Formal characterization of the enforcement of safety properties through program rewriting. Safety properties represent the main class of properties usually enforced by security mechanisms.
– Formal characterization of the enforcement of any security property using program rewriting. This contribution shows how program rewriting allows the enforcement of security policies that no other class of security mechanisms can enforce.
– An algebraic approach as an alternative formal characterization of program-rewriting-based security enforcement. In this contribution, we investigate an algebraic formal model in order to reduce the gap between the specification and the implementation of program-rewriting-based security mechanisms.
Humphries, Christopher. "User-centred security event visualisation." Thesis, Rennes 1, 2015. http://www.theses.fr/2015REN1S086/document.
Managing the vast quantities of data generated in the context of information system security becomes more difficult every day. Visualisation tools are one solution to help face this challenge. They represent large quantities of data in a synthetic and often aesthetic way to help understand and manipulate them. In this document, we first present a classification of security visualisation tools according to their objectives. These can be one of three: monitoring (following events in real time to identify attacks as early as possible), analysis (the a posteriori exploration and manipulation of an important quantity of data to discover important events) or reporting (the a posteriori representation of known information in a clear and synthetic fashion to help communication and transmission). We then present ELVis, a tool capable of representing security events from various sources coherently. ELVis automatically proposes appropriate representations as a function of the type of information (time, IP address, port, data volume, etc.). In addition, ELVis can be extended to accept new sources of data. Lastly, we present CORGI, a successor to ELVis which allows the simultaneous manipulation of multiple data sources in order to correlate them. With the help of CORGI, it is possible to filter security events from a data source by multiple criteria, which facilitates following events on the information systems currently being analysed.
Saadi, Rachid. "The Chameleon : un système de sécurité pour utilisateurs nomades en environnements pervasifs et collaboratifs." Lyon, INSA, 2009. http://theses.insa-lyon.fr/publication/2009ISAL0040/these.pdf.
While trust is easy to set up between the known participants of a communication, the evaluation of trust becomes a challenge when confronted with an unknown environment. It is likely that collaboration in a mobile environment will occur between totally unknown parties. An approach to handling this situation has long been to establish third parties that certify the identities, roles and/or rights of both participants in a collaboration. In a completely decentralized environment, this option is not sufficient. To decide upon access, one prefers to rely only on what is presented by the other party and on the trust that can be established, directly by knowing the other party or indirectly, and vice versa. Hence a mobile user must, for example, present a set of certificates known in advance, and the visited site may use these certificates to determine the trust it can place in this user and thus potentially allow adapted access. In this scheme the mobile user must know in advance where she wants to go and what she should present as identification. This is difficult to achieve in a global environment. Moreover, the user would like to have an evaluation of the site she is visiting, in order to allow limited access to her resources. Finally, a user does not want to bother about managing her security at a fine grain, while still preserving her privacy. Ideally, the process should be automated. Our work led us to define the Chameleon architecture. Nomadic users can thus behave as chameleons by taking on the "colours" of their environments, enriching their nomadic accesses. It relies on a new trust model, T2D, which is characterized by its support for the disposition of trust. Each nomadic user is identified by a new morph certification model called X316. The X316 allows the trust evaluation to be carried out together with the roles of the participants, while allowing some of its elements to be hidden, preserving the privacy of its users.
Godonou, Théophane Gloria. "Combinaison d'approche statique et dynamique pour l'application de politiques de sécurité." Thesis, Université Laval, 2014. http://www.theses.ulaval.ca/2014/30434/30434.pdf.
Full text
In this Master's thesis, we present an approach to enforcing information-flow policies using a multi-valued type-based analysis followed by instrumentation when needed. The target is a core imperative language. Our approach aims at reducing the false positives generated by static analysis, and at reducing execution overhead by instrumenting only when needed. False positives arise in the analysis of real computing systems when some information is missing at compile time, for example the name of a file and, consequently, its security level. The key idea of our approach is to distinguish between negative and "maybe" responses. Instead of rejecting possibly faulty commands, they are identified and annotated for the second step of the analysis; the positive and negative responses are treated as usual. This work is a hybrid security enforcement mechanism: the maybe-secure points of the program detected by our type-based analysis are instrumented with dynamic tests. The basic type-based analysis was reported by Desharnais et al. [12]; this work deals with the modification of the type system and the instrumentation step, and has been accepted for publication [7]. The novelty of our approach is the handling of four security types, but we also treat variables and channels in a special way. Programs interact via communication channels. Secrecy levels are associated with channels rather than with variables, whose security levels change according to the information they store. Thus the analysis is flow-sensitive.
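The hybrid static/dynamic idea in this abstract can be sketched as a tiny lattice check: an assignment is accepted, rejected, or flagged for run-time instrumentation when a security level is unknown at compile time. The level names and the `check_assignment` helper are illustrative, not taken from the thesis.

```python
# Sketch of a multi-valued information-flow check: besides the usual
# Low/High levels, a value may be Unknown at compile time (e.g. a file
# whose security level is only known at run time).  Instead of rejecting
# such commands, they are flagged for run-time instrumentation.
L, H, UNKNOWN = "L", "H", "UNKNOWN"

def check_assignment(source_level, sink_level):
    """Return 'accept', 'reject', or 'instrument' for sink := source."""
    if UNKNOWN in (source_level, sink_level):
        return "instrument"          # maybe-secure: add a dynamic test
    if source_level == H and sink_level == L:
        return "reject"              # definite flow from High to Low
    return "accept"                  # definitely secure

print(check_assignment(L, H))        # accept: Low data into High sink
print(check_assignment(H, L))        # reject: explicit leak
print(check_assignment(UNKNOWN, L))  # instrument: decide at run time
```

The "instrument" outcome is what reduces false positives: only those points pay a run-time cost.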
Fall, Marfall N'Diaga. "Sécurisation formelle et optimisée de réseaux informatiques." Thesis, Université Laval, 2010. http://www.theses.ulaval.ca/2010/27543/27543.pdf.
Full text
Firewalls are crucial elements in enforcing network security policies. They have been widely deployed for securing private networks, but their configuration remains complex and error-prone. In recent years, many techniques and tools have been proposed to configure firewalls correctly. However, most existing works are informal and do not take into account the global performance of the network or other qualities of its services (QoS). In this thesis we introduce a formal approach that allows a network to be configured formally and optimally, so that a given security policy is respected while the QoS is taken into account.
Faurax, Olivier. "Méthodologie d'évaluation par simulation de la sécurité des circuits face aux attaques par faute." Aix-Marseille 2, 2008. http://theses.univ-amu.fr.lama.univ-amu.fr/2008AIX22106.pdf.
Full text
Microelectronic security devices are more and more present in our lives (smartcards, SIM cards) and they contain sensitive information that must be protected (account numbers, cryptographic keys, personal data). Recently, attacks on cryptographic algorithms based on the use of faults have appeared. Adding a fault during a device's computation enables one to obtain a faulty result. Using a certain amount of correct results and the corresponding faulty ones, it is possible to extract secret data and, in some cases, complete cryptographic keys. However, the physical perturbations used in practice (laser, radiation, power glitches) rarely match the faults needed to successfully perform theoretical attacks. In this work, we propose a methodology for testing circuits under fault attacks using simulation. Simulation makes it possible to test the circuit before its physical realization, but requires a lot of time. That is why our methodology helps the user to choose the most important faults in order to significantly reduce the simulation time. The tool and the corresponding methodology have been tested on a cryptographic circuit (AES) using a delay fault model. We showed that the use of delays to create faults can generate faults suitable for performing known attacks.
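The principle of simulated fault injection can be illustrated on a deliberately tiny example: flip one bit of the intermediate state of a miniature two-round cipher and observe that the faulty ciphertext differs from the correct one. The cipher, keys and fault position below are invented for illustration; the thesis works at circuit level with a delay fault model on AES.

```python
# Toy illustration of simulated fault injection: a 4-bit, 2-round
# S-box cipher, with an optional bit-flip injected after round 0.
SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]  # the PRESENT S-box

def encrypt(nibble, keys, fault_bit=None):
    state = nibble
    for rnd, k in enumerate(keys):
        state = SBOX[state ^ k]
        if fault_bit is not None and rnd == 0:
            state ^= 1 << fault_bit        # inject the fault after round 0
    return state

keys = [0x3, 0xA]
correct = encrypt(0x7, keys)
faulty = encrypt(0x7, keys, fault_bit=2)
print(hex(correct), hex(faulty))           # the fault reaches the output
print(correct != faulty)
```

A fault-attack simulator repeats this over many fault positions and timings; the methodology in the thesis is precisely about pruning that search space.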
Turuani, Mathieu. "Sécurité des protocoles cryptographiques : décidabilité et complexité." Nancy 1, 2003. http://www.theses.fr/2003NAN10223.
Full textRibeiro, Marcelo Alves. "Méthodes formelles pour la vérification probabiliste de propriétés de sécurité de protocoles cryptographiques." Thesis, Université Laval, 2011. http://www.theses.ulaval.ca/2011/28121/28121.pdf.
Full text
Certain cryptographic protocols were specifically developed to provide security properties in our communication networks. For the purpose of assuring that a protocol fulfils its security properties, probabilistic model checking is undertaken to determine whether the protocol introduces a flaw when its probabilistic behavior is considered. We wanted to use a probabilistic (and also non-deterministic) method of protocol modeling to determine whether this method may substitute for others already used for checking flaws in cryptographic protocols. This leads us to state the objective of our scientific research as the quantitative analysis of flaws in cryptographic protocols.
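The quantitative question asked here (with what probability does a protocol reach a flawed state?) reduces, for a purely probabilistic model, to reachability in a Markov chain. The chain below is a made-up abstraction, not a model from the thesis; it only shows the mechanics that tools like PRISM automate.

```python
# Minimal sketch of probabilistic verification: compute the probability
# of reaching a "flaw" state in a small discrete-time Markov chain by
# iterating the reachability equations to a fixed point.
# States: 0 = start, 1 = retry, 2 = flaw (absorbing), 3 = safe (absorbing)
P = {
    0: [(1, 0.5), (2, 0.1), (3, 0.4)],
    1: [(0, 0.8), (2, 0.2)],
}

def reach_flaw(iters=200):
    p = {0: 0.0, 1: 0.0, 2: 1.0, 3: 0.0}  # p[s] = Pr(reach state 2 from s)
    for _ in range(iters):
        for s, succ in P.items():
            p[s] = sum(w * p[t] for t, w in succ)
    return p[0]

print(round(reach_flaw(), 4))  # converges to 1/3 for this chain
```

Solving the two linear equations by hand (p0 = 0.5 p1 + 0.1, p1 = 0.8 p0 + 0.2) confirms the fixed point p0 = 1/3.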
Kabil, Alexandre. "CyberCOP 3D : visualisation 3D interactive et collaborative de l'état de sécurité d'un système informatique." Thesis, Ecole nationale supérieure Mines-Télécom Atlantique Bretagne Pays de la Loire, 2019. http://www.theses.fr/2019IMTA0166.
Full text
The aim of this thesis was to study the use of Collaborative Virtual Environments (CVE) for analyzing the security state of computer systems, also called Cyber Situational Awareness (CSA). After studying CSA models and tools, we had the opportunity to visit the Security Operations Centers (SOCs) of four industrial partners of the CyberCNI chair, in order to better understand the needs and expectations of cyber analysts. These visits were made as part of a collaborative activity analysis protocol and allowed us to propose a model, the 3D Cyber-COP. Based on this model and a model of the WannaCry ransomware, we developed a CVE and a simplified scenario engine that allows users to design their own alert analysis scenarios. We also performed a usability evaluation of a virtual environment for alert analysis with a panel of novice users.
Santana de Oliveira, Anderson. "Réécriture et modularité pour les politiques de sécurité." Thesis, Nancy 1, 2008. http://www.theses.fr/2008NAN10007/document.
Full text
In this thesis we address the modular specification and analysis of flexible, rule-based policies. We introduce the use of the strategic rewriting formalism in this domain, so that our framework inherits techniques, theorems, and tools from rewriting theory. This allows us to easily state and verify important policy properties such as the absence of conflicts, for instance. Moreover, we develop rewrite-based methods to verify elaborate policy properties such as the safety problem in access control and the detection of information flows in mandatory policies. We show that strategies are important for preserving policy properties under composition. The rich strategy languages available in systems like Tom, Stratego, Maude, ASF+SDF and Elan allow us to define several kinds of policy combiners. Finally, in this thesis we provide a systematic methodology for enforcing rewrite-based policies on existing applications through aspect-oriented programming. Policies are woven into the existing code, resulting in programs that implement a reference monitor for the given policy. Reuse is improved since policies and programs can be maintained independently of each other.
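The role of strategies as policy combiners can be sketched in a few lines: rules rewrite a request to a decision (or leave it unchanged), and a combinator decides how rules compose. The rule names and the two combinators below are illustrative only; the thesis works in full strategic rewriting systems such as Tom or Elan.

```python
# Sketch of a rule-based policy evaluated under two different
# combination strategies: "first applicable" vs "deny overrides".
def rule_admin(req):
    return "permit" if req["role"] == "admin" else None

def rule_blacklist(req):
    return "deny" if req["user"] in {"mallory"} else None

def first_applicable(rules):
    def strategy(req):
        for r in rules:
            d = r(req)
            if d is not None:
                return d        # first rule that rewrites the request wins
        return "deny"           # default decision when no rule applies
    return strategy

def deny_overrides(rules):
    def strategy(req):
        decisions = [r(req) for r in rules]
        if "deny" in decisions:
            return "deny"       # any deny dominates, regardless of order
        return "permit" if "permit" in decisions else "deny"
    return strategy

req = {"user": "mallory", "role": "admin"}
print(first_applicable([rule_admin, rule_blacklist])(req))  # permit
print(deny_overrides([rule_admin, rule_blacklist])(req))    # deny
```

The same rule set yields different decisions under the two strategies, which is exactly why properties must be shown to be preserved under composition.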
Oualha, Nouha. "Sécurité et coopération pour le stockage de données pair-à-pair." Paris, ENST, 2009. http://www.theses.fr/2009ENST0028.
Full text
Self-organizing algorithms and protocols have recently received a lot of interest in mobile ad-hoc networks as well as in peer-to-peer (P2P) systems, as illustrated by file sharing or VoIP. P2P storage, whereby peers collectively leverage their storage resources to ensure the reliability and availability of user data, is an emerging field of application. P2P storage however brings up far-reaching security issues that have to be dealt with, in particular with respect to peer selfishness, as illustrated by free-riding attacks. The continuous observation of the behavior of peers and the monitoring of the storage process are important requirements for securing a storage system against such attacks. Detecting peer misbehavior requires appropriate primitives like proofs of data possession, a form of proof of knowledge whereby the holder interactively tries to convince the verifier that it possesses some data without actually retrieving it or copying it to the verifier. We propose and review several proof-of-data-possession protocols. In particular, we study how data verification and maintenance can be handed over to volunteers to accommodate peer churn. We then propose two mechanisms, one based on reputation and the other on remuneration, for enforcing cooperation by means of such data possession verification protocols, periodically delivered by storage peers. We assess the effectiveness of such incentives with game-theoretical techniques, in particular discussing the use of non-cooperative one-stage and repeated Bayesian games as well as evolutionary games.
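The core idea of a proof of data possession can be sketched as a nonce-based challenge-response: the verifier precomputes keyed digests of the data before handing it over, then later challenges the holder. This is only the principle; real protocols (including those studied in the thesis) avoid storing one digest per challenge and support unbounded verification.

```python
# Simplified sketch of a proof-of-data-possession exchange.
import hashlib
import os

def digest(data, nonce):
    return hashlib.sha256(nonce + data).hexdigest()

data = b"file contents entrusted to a storage peer"

# Verifier side: prepare a few challenges before handing the data over.
challenges = [os.urandom(16) for _ in range(3)]
expected = [digest(data, n) for n in challenges]

# Later: the holder must answer a challenge using the stored data.
answer = digest(data, challenges[0])
print(answer == expected[0])            # a holder with the data succeeds
tampered = digest(data[:-1], challenges[0])
print(tampered == expected[0])          # a holder who lost bytes fails
```

Because the nonce is unpredictable, the holder cannot answer from a precomputed digest; it must keep the data itself.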
Oundjian-Barts, Hélène. "Droit, sécurité et commerce électronique." Aix-Marseille 3, 2007. http://www.theses.fr/2007AIX32063.
Full text
Since the French law of January 1978, the Internet has become a new means of exchange that has stirred up the functioning of the whole world economy and, due to discrepancies between regulations, France has been compelled to adopt appropriate laws concerning proof, cryptology and specific definitions, for instance with the law of 21 June 2004. In this way it took part in the apparition of the lex electronica, similar to its ancestor the lex mercatoria. As contracting on the Internet has become usual, these plans of action are fully justified, and they give security questions a particular sharpness because of the many possible sites of infringement and the increasing number of web partners, which make the identification, localisation and catching of trespassers very uncertain, as well as the determination of the applicable law. France went on with its modernisation by adopting the new regulation on computing and liberties, and the DADVSI law in 2006 on royalties and neighbouring rights in the information society. Within the framework of a strategic security management policy responding to the investment optimisation logic of the cyber-company, and in order to make web partners sensitive to this problem, technologies and law (whose global consistency will be ensured by the courts) will reinforce confidence and the respect of rights, serving a durable development of electronic commerce.
Trinh, Viet Cuong. "Sécurité et efficacité des schémas de diffusion de données chiffrés." Paris 8, 2013. http://octaviana.fr/document/181103516#?c=0&m=0&s=0&cv=0.
Full text
In this thesis, we work in the domain of broadcast encryption and traitor tracing. Our contributions can be divided into three parts. We first recall the three tracing models: the non-black-box tracing model, the single-key black-box tracing model, and the general black-box tracing model. While the last model is the strongest, the two former models also cover many practical scenarios. We propose an optimal public-key traitor tracing scheme in the first two models. We then consider two new advanced attacks (the pirate evolution attack and Pirates 2.0) which were proposed to point out some weaknesses of the schemes in the subset-cover framework, or more generally of combinatorial schemes. Since these schemes have been widely implemented in practice, it is necessary to find counter-measures to these two types of attacks. In our second contribution, we build two schemes which are relatively efficient and resist these two types of attacks well. In the last contribution, we study a generalized model for broadcast encryption which we call multi-channel broadcast encryption. In this context, the broadcaster can encrypt several messages to several target sets "at the same time". This covers many practical scenarios, such as pay-TV systems in which providers have to send various contents to different groups of users. We propose an efficient scheme with constant-size ciphertext.
Randrianajaina, Jean-Hubert. "Contribution à la sécurité des systèmes et réseaux d'information : l'architecture RE.V.E.S. un modèle de processeur de sécurité à base d'anteserveurs répartis." Paris 9, 1997. https://portail.bu.dauphine.fr/fileviewer/index.php?doc=1997PA090057.
Full text
The popularity of very large public servers such as Vidéotex and Web servers foretells that tomorrow's business corporations will be confronted with a demand based on heterogeneous software and hardware platforms, involving files, databases, and processes shared across networks. The constant threat of disasters such as Internet viruses is still a concern, casting permanent doubt on the credibility of large information networks. The industrial response to this problem seems to revolve around the concepts and security measures proposed by three dominant models: the Orange Book, the OSI standard 7498-2 and the security model of the distributed computing standards (DCE, NCA, ONC, CORBA). As for sensitive data transmitted from their original site to hostile environments, cryptographic techniques provide the best method of protection. However, as with every technological transition process, the implementation of all these mechanisms will probably take a considerable amount of time before businesses can really count on environments and products allowing them to realistically meet their needs. By means of an interchange protocol secured by a front-end entity based on security processes working together according to the client/server model, the objective of this thesis is to put forward an approach, independent of products, allowing data security inside all multi-agent and heterogeneous servers.
Chorfi, Redha. "Abstraction et vérification de programmes informatiques." Thesis, Université Laval, 2008. http://www.theses.ulaval.ca/2008/25710/25710.pdf.
Full textAissaoui, Mehrez Hassane. "Sécurité pour les réseaux du futur : gestion sécurisée des identités." Thesis, Paris 6, 2015. http://www.theses.fr/2015PA066606.
Full text
Today, the Internet is radically changing our habits, especially with the massive influx of nomadic technologies, the Internet of Things, the growing use of grid computing, wireless networks and the emergence of new approaches in recent years. In particular, the virtualization of computing infrastructures, which allowed the definition of a new model called Cloud Computing, introduces a rather clear break with traditional models and can be perceived as a preparatory stage towards the Internet of the future. The implementation of these approaches allows, in a different way, the mutualization and organization of the computer system: it makes it possible to dematerialize physical infrastructures and to deport applications onto distant containers. Therefore, the global architecture of the Internet must evolve, relying strongly on these new approaches, in particular Cloud Computing and virtualization. However, no system is infallible, especially when resources are distributed and mutualized. These approaches raise a number of problems and directly involve security issues, which remain one of the main barriers to the adoption of these technologies. Like any new technology, Cloud Computing and virtualization create new risks, which graft themselves onto the traditional threats: the outsourcing of management, privilege separation, identity and access management, the robustness of the virtualization software, virtual machine isolation, personal data protection, reversibility, privacy... The traditional Internet architecture cannot provide adequate solutions to the challenges raised by these new approaches: mobility, flexibility, security requirements, reliability and robustness.
Thus, a research project (SecFuNet: Security For Future Networks) was validated by the European Commission to provide some answers, to make a state of the art of these security mechanisms, and to carry out a comprehensive study of orchestration and integration techniques based on protection components within an overall security architecture.
Etrog, Jonathan. "Cryptanalyse linéaire et conception de protocoles d'authentification à sécurité prouvée." Versailles-St Quentin en Yvelines, 2010. http://www.theses.fr/2010VERS0025.
Full text
This Ph.D. thesis, devoted to symmetric cryptography, addresses two separate aspects of cryptology: first, the protection of messages using encryption algorithms and, second, the protection of privacy through authentication protocols. The first part concerns the study of linear cryptanalysis, while the second is devoted to the design of authentication protocols with proven security. Although introduced in the early 90s, linear cryptanalysis has recently experienced a revival due to the development of new variants. We are interested in both its practical and theoretical aspects. We first present a cryptanalysis of a reduced version of SMS4, the encryption algorithm used for WiFi in China; second, we introduce multilinear cryptanalysis and describe a new form of it. The second part of the thesis concerns the study of RFID authentication protocols respecting privacy. We define a model to formalize the notions of security for these protocols. We then propose two protocols, each achieving a compromise between strong unlinkability and resistance to denial-of-service attacks, and both allowing low-cost implementations. We establish security proofs in the standard model for these two protocols.
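The basic object of linear cryptanalysis, the bias of a linear approximation over an S-box, is easy to compute exhaustively for a small S-box. The example below uses the 4-bit PRESENT S-box with arbitrary mask values; it is a sketch of the definition, not of the SMS4 attack in the thesis.

```python
# Bias of the linear approximation a.x XOR b.S(x) = 0 over all 16 inputs
# of a 4-bit S-box (here the PRESENT S-box; masks a, b are examples).
SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

def parity(x):
    return bin(x).count("1") & 1

def bias(a, b):
    """Deviation from 1/2 of Pr[a.x = b.S(x)] over all 16 inputs."""
    hits = sum(parity(a & x) == parity(b & SBOX[x]) for x in range(16))
    return hits / 16 - 0.5

print(bias(0x0, 0x0))   # trivial approximation: bias +0.5
print(bias(0x1, 0x1))
print(max(abs(bias(a, b)) for a in range(1, 16) for b in range(1, 16)))
```

An attacker looks for mask pairs with large absolute bias; the data complexity of the attack grows as the inverse square of that bias.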
Dallot, Léonard. "Sécurité de protocoles cryptographiques fondés sur les codes correcteurs d'erreurs." Caen, 2010. http://www.theses.fr/2010CAEN2047.
Full text
Code-based cryptography appeared in 1968, in the early years of public-key cryptography. The purpose of this thesis is the study of the reductionist security of cryptographic constructions that belong to this category. After introducing some notions of cryptography and reductionist security, we present a rigorous analysis of the reductionist security of three code-based encryption schemes: McEliece's cryptosystem, Niederreiter's variant and a hybrid scheme proposed by N. Sendrier and B. Biswas. The legitimacy of this approach is then illustrated by the cryptanalysis of two variants of McEliece's scheme that aim at reducing the size of the keys necessary to ensure communication confidentiality. We then present a reductionist security proof of a signature scheme proposed in 2001 by N. Courtois, M. Finiasz and N. Sendrier. In order to achieve this, we show that we need to slightly modify the scheme. Finally, we show that the techniques used in the previous scheme can also be used to build a provably secure threshold ring signature scheme.
Christofi, Maria. "Preuves de sécurité outillées d’implémentations cryptographiques." Versailles-St Quentin en Yvelines, 2013. http://www.theses.fr/2013VERS0029.
Full text
In this thesis, we are interested in the formal verification of cryptographic implementations. In the first part, we study the verification of the mERA protocol using the tool ProVerif. We prove that this protocol verifies some security properties, such as authentication, secrecy and unlinkability, but also liveness properties. In the second part of this thesis, we study the formal verification of cryptographic implementations against a family of attacks: fault-injection attacks that modify data. We identify and present the different models of these attacks considering different parameters. We then model the cryptographic implementation (with its countermeasures), inject all possible fault scenarios and finally verify the corresponding code using the Frama-C tool, based on static analysis techniques. We present a use case of our method: the verification of a CRT-RSA implementation with Vigilant's countermeasure. After expressing the properties necessary for the verification, we inject all fault scenarios (with regard to the chosen fault model). This verification reveals two fault scenarios liable to leak secret information. In order to mechanize the verification, we insert fault scenarios automatically, covering both single- and multi-fault attacks. This results in a new Frama-C plug-in: TL-FACE.
Feix, Benoît. "Implémentations Efficaces de Crypto-systèmes Embarqués et Analyse de leur Sécurité." Limoges, 2013. https://aurore.unilim.fr/theses/nxfile/default/19ba2f73-2b7f-42ed-8afc-794a4b0c7604/blobholder:0/2013LIMO4062.pdf.
Full text
Cryptography has become a very common term in our daily life, even for those who do not practise this science. Today it can represent an efficient shield that protects our privacy from intrusions by hackers or other disreputable entities. Cryptography can protect the personal data we store on many physical digital supports, or even cloud-based ones for the most intrepid people. However, a secure use of cryptography is also necessary: cryptographic algorithms must be implemented such that they contain the right protections to defeat the category of physical attacks. Since the first article on this subject was presented in 1996, different attack improvements, new attack paths and countermeasures have been published and patented. We present the results we have obtained during this PhD. New physical attacks are presented with practical results. We detail innovative side-channel attacks that take advantage of all the leakage information present in a single execution trace of the cryptographic algorithm. We also present two new CoCo (Collision Correlation) attacks that target first-order protected implementations of the AES and RSA algorithms. In the next sections we use fault-injection techniques to design new combined attacks on different state-of-the-art secure implementations of AES and RSA. Later we present new probable prime number generation methods well suited to embedded products. We show that these new methods can lead to faster implementations than the probabilistic ones commonly used in standard products. Finally, we conclude this report with the secure exponentiation method we named Square Always.
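The idea behind square-based exponentiation countermeasures is that a multiplication can be expressed with squarings only, via the identity a*b = ((a+b)^2 - (a-b)^2)/4, so the operation sequence no longer leaks a multiply-vs-square pattern. The sketch below illustrates that arithmetic identity on a toy RSA modulus; the actual Square Always algorithm of the thesis is more refined (this is only the underlying trick, and the function names are ours).

```python
# Multiplication expressed with two squarings, usable when the modulus
# is odd (so 4 is invertible), as in RSA-style arithmetic.
def mul_by_squares(a, b, n):
    inv4 = pow(4, -1, n)                           # Python 3.8+
    return ((a + b) ** 2 - (a - b) ** 2) * inv4 % n

def square_always_pow(base, exp, n):
    """Left-to-right binary exponentiation using only 'squaring' calls."""
    acc = 1
    for bit in bin(exp)[2:]:
        acc = mul_by_squares(acc, acc, n)          # the square step
        if bit == "1":
            acc = mul_by_squares(acc, base, n)     # multiply, done as squares
    return acc

n = 3233                                           # toy modulus 61 * 53
print(square_always_pow(5, 17, n) == pow(5, 17, n))  # True
```

Since every step is built from the same squaring primitive, simple power analysis cannot distinguish square from multiply steps in the trace.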
Treger, Joana. "Etude de la sécurité de schémas de chiffrement par bloc et de schémas multivariés." Versailles-St Quentin en Yvelines, 2010. http://www.theses.fr/2010VERS0015.
Full text
The thesis is made up of two parts. The first one deals with the study of block ciphers, namely Feistel networks with internal permutations and Misty-like schemes. The context is generic, in the sense that the internal permutations are supposed to be random. This allows us to obtain properties that only concern the structure of the scheme and do not depend on any particular application. This part focuses on generic attacks on these two schemes. The second part is about multivariate cryptosystems. A differential property of the public key of HM is shown, allowing an efficient distinguisher to be obtained. Moreover, we can invert the system by using Gröbner bases. We also describe a key-recovery attack on HFE, which works for a family of key instances now called "weak keys".
Challal, Yacine. "Sécurité dans les communications de groupe." Compiègne, 2005. http://www.theses.fr/2005COMP1561.
Full text
The advantages of IP multicast in multi-party communications, such as saving bandwidth, simplicity and efficiency, are very interesting for new services combining voice, video and text over the Internet. This urges the effective large-scale deployment of multicasting to satisfy the increasing demand for it from both Internet Service Providers (ISPs) and content distributors. Unfortunately, the strengths of IP multicast are also its security weaknesses. Indeed, the open and anonymous membership and the distributed nature of multicasting are serious threats to the security of this communication model. Much effort has been devoted to addressing the many issues relating to securing multicast data transmission, such as access control, confidentiality, authentication and watermarking. In this thesis we deal with the two keystone security issues of any secure multicast architecture: data origin authentication and confidentiality. For each theme, we present a detailed analysis of the problem while highlighting special features and issues inherent to the multicast nature. Then, we review existing solutions in the literature and analyze their advantages and shortcomings. Finally, we provide our own original proposals, describing their advantages over the previous solutions.
Besson, Frédéric. "Résolution modulaire d'analyses de programmes : application à la sécurité logicielle." Rennes 1, 2002. http://www.theses.fr/2002REN1A114.
Full textDinh, Ngoc Tu. "Walk-In : interfaces de virtualisation flexibles pour la performance et la sécurité dans les datacenters modernes." Electronic Thesis or Diss., Université de Toulouse (2023-....), 2024. http://www.theses.fr/2024TLSES002.
Full text
Virtualization is a powerful tool that brings numerous benefits for the security, efficiency and management of computer systems. Modern infrastructure therefore makes heavy use of virtualization in almost every software component. However, the extra hardware and software layers present various challenges to the system operator. In this work, we analyze and identify the challenges relevant to virtualization. Firstly, we observe increased maintenance complexity due to the numerous software layers that must be constantly updated. Secondly, we notice a lack of transparency regarding the details of the infrastructure underlying virtualization. Thirdly, virtualization has a damaging effect on system performance, stemming from how the layers of virtualization have to be traversed during operation. We explore three approaches to solving the challenges of virtualization by adding flexibility to the virtualization stack. - Our first contribution tackles the issue of maintainability and security of virtual machine platforms caused by the need to keep these platforms up to date. We introduce HyperTP, a framework based on the hypervisor transplant concept for updating hypervisors and mitigating vulnerabilities. - Our second contribution focuses on the performance loss resulting from the lack of visibility of non-uniform memory access (NUMA) topologies in virtual machines. We thoroughly evaluate I/O workloads on virtual machines running on NUMA architectures, and implement a unified hypervisor-VM resource allocation strategy for optimizing virtual I/O on said architectures. - For our third contribution, we focus our attention on high-performance storage subsystems for virtualization purposes. We present NVM-Router, a flexible yet easy-to-use virtual storage platform that supports the implementation of fast and efficient storage functions.
Together, our solutions demonstrate the tradeoffs present in the configuration spaces of virtual machine deployments, as well as how to reduce virtualization overhead through dynamic adjustment of these configurations
Hachana, Safaà. "Techniques de rôle mining pour la gestion de politiques de sécurité : application à l'administration de la sécurité réseau." Thesis, Chasseneuil-du-Poitou, Ecole nationale supérieure de mécanique et d'aérotechnique, 2014. http://www.theses.fr/2014ESMA0017/document.
Full text
This thesis is devoted to a bottom-up approach for the management of network security policies from a high abstraction level with low cost and high confidence. We show that the Network Role Based Access Control (Net-RBAC) model is adapted to the specification of network access control policies. We propose policy mining, a bottom-up approach that extracts from the rules deployed on a firewall the corresponding policy modeled with Net-RBAC. We devise a generic algorithm based on matrix factorization that can adapt most of the existing role mining techniques to extract instances of Net-RBAC. Furthermore, knowing that large and medium networks are usually protected by multiple firewalls, we handle the problem of integrating the Net-RBAC policies resulting from policy mining over several firewalls. We demonstrate how to verify security properties related to the consistency of deployment over the firewalls. Besides, we provide assistance tools for administrators to analyze role mining and policy mining results as well. We formally define the problem of comparing sets of roles and prove that it is NP-complete. We devise an algorithm that projects roles from one set onto the other set based on Boolean expressions. This approach is useful for measuring how comparable two configurations of roles are, and for interpreting each role. Emphasis is put on the presence of shadowed roles in the role configuration, as it increases the time complexity of comparing sets of roles. We provide a solution to detect different cases of role shadowing. Each of the above contributions is rooted in a sound theoretical framework, illustrated by real data examples, and supported by experiments.
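Role mining by matrix factorization can be illustrated with a deliberately naive greedy decomposition of a user-permission matrix into roles and role assignments. The `mine_roles` helper and the greedy heuristic are ours, for illustration only; the thesis' generic algorithm and the Net-RBAC extraction are far more refined.

```python
# Toy sketch of role mining as Boolean matrix decomposition: greedily
# pick the largest uncovered permission set as a candidate role until
# every user's permissions are covered.
def mine_roles(upa):
    """upa: list of frozensets of permissions, one entry per user."""
    roles, assignment = [], [set() for _ in upa]
    uncovered = [set(p) for p in upa]
    while any(uncovered):
        role = frozenset(max(uncovered, key=len))  # candidate role
        roles.append(role)
        for i, perms in enumerate(upa):
            if role <= perms:                      # user may hold the role
                assignment[i].add(len(roles) - 1)
                uncovered[i] -= role
    return roles, assignment

upa = [frozenset({"read", "write"}),
       frozenset({"read"}),
       frozenset({"read", "write", "admin"})]
roles, assignment = mine_roles(upa)
# every user's permissions are exactly the union of their assigned roles
print(all(set().union(*(roles[r] for r in a)) == set(p)
          for a, p in zip(assignment, upa)))
```

Real role mining additionally minimizes the number of roles and assignments, which is where the NP-completeness results mentioned in the abstract come in.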
Van, Le. "Gridsec : une architecture sécurisée pour le grid computing." Besançon, 2003. http://www.theses.fr/2003BESA2028.
Full text