Dissertations / Theses on the topic 'Caractère numérique'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the top 35 dissertations / theses for your research on the topic 'Caractère numérique.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.
Sajid, Bakht-Baidar. "Multifissuration unidirectionnelle des revêtements à caractère fragile : simulation numérique et étude d'échantillons industriels." Vandoeuvre-les-Nancy, INPL, 2000. http://www.theses.fr/2000INPL046N.
Coppo, Marc-Antoine. "Familles maximales de systèmes de points surabondants dans le plan projectif." Nice, 1989. http://www.theses.fr/1989NICE4318.
Alliot, Sandie. "Essai de qualification de la notion de données à caractère personnel." Thesis, Bourgogne Franche-Comté, 2018. http://www.theses.fr/2018UBFCB001.
Personal data has undergone a major change which justifies a new study of the notion. This particular data is at the heart of the digital economy and is therefore the object of numerous competing claims. What is at stake is working out a balance between those claims, for example between operators' desire to appropriate the data and the protection of the people the data concern. This is why it is essential to find a precise definition and adequate qualifications of personal data in order to balance the various interests. The thesis focuses on the necessity of adopting a new vision of personal data and of identifying its current characteristics so as to manage it efficiently.
Rossi, Julien. "Protection des données personnelles et droit à la vie privée : enquête sur la notion controversée de « donnée à caractère personnel »." Thesis, Compiègne, 2020. http://www.theses.fr/2020COMP2549.
Internet and digital information and communication technologies in general are often portrayed as a threat to privacy. This gives rise to many debates, both in the media and among decision-makers. The Snowden revelations in 2013, followed by the adoption in 2016 of the General Data Protection Regulation (GDPR), have moved these discussions under the spotlight of the public sphere. The research presented in this dissertation was born out of three questions: can we define what "privacy" is? Is there any consensus on its definition? And does this consensus change with the evolution of the technical milieu transforming our ways of communicating, and by doing so, the ways in which our privacy can be intruded upon? By defining "privacy" as the object protected by the normative texts (laws, court decisions, techno-political standards of the Internet) protecting the right to privacy, it becomes possible to conduct an empirical study of how it evolved and how it has been a topic of contention. Data protection law emerged in Europe during the 1970s. Its aim was to protect a "privacy" that was perceived as under threat by the advent of computers. Currently, the GDPR, as well as documents adopted by standards-setting organisations like the Internet Engineering Task Force (IETF) or the World Wide Web Consortium (W3C), are written with the intention of protecting this privacy through a set of rules and principles referred to as "data protection", which apply to "personal data". The legal definitions of this notion produced by political institutions and those crafted in standards-setting bodies are identical. Furthermore, the study of the genealogy of data protection reveals that computer scientists played a pivotal role in the invention of the principles that "data protection" still relies on, for instance in the GDPR.
The analysis of the controversies that took place in the shaping of these rules shows that the notion of "personal data" written down in the normative texts we analysed essentially reflects the belief system of a coalition inspired by liberal utilitarian ideals, valuing individual autonomy and granting importance to the respect of one's consent. This framing of "privacy" has become the paradigm in the field. Other theories, such as those defining "privacy" as a space bound by collectively defined borders protecting it from the public eye, or those advocating the recognition of private property rights on personal data, have been less successful in shaping policy outcomes. The advent and spread of networked computers have not directly determined the evolution of the object that is protected by the right to privacy. It is, rather, the perceptions a group of actors had of computers that caused such an evolution. Convinced that their liberal conception of privacy is socially valuable, they managed to craft a new legal category during the 1970s in Europe: the right to the protection of personal data. The GDPR, adopted in 2016, just like Web standards aiming at enhancing the protection of privacy, relies on those same principles that were invented during these early debates. Therefore, it can be said that the emergence of computers has indeed, albeit indirectly, been a triggering factor in the evolution of "privacy" defined as the object protected by the right to privacy.
Tassone, Loredana. "La protection européenne des données à caractère personnel traitées à des fins judiciaires." Thesis, Strasbourg, 2013. http://www.theses.fr/2013STRAA028.
In our society, which can be called an "information society" and also a society of "electronic surveillance", it is essential to ensure the protection of personal data. The implementation of regulations on data protection has not always been easy and, in some areas, remains complex. The judiciary is a striking example of an area where it has been difficult to establish specific and appropriate rules for the protection of personal data. In the nineties, the question was raised whether a specific protection for judicial data was necessary. The findings differed: on the one hand, some considered that a specific regulation was appropriate; on the other hand, others felt that the idea of establishing such rules should be abandoned. The study of this question seems to have been abandoned at some point. Given the improvements in technology and the evolution of European legislation, it seems essential to raise this question anew and to find out which level of protection is given to judicial data in the current European system of protection of personal data. The importance of renewing this question is demonstrated by the case law of the Strasbourg Court, which has indeed been seized of cases of human rights violations due to improper processing of data in the judiciary. In addition, plans to reform the system of protection of personal data are currently under discussion at European level. Indeed, the Council of Europe and the European Union are currently working on the modernisation of existing texts on the protection of personal data. It is therefore interesting to examine the place reserved for the protection of judicial data in the context of these reforms. The thesis recalls the importance of protecting personal data at any time, in any place and in any field of activity, including the judiciary.
Its main aim is, on the one hand, to highlight existing problems in the protection of judicial data and, on the other hand, to answer the question of whether a specific regulation in this area should nowadays be developed at European level. In the first part, the Council of Europe's and the European Union's texts on data protection and their applicability to judicial data are analysed. Given that the texts currently in force at the Council of Europe and at the European Union have, at first glance, the same content, it was necessary to analyse them with a comparative approach in order to understand their interoperability and verify their applicability to judicial data. Despite the multitude of texts on data protection present at European level (those of the Council of Europe and of the European Union), only a marginal space is reserved for judicial data. The basic principles stated in these texts are in theory applicable to judicial data, but in practice they are not applied. In any case, those texts do not provide effective protection of judicial data. [...]
Chambardon, Nicolas. "L'identité numérique de la personne humaine : contribution à l'étude du droit fondamental à la protection des données à caractères personnel." Thesis, Lyon, 2018. http://www.theses.fr/2018LYSE2072.
French law approaches personal data and the person they relate to as separate objects. This special status is said to be justified by the transformation resulting from data processing. However, by defining personal data as "information relating to an identified or identifiable natural person, directly or indirectly", the law of 6 January 1978 suggests that the two are in fact connected. Those data are therefore to be understood as identifying elements. Following the interdependence of subjective and objective elements, they are components of identity. They form the person's digital identity, which is increasingly solicited and gathered. The intuitive assumption of personal data as components of identity is thwarted by French positive law, within which the Data Protection Act asserts its autonomy in comparison to Article 9 of the Civil Code, the latter being the matrix of rights related to personality. In the same way, the protection of personal data is distinguished from the protection of privacy in the European Union's Charter of Fundamental Rights. This increasing autonomy allows the accelerated conversion of personal data into assets. In a multitude of contracts, they are regarded as isolated elements whose processing is allowed. Yet the split between the person and their data could be avoided: protecting the autonomy of the person can ensure a connection. The European Court of Human Rights considers data protection as part of the right to privacy, hence asserting the existence of a link between personal data and the identity of the individual. Moreover, its case law regarding the protection of personal autonomy may constitute an answer to the objectification of individuals. Correlatively, the French Constitutional Council has already treated data protection as part of personal freedom, which its case law regards as the embryo of the right to the development of personality and the protection of identity.
By taking identity as the starting point of a study of the right to data protection, it is possible to reveal what is at stake in the exponential gathering of personal data and the ensuing profiling: the autonomy of the individual. That autonomy can in turn be protected through the concept of the human person as a subject of fundamental rights.
Peng, Shuiran. "Analyse mathématique et numérique de plusieurs problèmes non linéaires." Thesis, Poitiers, 2018. http://www.theses.fr/2018POIT2306/document.
This thesis is devoted to the theoretical and numerical study of several nonlinear partial differential equations which occur in the mathematical modelling of phase separation and of micro-electromechanical systems (MEMS). In the first part, we study higher-order phase separation models for which we obtain well-posedness and dissipativity results, together with the existence of global attractors and, in certain cases, numerical simulations. More precisely, we consider higher-order Allen-Cahn and Cahn-Hilliard equations with a regular potential and a higher-order Allen-Cahn equation with a logarithmic potential. Moreover, we study higher-order anisotropic models and higher-order generalized Cahn-Hilliard equations, which have applications in biology, image processing, etc. We also consider the hyperbolic relaxation of higher-order anisotropic Cahn-Hilliard equations. In the second part, we develop semi-implicit and implicit semi-discrete, as well as fully discrete, schemes for solving the nonlinear partial differential equation that describes both the elastic and electrostatic effects in an idealized MEMS capacitor. We analyse theoretically the stability of these schemes and their convergence under certain assumptions. Furthermore, several numerical simulations illustrate and support the theoretical results.
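To illustrate the kind of semi-implicit scheme mentioned in this abstract, here is a minimal sketch for the classical (second-order) Allen-Cahn equation in one dimension, with the diffusion term treated implicitly in Fourier space and the nonlinearity explicitly. This is a generic illustration of the technique, not the author's actual higher-order schemes; all parameter values are invented.

```python
import numpy as np

# Semi-implicit scheme for the 1-D Allen-Cahn equation
#   u_t = eps^2 u_xx - (u^3 - u)
# with periodic boundary conditions: the diffusion term is treated
# implicitly (diagonalised in Fourier space), the nonlinearity
# explicitly, so each step costs two FFTs and no nonlinear solve.
def allen_cahn_step(u, dt, dx, eps):
    n = len(u)
    k = 2 * np.pi * np.fft.fftfreq(n, d=dx)        # wavenumbers
    u_hat = np.fft.fft(u - dt * (u**3 - u))        # explicit part
    u_hat /= 1 + dt * eps**2 * k**2                # implicit diffusion
    return np.real(np.fft.ifft(u_hat))

# Small perturbation evolving under the scheme (illustrative values).
n, L = 128, 2 * np.pi
x = np.linspace(0, L, n, endpoint=False)
u = 0.05 * np.cos(3 * x)
for _ in range(200):
    u = allen_cahn_step(u, dt=1e-2, dx=L / n, eps=0.5)
```

Treating only the stiff linear term implicitly is what allows much larger time steps than a fully explicit scheme while avoiding a nonlinear solve at each step.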
Zwolinska, Monika. "Sécurité et libertés fondamentales des communications électroniques en droit français, européen et international." Thesis, Nice, 2015. http://www.theses.fr/2015NICE0038/document.
The impact of today's information and communication technologies is essential for the exercise of human rights, particularly freedom of expression and privacy protection. With the massive use of the Internet, mobile phones and, more recently, other smart objects and digital services, the tension mounts over establishing the limit between public and private space online. Likewise, freedom of expression, communication and information is at risk as, under the pretext of fighting cybercrime and cyberterrorism, as well as maintaining public order, public authorities interfere with online content by controlling, monitoring, restricting or prohibiting it. Especially as the capacities of both States and private companies to create extremely precise databases identifying information on persons' consumption habits, itineraries, thoughts and opinions gradually increase. Therefore, the need to redefine the way in which the respect of fundamental freedoms is taken into consideration in the digital environment becomes urgent.
Mohty, Ola. "L'information du consommateur et le commerce électronique." Thesis, Rennes 1, 2017. http://www.theses.fr/2017REN1G009.
With the current significant expansion of the concept of legal information, consumers may face major difficulties when deciding to explore the online market. Numerous sources provide a multitude of information, and buyers face chaotic practices that make it difficult to distinguish between relevant and irrelevant content. However, a well-informed online consumer must be able to rely on an authentic source with relevant content. It is therefore clear that the regulations governing the obligation to provide information to the consumer need to be examined closely. The recent evolution of the online market and the structural imbalance between the professional and the consumer have revealed a number of significant discrepancies, highlighting the need to revise the bases of the consumer information obligation. A new concept of online information is thus emerging, with an extended obligation to inform and a permanent engagement with a reactive consumer.
Cortas, Maria. "Calcul par tranches pour les équations différentielles à variable temps à caractère explosif." Reims, 2005. http://www.theses.fr/2005REIMS001.
The aim of this work is to propose a numerical method for solving different types of partial and ordinary differential equations. The equations share the common property that their solutions either become infinite (blow-up behaviour) or become null (extinction behaviour) in finite time. This type of equation is solved using a sliced-time computing technique, combined with a rescaling of both the time variable and the solution of the differential system. The main criterion defining each time slice is that the rescaled solution should not exceed a preset cut-off value. Another selection criterion for the method is based on invariance and similarity conditions enforced on the rescaled model in each of the time slices.
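As a rough illustration of the sliced-time idea (not the author's method, whose slice criteria are more elaborate), the following sketch integrates the blow-up ODE y' = y², y(0) = 1, whose exact solution 1/(1 - t) blows up at t = 1. Each slice rescales the solution by its value at the start of the slice and ends when the rescaled solution reaches a preset cut-off; all numerical values are invented.

```python
# Sliced-time integration of the blow-up ODE y' = y**2, y(0) = 1,
# whose exact solution 1/(1 - t) blows up at t = 1. Inside each slice
# we work with the rescaled variable z = y / y_n (so z starts at 1),
# and the slice ends as soon as z reaches a preset cut-off value.
def sliced_blowup(y0=1.0, cutoff=2.0, h=1e-4, n_slices=30):
    t, y_n = 0.0, y0
    for _ in range(n_slices):
        z = 1.0
        while z < cutoff:
            z += h * y_n * z * z   # explicit Euler on dz/dt = y_n * z**2
            t += h
        y_n *= z                   # renormalise: y at the end of the slice
    return t, y_n

t_end, y_end = sliced_blowup()
# t_end approaches the blow-up time 1 while y_end grows past 2**30,
# without the raw variable y ever being stepped directly.
```

The rescaling keeps the integrated quantity of order one in every slice, which is what makes the computation stable arbitrarily close to the blow-up time.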
Faibra, Djok Tao Daniel. "Recherches sur les variations des souches de "Dermatophilus congolensis" : caractères biochimiques et polymorphisme de restriction." Paris 12, 1993. http://www.theses.fr/1993PA120013.
André, Jacques. "Création de fontes en typographie numérique." Habilitation à diriger des recherches, Université Rennes 1, 1993. http://tel.archives-ouvertes.fr/tel-00011218.
This dissertation comprises two parts. The first is a short survey of typography. We recall what a character is and how characters used to be made, then how they are handled today, that is, what a digital font is and how, only recently, it has become possible to meet the standards of the past.
The second part presents the research we have taken part in. We first show a fundamentally new possibility for creativity: the shape of a character can now be computed at the moment it is used, and no longer in advance, once and for all. As a first application we give the design of mathematical symbols. We then show how this concept of dynamic fonts makes it possible to create new characters depending on the context.
By way of conclusion, we point out several directions for future research.
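The dynamic-font idea of computing a character's shape at use time can be sketched very simply. The toy example below (entirely invented, and far simpler than real dynamic-font machinery) generates the centre-line of a left parenthesis sized, at the moment of use, to the height of the expression it encloses, which is typical of the mathematical-symbol application.

```python
import math

# A "dynamic" glyph: the centre-line of a left parenthesis computed at
# use time for a requested height (the shape model and all values are
# invented for illustration).
def parenthesis_outline(height, width=0.25, samples=9):
    pts = []
    for i in range(samples):
        t = math.pi * i / (samples - 1)        # sweep from 0 to pi
        x = width * (1 - math.sin(t))          # bulge toward the left
        y = height * (1 - math.cos(t)) / 2     # rise from 0 to height
        pts.append((x, y))
    return pts

# The same code yields a small parenthesis for inline text and a tall
# one around a displayed fraction, with no pre-stored sizes.
outline = parenthesis_outline(height=12.0)
```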
Mullot, Rémy. "Segmentation d'images et extraction de primitives pour la reconnaissance optique de texte." Rouen, 1991. http://www.theses.fr/1991ROUES001.
Rousseaux, Francis. "Contribution à une méthodologie d'acquisition des connaissances pour l'ingénierie des Systèmes d'Information et de Communication : l'exemple de CHEOPS pour l'aide à la gestion de crises collectives à caractère géopolitique — l'hypothèse d'un niveau des valeurs par delà le niveau des connaissances." Habilitation à diriger des recherches, Université Pierre et Marie Curie - Paris VI, 1995. http://tel.archives-ouvertes.fr/tel-00417588.
A number of paradoxes must be overcome to build a cooperative SIC (information and communication system) successfully, and an incremental approach is required, whether to organise a SIC architecture as a host structure for software tools, to derive functional and ergonomic specifications, or to structure an ontology of the application domain and a model of multi-agent cooperation. On this subject, the industrial argument joins the scientific one; it is expressed through the supply/demand dialectic of the SIC market, extended to civil applications such as industrial risk management and environmental protection, as well as through the need to capitalise on the often artisanal know-how of SIC project managers.
The CHEOPS project, launched in an industrial setting in 1991 to establish a state of the art and directions of progress for SICs, led to a method for managing SIC projects, to an incremental architecture of an applicative base for SICs, to an explicit ontology of the domain of support for the management of collective crises of a geographical nature, and to the foundations of a transdisciplinary research programme on multi-agent cooperation in SICs, involving both industry and academia.
To this end, a hypothesis was formulated that extends the Knowledge Level hypothesis stated by Allen Newell in 1982 for modelling knowledge usable both by a rational agent and by a symbolic machine. This hypothesis, proposed under the name of the common values level hypothesis, would make it possible to model the essential being of an immanent Titan personifying a social collective (straight out of ancestral myth), in a form usable by a collective of ideally rational human or artificial agents. An expression at the level of common values would notably be expressed, at the knowledge level, by models of multi-agent cooperation, allowing in a sense the social incarnation of the Titan.
Bella, Gábor. "Modélisation de texte numérique multilingue : vers un modèle général et extensible fondé sur le concept de textème." Télécom Bretagne, 2008. http://www.theses.fr/2008TELB0067.
This thesis is concerned with the modelling of electronic text. This modelling involves defining both the atomic text elements and the way these elements join together to form textual structures. In response to the growing need for the internationalisation of information systems, historical models of text, based on the concept of code tables, have been extended by semi-formalised knowledge related to the writing system, so that, by now, such knowledge is essential to text processing of even the simplest kind. Thus were born the Unicode character encoding and the so-called 'intelligent' font formats. Observing that this phenomenon marks only the beginning of a convergence towards models based on the principles of knowledge representation, we propose an alternative approach to text modelling that defines a text element not as a table entry but through the properties that describe the element. The formal framework that we establish, initially developed for the purposes of knowledge representation, provides a method by which precise formal definitions can be given to much-used but ill-defined notions such as character, glyph, or usage. The same framework allows us to define a generalised text element that we call a texteme, the atomic element on which a whole family of new text models is based. The study of these models then leads us to the understanding
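The property-based definition of a text element can be sketched in a few lines. In this toy model (the property names and values are invented for the example, not taken from the thesis), a texteme is simply an open bundle of properties, and a character or a glyph is just one particular kind of bundle.

```python
# A texteme: a text element defined by an open set of properties rather
# than by a code-table entry. Property names and values here are
# invented for the example.
class Texteme:
    def __init__(self, **properties):
        self.properties = dict(properties)

    def matches(self, **constraints):
        """True when every given property/value pair holds for this element."""
        return all(self.properties.get(k) == v for k, v in constraints.items())

# A Unicode character is one particular bundle of properties...
a_char = Texteme(codepoint=0x0061, script="Latin", category="letter")
# ...while a glyph-level element carries rendering properties instead,
# with no codepoint at all.
a_glyph = Texteme(script="Latin", glyph="a.italic", weight="regular")
```

Because nothing forces a texteme to carry a codepoint, the same model can describe elements at the character level, the glyph level, or any intermediate level.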
Mahabir, Laetitia-Amanda. "L'identité personnelle et les réseaux sociaux." Thesis, Aix-Marseille, 2014. http://www.theses.fr/2014AIXM1015.
The identity of people in the digital space cannot be defined in a single way: its modes of expression are multiple. An individual can have several identities in the digital environment and can play different social roles according to the social contexts he faces. But what are the effects of using identity masks? The individual's identity within social networks involves questioning both the self and others. The individualistic side and the community side of this identity are the foundation of the construction of the user's identity. Moreover, it appears that the wealth and complexity of networks result from the confusion the law makes between the individual and identity. The law is based on an essentialist conception of identity, in which each actor has its own trajectory. But the individual does not live alone; he lives in a group and is part of a network of social relations. Identity is made in the interaction between an identity claimed for oneself and one assigned by others. Also, to develop a personal identity on the fringes of digital reality is to distinguish the person from the concept of online presence, which is eminently declarative and performative. All this leads us to reconsider the place of identity in the digital space. To understand the question of the construction of personal identity on social networks, different approaches are detailed, all aimed at ensuring everyone an identity according to his wishes. It will also be necessary to adapt the existing measures to virtual reality, in order to establish a more secure regime of personal identity that respects the rights of each user's personality.
Goulvestre, Ann-maël. "La RFID et le droit." Thesis, Aix-Marseille, 2013. http://www.theses.fr/2013AIXM1016.
The acronym RFID stands for Radio Frequency Identification, a wireless technology that makes it possible to automatically identify and locate tags attached to objects, as well as the persons carrying them. The technology relies on storing and retrieving data through radio waves, using devices called RFID tags. Those tags are already prevalent in our everyday lives, for example in building access control. And this is only the beginning, as RFID technology is expected to be one of the main building blocks of the Internet of Things. Like any other new technology, RFID needs a suitable legal framework if it is to benefit both consumers and businesses. This work is thus intended to examine RFID from a legal perspective. Indeed, one of the keys to the success of a new technology lies in the way consumers grasp it. When looking for economic profit, it is therefore essential to focus on public perception of, and consumer confidence in, this technology. And neither will come without addressing security, which is the main challenge to face, above all from a technical perspective, but also from a legal one. For this reason, the proposed work aims at setting out the legal framework that could be applied both to the container (the RFID system) and to the content (the RFID data). It is worth analysing how the legal practitioner can deal with this technology and all its related issues. Are there any specific pieces of legislation on the subject? And if so, what are they? A critical assessment is undertaken to appreciate the substance of the relationship between RFID and the law, and the potential improvements that come with it.
Bourdon, Jean-Luc. "Parallélisme et cognition appliqués à la reconnaissance de caractères." Rouen, 1991. http://www.theses.fr/1991ROUE5034.
Wehbi, Hassan. "Segmentation et reconnaissance en-ligne des mots d'écriture manuscrite mixte." Paris 12, 1996. http://www.theses.fr/1996PA120017.
Lethelier, Edouard. "Combinaison des concepts de segmentation et de reconnaissance pour l'écriture manuscrite hors-ligne : application au traitement des montants numériques des chèques." Rouen, 1996. http://www.theses.fr/1996ROUES045.
Oulhadj, Hamouche. "Des primitives aux lettres : une méthode structurelle de reconnaissance en ligne de mots d'écriture cursive manuscrite avec un apprentissage continu." Paris 12, 1990. http://www.theses.fr/1990PA120045.
Full textRenau-Ferrer, Ney. "Outils et méthodes pour l'analyse automatique multi-niveaux de tracés manuscrits à caractère géométrique acquis en ligne." Thesis, Antilles-Guyane, 2011. http://www.theses.fr/2010AGUY0394/document.
This thesis addresses the automatic analysis of online hand-drawn geometric sketches. An online sketch can be analysed from several points of view. As in offline sketching, we can try to recognise the produced shape. However, online sketching allows other levels of analysis, for example the analysis of the drawer's behaviour during the production of the sketch. In this thesis, we have developed tools allowing a multi-level analysis covering both shape and behaviour. The first part of our work deals with the pre-treatments that must be performed on the sketch to allow higher-level analysis: filtering, mixed segmentation, and feature-point detection and labelling. In the second part, we approach shape analysis in two aspects: shape recognition and evaluation. We have developed an appearance-based method using local descriptors that allows both recognition and evaluation of the quality of a produced shape compared to the model. In the last part we propose a method for extracting and modelling the drawer's behaviour. We then show how we can not only determine the favourite procedure of a drawer but also recognise the drawer by analysing his behaviour.
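As an example of the pre-treatments this abstract refers to, the sketch below resamples an online pen trace to equidistant points, a common first step before segmentation and feature-point detection. The function and its spacing value are illustrative, not the thesis's actual algorithm.

```python
import math

# Resample a pen trace (list of (x, y) points in stroke order) so that
# consecutive output points lie a fixed arc-length `spacing` apart.
def resample(points, spacing):
    out = [points[0]]
    acc = 0.0  # distance accumulated since the last emitted point
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        d = math.hypot(x1 - x0, y1 - y0)
        while acc + d >= spacing:
            t = (spacing - acc) / d            # fraction of the segment left
            x0 = x0 + t * (x1 - x0)
            y0 = y0 + t * (y1 - y0)
            out.append((x0, y0))
            d = math.hypot(x1 - x0, y1 - y0)   # remaining segment length
            acc = 0.0
        acc += d
    return out

# An L-shaped trace of total length 7 resampled at unit spacing.
trace = [(0, 0), (3, 0), (3, 4)]
pts = resample(trace, spacing=1.0)
```

Resampling removes the speed-dependent point density of raw pen input, so later steps see a geometry-only description of the stroke.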
Heutte, Laurent. "Reconnaissance de caractères manuscrits : application a la lecture automatique des chèques et des enveloppes postales." Rouen, 1994. http://www.theses.fr/1994ROUES049.
Chatelain, Clément. "Extraction de séquences numériques dans des documents manuscrits quelconques." Phd thesis, Université de Rouen, 2006. http://tel.archives-ouvertes.fr/tel-00143090.
Full textLaroche, Benjamin. "Le big data à l’épreuve du règlement européen général sur la protection des données." Thesis, Toulouse 1, 2020. http://www.theses.fr/2020TOU10041.
In a digital society, citizens' daily uses of technology produce data at an exponential rate. In this context, the development of massive data collection appears inevitable. Such technologies involve the processing of personal data in order to create economic value or to optimize business or decision-making processes. The General Data Protection Regulation (EU) 2016/679 (GDPR) aims to regulate these practices while respecting the imperatives of flexibility and technological neutrality. However, big data is proving to be a legal issue of unprecedented complexity, as its specific characteristics conflict with several principles of the GDPR. Widely shared, this observation has gradually imposed an implicit form of status quo that does not allow the incompatibility between the reality of big data and the legal framework provided by the GDPR to be effectively resolved. In order to solve this equation, a distributive approach, based on the components of big data (its structure, its data and its algorithmic capabilities), makes it possible to study the qualification of this notion in order to identify an appropriate regime. Overcoming such a problem involves, first of all, updating the qualification of personal data in order to respond to the increasing complexity of data processing carried out using advanced algorithmic capabilities. In addition, the accountability of the various actors involved, in particular through joint responsibility for processing, is associated with the notion of risk in order to bring the necessary updating to the regulation of big data. Finally, the application of a data protection impact assessment methodology serves to test and then synthesise the indispensable strengthening of the fit between legal theory and the practical reality of big data.
Michel, Justin. "Contributions à l'étude des cavités nasosinusiennes de l'adulte par la morphométrie géométrique et la simulation numérique des écoulements." Thesis, Aix-Marseille, 2015. http://www.theses.fr/2015AIXM5077/document.
Full text
Objectives: to conduct a geometric morphometric study in order to define the variability of the frontal and maxillary sinuses in an adult population and to determine the sex of individuals; and to validate a computational fluid dynamics model of nasal airflow and describe the influence of sinus conformation on nasal airflow. We defined twenty reliable and reproducible landmarks for the frontal and maxillary sinuses, and described the variability of these sinuses in an adult population sample. Finally, we highlighted the existence of allometries for the frontal and maxillary sinuses, as well as a sexual dimorphism in the conformation of the maxillary sinuses. Computational fluid dynamics: we propose a reliable model for the numerical simulation of flows based on CT examinations. Using this model, we found no influence of sinus conformation on nasal ventilation or on the thermal conditioning of inhaled and exhaled air, and we observed no ventilation of the sinuses during the nasal respiratory cycle. Future studies will endeavor to define a model of the humidity conditioning of air and a simulation model of NO production and gas exchange between the sinuses and the nasal cavities.
Jean-Caurant, Axel. "Analyse de documents et du comportement des utilisateurs pour améliorer l'accès à l'information." Thesis, La Rochelle, 2018. http://www.theses.fr/2018LAROS028/document.
Full text
The constant increase in available documents and in the tools used to access them has led to a change in research practices. For a few years now, more and more information retrieval platforms have been made available online to the scientific community and to the public. This data deluge is a great opportunity for users seeking information, but it comes with new problems and new challenges. Formerly, the main issue for researchers was to identify whether a particular resource existed; today, the challenge is rather to find how to access pertinent information. We have identified two distinct levers to limit the impact of this new search paradigm. First, we believe it is necessary to analyze how the different search platforms are used. Being able to understand and read into user behavior is a necessary step to identify what users understand and what they need in order to gain an in-depth understanding of how such platforms operate. Indeed, most systems act as black boxes that conceal the underlying transformations applied to the data. Users do not need to understand in detail how those algorithms work; however, because those algorithms have a major impact on the accessibility of information, they need to be taken into account when exploiting search results. Why is the search engine returning these particular results? Why is this document more pertinent than another? Such seemingly naive questions are nonetheless essential to an analytical approach to the information search and retrieval task. We think that users have a right and a duty to question the relevance of the tools at their disposal. To help them cope with these issues, we developed a dual-use information search platform. On the one hand, it can be used to observe and understand user behavior; on the other hand, it can serve as a pedagogical medium to highlight the search biases users may be exposed to.
At the same time, we believe that the tools themselves must be improved. In the second part of this thesis, we study the impact that the quality of documents can have on their accessibility. Because of the increase in documents available online, human operators are less and less able to ensure their quality. Thus, new strategies are needed to improve the way search platforms operate and process documents. We propose a new method to automatically identify and correct errors generated by information extraction processes such as OCR.
Andro, Mathieu. "Bibliothèques numériques et crowdsourcing : expérimentations autour de Numalire, projet de numérisation à la demande par crowdfunding." Thesis, Paris 8, 2016. http://www.theses.fr/2016PA080039/document.
Full text
Instead of outsourcing tasks to providers in low-wage countries, libraries increasingly appeal to online crowds, making their relationship with users more collaborative. The thesis begins with a conceptual chapter on the consequences of this new economic model for society and for libraries. Then, an overview of existing projects is presented in the areas of digitization on demand (crowdfunding) and OCR correction through gamification and folksonomy, followed by a state of the art, a review and an analysis of crowdsourcing applied to digitization and digital libraries. Finally, conceptual contributions and original experiments with Numalire, a digitization-on-demand project funded by crowdfunding, are presented.
Frémondière, Pierre. "L'évolution de l'accouchement dans la lignée humaine. Estimation de la contrainte fœto-pelvienne par deux méthodes complémentaires : la simulation numérique de l'accouchement et l'analyse discriminante des modalités d'accouchement au sein d'un échantillon obstétrical." Thesis, Aix-Marseille, 2015. http://www.theses.fr/2015AIXM5013.
Full text
The purpose of this thesis is to estimate delivery outcomes for extinct hominids. We therefore use two complementary methods: numerical simulation of childbirth and discriminant analysis of delivery outcomes in an obstetrical sample. First, we use kriging to construct meshes of pelves and neonatal skulls. The fossil hominid specimens included in the study are Australopithecines, early Homo (EH) and middle to early Pleistocene Homo (MEPH). We estimate fetal cranial dimensions by inverting chimpanzee or human cranial growth curves and applying them to juvenile skull measurements. "Virtual" dyads are formed from the pelves and neonatal skulls. We then simulate the childbirth of these virtual dyads, considering different levels of laxity of the sacro-iliac junction and different positions of the fetal head. Finally, we use an obstetrical sample: the delivery outcome is recorded, CT scans are used to obtain maternal pelvic measurements, and the diameters of the fetal head are measured after delivery. A discriminant analysis is performed on this obstetrical sample to separate delivery outcomes on the basis of fetal-pelvic measurements. The fossil dyads are subsequently added to the discriminant analysis to assess the delivery outcomes to which they belong. The results suggest a small fetal-pelvic constraint for Australopithecines, a moderate constraint for EH, and a more important one for MEPH. We suggest that rotational birth appeared with EH and that the curved trajectory of the fetal head appeared with MEPH. The emergence of rotational birth and of the curved trajectory of the fetal head is probably explained by two major increases in brain size during the late and middle Pleistocene.
Elagouni, Khaoula. "Combining neural-based approaches and linguistic knowledge for text recognition in multimedia documents." Thesis, Rennes, INSA, 2013. http://www.theses.fr/2013ISAR0013/document.
Full text
This thesis focuses on the recognition of textual clues in images and videos. In this context, OCR (optical character recognition) systems able to recognize caption texts as well as natural scene texts captured anywhere in the environment have been designed. Novel approaches, robust to text variability (different fonts, colors, sizes, etc.) and acquisition conditions (complex background, non-uniform lighting, low resolution, etc.), have been proposed. In particular, two kinds of text recognition methods are provided: a segmentation-based approach that computes nonlinear separations between characters, well adapted to the local morphology of images; and two segmentation-free approaches that integrate a multi-scale scanning scheme, the first relying on a graph model and the second on a particular connectionist recurrent model able to handle spatial constraints between characters. In addition to the originalities of each approach, two further contributions of this work lie in the design of a character recognition method based on a neural classification model and in the incorporation of linguistic knowledge that takes the lexical context into account. The proposed OCR systems were tested and evaluated on two datasets: a caption-text video dataset and a natural scene text dataset (the public ICDAR 2003 database). Experiments demonstrated the efficiency of our approaches and allowed us to compare their performance to that of state-of-the-art methods, highlighting their advantages and limits.
Journault, Matthieu. "Analyse statique modulaire précise par interprétation abstraite pour la preuve automatique de correction de programmes et pour l’inférence de contrats." Electronic Thesis or Diss., Sorbonne université, 2019. http://www.theses.fr/2019SORUS152.
Full text
Ensuring the scalability of static analyzers defined by abstract interpretation poses difficulties. A classical technique known to speed up analyses is the discovery and reuse of summaries for some of the sequences of statements of the source code. In this thesis we focus on a subset of C that does not allow recursion and define a modular analyzer able to infer, prove and use such summaries (to improve efficiency). Our modular analyzer is built on top of an existing C analyzer and is therefore able to handle unions, structures, arrays, memory allocations (static and dynamic), pointers, pointer arithmetic, pointer casts, function calls, string manipulations, and so on. String handling is provided by a new abstract domain defined in this thesis. We also provide a lifting of classical numerical abstract domains to the representation of heterogeneous sets. This lifting can be used for relational domains and maintains only one numerical abstract state, as opposed to partitioning. The last point of interest of this thesis is the definition of an abstract domain able to represent sets of trees with numerically labeled leaves. This abstraction is based on regular and tree-regular languages and delegates the handling of numerical constraints to an underlying domain able to represent heterogeneous sets of environments. As the thesis took place within the Mopsa project, we give an overview of some of the results obtained by the Mopsa team during the thesis.
Peyrard, Clément. "Single image super-resolution based on neural networks for text and face recognition." Thesis, Lyon, 2017. http://www.theses.fr/2017LYSEI083/document.
Full text
This thesis focuses on super-resolution (SR) methods for improving automatic recognition systems (optical character recognition, face recognition) in realistic contexts. SR methods make it possible to generate high-resolution images from low-resolution ones. Unlike upsampling methods such as interpolation, they restore spatial high frequencies and compensate for artefacts such as blur or jagged edges. In particular, example-based approaches learn and model the relationship between low- and high-resolution spaces via pairs of low- and high-resolution images. Artificial neural networks are among the most efficient systems for addressing this problem. This work demonstrates the interest of neural-network-based SR methods for improved automatic recognition systems. By adapting the data, it is possible to train such machine learning algorithms to produce high-resolution images. Convolutional neural networks are especially efficient, as they are trained to simultaneously extract relevant non-linear features while learning the mapping between low- and high-resolution spaces. On document text images, the proposed method improves OCR accuracy by +7.85 points compared with simple interpolation. The creation of an annotated image dataset and the organisation of an international competition (ICDAR 2015) highlighted the interest and relevance of such approaches. Moreover, when a priori knowledge is available, it can be exploited by a suitable network architecture. For facial images, face features are critical for automatic recognition; a two-step method is therefore proposed in which image resolution is first improved, followed by specialised models that focus on the essential features. The performance of an off-the-shelf face verification system is improved by +6.91 to +8.15 points. Finally, to address the variability of real-world low-resolution images, deep neural networks can absorb the diversity of the blurring kernels that characterise low-resolution images. With a single model, high-resolution images are produced with natural image statistics, without any knowledge of the actual observation model of the low-resolution image.
Chen, Yong. "Analyse et interprétation d'images à l'usage des personnes non-voyantes : application à la génération automatique d'images en relief à partir d'équipements banalisés." Thesis, Paris 8, 2015. http://www.theses.fr/2015PA080046/document.
Full text
Visual information is a very rich source of information to which blind and visually impaired (BVI) people do not always have access. The presence of images is a real handicap for the BVI, and transcription into an embossed image may increase an image's accessibility to them. Our work takes into account the aspects of tactile cognition and the rules and recommendations for the design of an embossed image. We focused on the analysis and comparison of digital image processing techniques in order to find methods suitable for an automatic procedure for embossing images. At the end of this research, we tested the embossed images created by our system with blind users. Two important points were evaluated in the tests: the degree of understanding of an embossed image, and the time required for its exploration. The results suggest that the images made by this system are accessible to blind users who know Braille. The implemented system can be regarded as an effective tool for the creation of embossed images: it offers an opportunity to generalize and formalize the creation procedure, and it provides a very quick and easy solution. The system can process pedagogical images with simplified semantic content and can be used as a practical tool for making digital images accessible. It also offers the possibility of cooperating with other modalities of presenting images to blind people, for example a traditional interactive map.
Do, Thanh Ha. "Sparse representations over learned dictionary for document analysis." Thesis, Université de Lorraine, 2014. http://www.theses.fr/2014LORR0021/document.
Full text
In this thesis, we focus on how sparse representations can help to increase the performance of noise removal, text region extraction, pattern recognition and symbol spotting in graphical documents. To do so, we first give a survey of sparse representations and their applications in image processing. Then, we present the motivation for dictionary learning and efficient algorithms for constructing a learned dictionary. After describing the general idea of sparse representations and learned dictionaries, we make contributions in the field of symbol recognition and document processing that achieve better performance than the state of the art. These contributions begin by answering the following questions. First, how can we remove the noise from a document when we have no assumptions about the noise model found in its images? Second, how can sparse representations over a learned dictionary separate the text and graphic parts of a graphical document? Third, how can we apply sparse representations to symbol recognition? We complete this thesis by proposing a symbol spotting approach that uses sparse representations for the coding of a visual vocabulary.
Kesiman, Made Windu Antara. "Document image analysis of Balinese palm leaf manuscripts." Thesis, La Rochelle, 2018. http://www.theses.fr/2018LAROS013/document.
Full text
The collection of palm leaf manuscripts is an important part of Southeast Asian people's culture and life. Following the increase in digitization projects for heritage documents around the world, the collections of palm leaf manuscripts in Southeast Asia finally attracted the attention of researchers in document image analysis (DIA). The research work conducted for this dissertation focused on the heritage documents of the collection of palm leaf manuscripts from Indonesia, especially the palm leaf manuscripts from Bali. This dissertation took part in exploring DIA research for palm leaf manuscript collections, which offer new challenges because they use palm leaf as a writing medium and carry a language and script that have never been analyzed before. Motivated by the contextual situation and real condition of the palm leaf manuscript collections in Bali, this research tried to bring added value to digitized palm leaf manuscripts by developing tools to analyze, transliterate and index their content. These systems aim at making palm leaf manuscripts more accessible, readable and understandable to a wider audience and to scholars and students all over the world. This research developed a DIA system for document images of palm leaf manuscripts that includes several image processing tasks, beginning with digitization of the document, ground truth construction, binarization, and text line and glyph segmentation, and ending with glyph and word recognition, transliteration, and document indexing and retrieval. In this research, we created the first corpus and dataset of Balinese palm leaf manuscripts for the DIA research community. We also developed a glyph recognition system and an automatic transliteration system for the Balinese palm leaf manuscripts.
This dissertation proposed a complete scheme of spatially categorized glyph recognition for the transliteration of Balinese palm leaf manuscripts. The proposed scheme consists of six tasks: text line and glyph segmentation, the glyph ordering process, the detection of the spatial position for the glyph category, global and categorized glyph recognition, option selection for glyph recognition, and transliteration with a phonological rules-based machine. An implementation of knowledge representation and phonological rules for the automatic transliteration of Balinese script on palm leaf manuscripts is proposed. The adaptation of a segmentation-free LSTM-based transliteration system, with a generated synthetic dataset and training schemes at two different levels (word level and text line level), is also proposed.