Dissertations / Theses on the topic 'Modèle de l'utilisateur'
Consult the top 33 dissertations / theses for your research on the topic 'Modèle de l'utilisateur.'
Thomas, Isabelle. "Modèle de l'utilisateur, modèle de la tâche et modèles d'explications pour expliquer dans le contexte des tâches de conception : application en conception d'équipements électroniques." Paris 13, 1993. http://www.theses.fr/1993PA132026.
Nadal, Maurin. "Assistance à l'utilisateur novice dans le cadre du dessin de graphe à l'aide de méthodes d'apprentissage." Phd thesis, Université Sciences et Technologies - Bordeaux I, 2013. http://tel.archives-ouvertes.fr/tel-00981993.
Laitano, María Inés. "Le modèle trifocal : une approche communicationnelle des interfaces numériques : Contributions à la conception d'interfaces accessibles." Thesis, Paris 8, 2015. http://www.theses.fr/2015PA080025.
An object of study for several disciplines, digital interfaces appear as a heterogeneous object: a human-computer dialogue, a reflection of a mental model, an instrument, a set of semiotic signs... All these dimensions, addressed individually by disciplinary approaches, have never been gathered in a common paradigmatic framework. This thesis argues that interfaces, as a complex object of study, must be addressed in a framework capable of dealing with this complexity. It proposes to do so through Systemic Communication Theory, thinking not in terms of interface quality attributes (usability, communicability, conviviality...) but in terms of meanings. This implies a human-centered model and provides an integrated point of view enabling the design of new interfaces as well as their implementation in numerous contexts and sensory modalities. The trifocal model is thus a systemic approach to communication via the interface. It studies the relationships between user and machine, between user and the object of his activity, and between user and designer, as well as the emergent properties of the system. The trifocal model provides an interface description transposable from one modality to another. It makes it possible, on the one hand, to study the meaning of non-visual interfaces and, on the other, to translate interfaces from one modality to another. The trifocal model takes a fresh look at designing accessible interfaces, since it complements existing methods with new analytical dimensions.
Krichene, Haná. "SCAC : modèle d'exécution faiblement couplé pour les systèmes massivement parallèles sur puce." Thesis, Lille 1, 2015. http://www.theses.fr/2015LIL10093.
This work proposes an execution model for massively parallel systems aiming to ensure that communications are overlapped by computations. The execution model defined in this PhD thesis is named SCAC: Synchronous Communication, Asynchronous Computation. This weakly coupled model separates the execution of communication phases from that of computation phases in order to facilitate their overlapping, thus hiding the data transfer time. To allow the simultaneous execution of these two phases, we propose an approach based on three levels: two globally-centralized/locally-distributed hierarchical control levels and a parallel computation level. A generic and parametric implementation of the SCAC model was performed to fit different applications. This implementation allows the designer to choose the system components (from pre-designed ones) and to set their parameters in order to build the adequate SCAC configuration for the target application. An analytical estimation is proposed to evaluate the performance of an application running in SCAC mode. This estimation is used to predict the execution time without going through a physical implementation, in order to facilitate parallel program design and SCAC architecture configuration. The SCAC model was validated by simulation, synthesis and implementation on an FPGA platform with different examples of parallel computing applications. The comparison of the results obtained by the SCAC model with other models shows its effectiveness in terms of flexibility and execution time acceleration.
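As an aside, the communication/computation overlap that SCAC formalizes in hardware can be illustrated in software with a minimal producer/consumer sketch. This is a generic illustration of the overlap principle only, not the SCAC implementation; the names fetch and process are hypothetical stand-ins.

```python
import threading
from queue import Queue

def fetch(tile):
    # Stand-in for a synchronous data transfer (hypothetical).
    return [tile] * 4

def process(data):
    # Stand-in for a parallel computation kernel (hypothetical).
    return sum(data)

def communicate(tiles, buffers):
    # Communication phase: hand each tile over as soon as it arrives.
    for tile in tiles:
        buffers.put(fetch(tile))
    buffers.put(None)  # sentinel: no more data

def compute(buffers, results):
    # Computation phase: consume tiles while the next transfer is
    # already in flight, hiding the transfer time.
    while (data := buffers.get()) is not None:
        results.append(process(data))

results = []
buffers = Queue(maxsize=2)  # double buffering bounds the overlap depth
t = threading.Thread(target=communicate, args=(range(8), buffers))
t.start()
compute(buffers, results)
t.join()
print(results)
```

The bounded queue plays the role of the decoupling buffer: transfer N+1 proceeds while computation N runs, which is the essence of covering data transfer time with computation.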
Lavergne-Boudier, Valérie. "Système dynamique d'interrogation des bases de données bibliographiques." Paris 7, 1990. http://www.theses.fr/1990PA077243.
Gantier, Samuel. "Contribution au design du documentaire interactif : jonction et disjonction des figures de l'utilisateur de B4, fenêtres sur tour, coproduit par France Télévisions." Thesis, Valenciennes, 2014. http://www.theses.fr/2014VALE0031.
In the last few years, several hundred interactive documentaries (i-docs) have been published on the Internet. While many media professionals prize the i-doc format, its design remains a challenging feat. Given this, what light do film documentary theories and digital media shed on the mediated metamorphoses that typify the “New Writings” movement? What are the communicational and ontologico-aesthetic issues of i-docs? What role and what power should an instance of enunciation accord to the “actant-spectator”? In response to these questions, our study of the current state of the French-speaking production scene brought to the fore a typology of interaction modes. Following this observation, an ethnographic approach, based on a participant observation method, questioned the overall sociotechnical and semio-graphic issues that marked the six-month design process of an i-doc called B4, fenêtres sur tour for the State-run France Télévisions. A Grounded Theory analysis of the data highlighted the different dimensions of a more or less implicitly negotiated Model User shared by the actors. Finally, the purported uses of i-docs were questioned by evaluating users’ experience. The junctions and disjunctions between the Model User, the Statistical User and the Empirical User contributed to a better grasp of the design of the hybrid, non-stabilised i-doc format.
Lequay, Victor. "Une approche ascendante pour la gestion énergétique d'une Smart-Grid : modèle adaptatif et réactif fondé sur une architecture décentralisée pour un système générique centré sur l'utilisateur permettant un déploiement à grande échelle." Thesis, Lyon, 2019. http://www.theses.fr/2019LYSE1304.
The field of Energy Management Systems for Smart Grids has been extensively explored in recent years, with many different approaches described in the literature. In collaboration with our industrial partner Ubiant, which deploys smart home solutions, we identified the need for a highly robust and scalable system that would exploit the flexibility of residential consumption to optimize energy use in the smart grid. At the same time, we observed that the majority of existing works focus on the management of production and storage only, and that none of the proposed architectures are fully decentralized. Our objective was then to design a dynamic and adaptive mechanism to leverage every existing flexibility while ensuring the user's comfort and a fair distribution of the load-balancing effort, but also to offer a modular and open platform with which a large variety of devices, constraints and even algorithms could be interfaced. In this thesis we realised (1) an evaluation of state-of-the-art techniques in real-time individual load forecasting, whose results led us to follow (2) a bottom-up and decentralized approach to a distributed residential load-shedding system relying on a dynamic compensation mechanism to provide a stable curtailment. On this basis, we then built (3) a generic user-centered platform for energy management in smart grids allowing the easy integration of multiple devices, quick adaptation to changing environments and constraints, and efficient deployment.
Perron, Marc. "Conception d'une stratégie de commande vectorielle énergétiquement optimale de la machine asynchrone basée sur un modèle des pertes neuronal sur FPGA." Thesis, Université Laval, 2009. http://www.theses.ulaval.ca/2009/26247/26247.pdf.
Khenak, Nawel. "Vers un modèle unifié de la présence spatiale et de ses facteurs : application à l’étude de la téléprésence en environnements immersifs." Thesis, université Paris-Saclay, 2020. http://www.theses.fr/2020UPASG024.
“Spatial presence” is a psycho-cognitive phenomenon that refers to the sensation experienced by people when they feel they are physically located in a given space. This feeling has already been studied extensively. However, like other psycho-cognitive phenomena, no consensus has yet been reached on its definition or how to measure it. This thesis therefore attempts to establish a model of spatial presence by highlighting the factors involved in its emergence and the tools to measure its intensity. Thus, while taking into account new technological advances in the field of Virtual Reality (VR), the first contribution of this thesis was to describe the main factors of spatial presence, gathered into a conceptual model that takes existing theories into account. Many factors of differing importance exist, the best known being the “immersion” factor. Particular attention was paid to this factor through the evaluation of spatial presence in highly immersive environments. Yet the evaluation of spatial presence can only be done by finding stable ways to quantify it. This task proves complicated due to the subjective nature of the phenomenon: the most commonly used tools are questionnaires administered to users after they have completed an experiment in a controlled environment. However, other more objective tools exist, such as the behavioral analysis of users interacting within an environment. Nevertheless, the reliability of these measurements is still open to debate. The second contribution of this thesis was thus to find reliable and valid tools for measuring spatial presence, not only by combining the existing subjective and objective approaches but also by proposing new measures to evaluate this phenomenon and its factors. To this end, a spatial presence questionnaire was developed. In addition, different measures based on human behavior were assessed through experiments in which users had to perform specific tasks in different immersive environments. Particular attention was also paid to remote environments that can generate the feeling of “telepresence”, which has been studied much less than presence in virtual environments. The advantage of remote environments is that actions in such environments have real consequences in our world, in contrast to computer-generated environments in which actions remain confined. Users’ awareness of the impact of their actions in remote environments influences their behavior and could therefore affect the presence experienced. Consequently, spatial telepresence may highlight different factors compared to those generated in virtual environments. Besides, telepresence can be compared to the “natural” presence experienced by users when they are physically located in real environments without any technological mediation. Such a comparison makes it possible to highlight the effect of the technologies used on the feeling of presence. Indeed, the results of the experiments conducted during this thesis, comparing real, remote and virtual environments, showed a significant difference between the environments in the degree of spatial presence as well as in some of its factors, such as the “degree of reality” attributed to the environment. In conclusion, the work carried out during this thesis made it possible to develop a model of spatial presence, analyze its relationship with several factors, and find suitable tools to measure them.
The model can now be used to characterize spatial presence in order to improve VR systems, in terms of either performance or user well-being in concrete contexts. In particular, the Covid-19 pandemic that occurred during this thesis is a very good example encouraging the study and use of spatial telepresence systems.
Pangracious, Vinod. "High Performance Three-Dimensional Tree-based FPGA Architecture using 3D Technology Process." Thesis, Paris 6, 2014. http://www.theses.fr/2014PA066480.
Today, FPGAs (Field Programmable Gate Arrays) have become important actors in a computational devices domain that was originally dominated by microprocessors and ASICs. The big challenge in FPGA design is to find a good trade-off between flexibility and performance. Three factors combine to determine the characteristics of an FPGA: the quality of its architecture, the quality of the CAD tools used to map circuits into the FPGA, and its electrical technology design. This dissertation explores the development of a three-dimensional (3D) physical design methodology and exploration tools for a 3D Tree-based stacked FPGA architecture to improve area, density, power and performance. The first part of the dissertation studies the existing variants of the 2D Tree-based FPGA architecture and the impact of 3D migration on its topology. Numerous studies have shown the characteristics of Tree-based interconnect networks, how they scale in terms of area and performance, and, empirically, how they relate to particular designs. Nevertheless, there had been no breakthrough in optimizing these network topologies to exploit their advantages in area and power consumption, or in dealing with the longer wire-length issues that impede the performance of Tree-based FPGA architectures. Over the course of this work, we came to understand that the speed could not be optimized unless we broke the very backbone of the Tree-based interconnect network and rebuilt it using 3D technology. 3D ICs can alleviate interconnect delay issues by offering flexibility in system design, placement and routing. A new set of 3D FPGA architecture exploration tools and technologies was developed to validate the advances in performance and area. The second contribution of this thesis is the development of a 3D physical design methodology and tools, using existing 2D CAD tools, for the implementation of a 3D Tree-based FPGA demonstrator. During the design process, we addressed many specific issues that 3D designers will encounter when dealing with tools that are not specifically designed to meet their needs. In contrast, thermal performance is expected to worsen with the use of 3D integration. We examined precisely how thermal behavior scales in 3D integration and determined how the temperature can be controlled using thermal design techniques.
Mbatchou, Nkwetchoua Guy Merlin. "Vers un modèle d'accompagnement de l'apprentissage dans les Learning Management Systems : une approche basée sur la modélisation multi-scénarios d'un cours et la co-construction du scénario par les apprenants." Electronic Thesis or Diss., Sorbonne université, 2019. https://accesdistant.sorbonne-universite.fr/login?url=https://theses-intra.sorbonne-universite.fr/2019SORUS257.pdf.
This thesis contributes to supporting learning in Technology Enhanced Learning environments in order to improve the learning process. In a context where we do not have profiles with which to adapt learning, we opted for learner-directed learning under constraints defined by the teacher. We provide teachers with a model for designing multi-scenario courses. The model is inspired by Competence-based Knowledge Space Theory, to which three extensions are added to correct its weaknesses in a context of initial or lifelong training. The model is based on learning objectives and the prerequisite relationships among them to produce multiple scenarios in a reasonable time. A survey of teachers shows a priori acceptability of the model. We allow each learner to co-construct his or her scenario during learning. Co-construction results from the fact that the scenario must respect constraints defined by the teacher, to avoid illogical choices that may lead to failure or even dropout. The learning process is based on the choice and change of objectives to achieve and activities to do. A survey of learners shows a priori acceptability of the model. The models are implemented as a plugin that can be integrated into Moodle. An experiment with teachers allowed them to detect inconsistencies and deficiencies in their courses. We observed a variety of scenarios built by the students.
Dedieu, Sébastien. "Adaptation d'un système de reconstruction de modèles numériques 3D à partir de photographies face aux connaissances de l'utilisateur." Bordeaux 1, 2001. http://www.theses.fr/2001BOR12481.
Ahmed, Sameer. "Application d'un langage de programmation de type flot de données à la synthèse haut-niveau de système de vision en temps-réel sur matériel reconfigurable." Phd thesis, Université Blaise Pascal - Clermont-Ferrand II, 2013. http://tel.archives-ouvertes.fr/tel-00844399.
Mahmoudian Bigdoli, Navid. "Compression for interactive communications of visual contents." Thesis, Rennes 1, 2019. http://www.theses.fr/2019REN1S072.
Interactive images and videos have received increasing attention due to the interesting features they provide. With these contents, users can navigate within the content and explore the scene from the viewpoint they desire. The characteristics of these media make their compression very challenging. On the one hand, the data is captured in high resolution (very large) to provide a real sense of immersion. On the other hand, the user requests only a small portion of the content during navigation. This requires two characteristics: efficient compression of data by exploiting redundancies within the content (to lower the storage cost), and random access ability to extract the part of the compressed stream requested by the user (to lower the transmission rate). Classical compression schemes cannot handle random accessibility because they use a fixed, pre-defined order of sources to capture redundancies. The purpose of this thesis is to provide new tools for interactive compression schemes of images. To that end, as a first contribution, we propose an evaluation framework by which we can compare different interactive image/video compression schemes. Moreover, earlier theoretical studies have shown that random accessibility can be achieved using incremental codes with the same transmission cost as non-interactive schemes and with reasonable storage overhead. Our second contribution is to build a generic coding scheme that can deal with various interactive media. Using this generic coder, we then propose compression tools for 360-degree images and 3D model texture maps with random access ability to extract the requested part. We also propose new representations for these modalities. Finally, we study the effect of model selection on the compression rates of these interactive coders.
Sahade, Mohamad. "Un démonstrateur automatique basé sur la méthode de tableaux pour les logiques modales : implémentation et études de stratégies." Toulouse 3, 2006. http://www.theses.fr/2006TOU30055.
Li, Jing. "Design of mechatronic products based on user-customized configuration : an application for industrial robots." Thesis, Compiègne, 2015. http://www.theses.fr/2015COMP2211/document.
In today's market, obtaining a variety of products through configuration design has become increasingly common. However, as the market has developed, customers are no longer satisfied with companies merely offering a variety of products; they increasingly demand to participate in the configuration design process themselves, so that they can obtain fully personalized products. Customer participation changes the design process, the company's management model, and more. Addressing this problem, this thesis takes the industrial robot as an example and studies the management issues related to customer involvement in design, in order to resolve the contradiction between product diversification and personalized requirements on the one hand and long design cycles and high manufacturing costs on the other. Firstly, a user-customized configuration design pattern is presented. The theoretical sources of the pattern are introduced and the related concepts are expounded. The corresponding business model is given, and the key technologies for realizing it are studied. System dynamics models are established for the user-customized configuration design business model and for the traditional business model of industrial robots, using AnyLogic simulation software. Secondly, component-based theory and methods are studied, including the formal description of things, ontology representation, componentization and servitization. On this basis, a componentization description model is established for product parts, and the model is represented as a service-component. The formation process and extension method of the service-component are then introduced. An example of industrial robot component modeling is analyzed, including establishing an industrial robot domain ontology in Protégé and describing, instantiating and extending components. Thirdly, the industrial robot user-customized configuration design template is constructed, through which users can obtain an industrial robot meeting their constraints by setting parameters. Kinematics and dynamics analyses of the template are carried out with a Simscape model and the dynamic parameters are analyzed; finite element analysis of the template is performed in ANSYS, including static and modal analysis. The flow of parameters through the template is analyzed, and the configuration template is then examined in application, taking the user-customized configuration design of an industrial robot as an example. Fourthly, the internal algorithms of user-customized configuration design are researched. A platform-based, user-led configuration design process is constructed, and the internal algorithms that keep the design running smoothly are studied, including degree-of-freedom determination, fuzzy demand calculation, service-component configuration and configuration program evaluation. A case study of these algorithms is also presented. Finally, on the basis of the preceding sections, a prototype of the open design platform is designed. Based on system requirements analysis and system design, the main pages of the platform are designed and the key functions are introduced.
Vidal, Jorgiano. "Dynamic and partial reconfigurable embedded systems design with UML." Lorient, 2010. http://www.theses.fr/2010LORIS203.
Advances in reconfigurable technologies allow entire multiprocessor systems to be implemented in a single FPGA (Multiprocessor System on Programmable Chip, MPSoPC). In order to speed up the design of such heterogeneous systems, new modelling techniques must be developed. Furthermore, dynamic execution is a key point for modern systems, i.e. systems that can partially change their behavior at run time in order to adjust their execution to the environment. UML (Unified Modeling Language) has been used for software modeling since its first version. Recently, with the new modeling concepts added in later versions (UML 2), it has become more and more suitable for hardware modeling. This thesis is a contribution to the MOPCOM project, in which we propose a set of modeling techniques for building complex embedded systems using UML. The modeling techniques proposed here describe the system to be built in one complete model. Moreover, we propose a set of transformations that allows the system to be generated automatically. Our approach allows the modelling of dynamic applications on reconfigurable platforms. Design-time reductions of up to 30% have been measured using our methodology.
Ginon, Blandine. "Modèles et outils génériques pour mettre en place des systèmes d’assistance épiphytes." Thesis, Lyon, INSA, 2014. http://www.theses.fr/2014ISAL0080/document.
This thesis in computer science is situated more particularly in the field of knowledge engineering. It concerns the a posteriori setup of assistance systems in existing applications through a generic approach. In order to enable the setup of an assistance system in an existing application without needing to redevelop it or to access its source code, we chose a fully epiphytic approach. We proposed a process for adjoining an assistance system to a target application in an epiphytic manner. It consists of two phases: the specification and the execution of the assistance. The assistance specification phase enables an expert, the assistance designer, to represent his knowledge of the target application and of the assistance that he wishes to set up. The assistance execution phase uses this knowledge to provide the target application's end users with the assistance intended by the designer. To make possible, on the one hand, the specification of assistance by a designer who is potentially not a computer scientist and, on the other hand, the automatic execution of the specified assistance, we propose a pivot language: aLDEAS. This graphical language makes it possible to define very varied assistance systems in the form of a set of rules. Our theoretical propositions have been implemented in the SEPIA system, which consists of different tools. The SEPIA assistance editor is aimed at assistance designers and implements the assistance specification phase. It provides assistance designers with an interface to handle aLDEAS elements in order to define assistance rules. These rules can then be executed by the SEPIA generic assistance engine, which implements the assistance execution phase and provides the target application's end users with the specified assistance. For this purpose, the assistance engine manages different epiphytic tools in order to monitor and inspect the target application and to perform the assistance actions. The models implemented in the SEPIA system are generic, but they make possible the setup of assistance systems specifically suited, on the one hand, to their target application and, on the other hand, to the end users.
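To give a flavor of what rule-based epiphytic assistance looks like, here is a minimal event-condition-action sketch. The syntax is illustrative Python, not the aLDEAS graphical language, and every identifier in it is hypothetical.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class AssistanceRule:
    # An event-condition-action rule, the general shape such assistance rules take.
    event: str                          # e.g. an event observed in the target application
    condition: Callable[[dict], bool]   # guard evaluated against the observed context
    action: Callable[[dict], None]      # assistance action to perform

rules = [
    AssistanceRule(
        event="button_clicked",
        condition=lambda ctx: ctx.get("first_use", False),
        action=lambda ctx: print("Hint: this button saves your work."),
    ),
]

def on_event(name: str, context: dict) -> None:
    # The engine monitors the target application and fires matching rules.
    for rule in rules:
        if rule.event == name and rule.condition(context):
            rule.action(context)

on_event("button_clicked", {"first_use": True})
```

The point of the epiphytic approach is that the monitoring (on_event) observes the target application from the outside, so no source code of that application is touched.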
Daga, Jean-Michel. "Modélisation des performances temporelles des circuits CMOS submicroniques au niveau porte logique." Montpellier 2, 1997. http://www.theses.fr/1997MON20132.
Chappet de Vangel, Benoît. "Modèles cellulaires de champs neuronaux dynamiques." Thesis, Université de Lorraine, 2016. http://www.theses.fr/2016LORR0194/document.
In the constant search for designs going beyond the limits of the von Neumann architecture, non-conventional computing offers various solutions such as neuromorphic engineering and cellular computing. Like von Neumann, who roughly reproduced brain structures to design computer architecture, neuromorphic engineering takes its inspiration directly from neurons and synapses using an analog substratum. Cellular computing's influence comes from natural substrata (chemistry, physics or biology) imposing a locality of interactions from which organisation and computation emerge. Research on neural mechanisms has demonstrated several emergent properties of neurons and synapses. One of them is the attractor dynamics described in different frameworks by Amari with dynamic neural fields (DNF) and by Amit and Zhang with continuous attractor neural networks. These neural fields have various computing properties and are particularly relevant for spatial representations and the early stages of visual cortex processing. They have been used, for instance, in autonomous robotics, classification and clustering. Like many neural computing models, they are robust to noise and faults and are thus good candidates for noisy hardware computation models which would make it possible to keep up with or surpass Moore's law. Indeed, transistor area reduction is leading to more and more noise, and relaxing the requirement of approximately 0% faults during the production and operation of integrated circuits would lead to tremendous savings. Furthermore, progress towards many-core circuits with more and more cores leads to difficulties due to the centralised computation mode of usual parallel algorithms and their communication bottleneck. Cellular computing is the natural answer to these problems. Based on these arguments, the goal of this thesis is to enable rich computations and applications of dynamic neural fields on hardware substrata, with neuro-cellular models enabling true locality, decentralization and scalability of the computations. This work is an attempt to go beyond von Neumann architectures by using cellular and neuronal computing principles. However, we stay in the digital framework by exploring the performance of the proposed architectures on FPGAs. Analog hardware such as VLSI would also be very interesting but is not studied here. The main contributions of this work are: 1) neuromorphic DNF computation; 2) local DNF computations with randomly spiking dynamic neural fields (the RSDNF model); 3) local and asynchronous DNF computations with cellular arrays of stochastic asynchronous spiking DNFs (the CASAS-DNF model).
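For readers unfamiliar with the formalism cited here, Amari's dynamic neural field is classically written as an integro-differential equation over the field; the following is the standard textbook form, not a model specific to this thesis:

```latex
\tau \frac{\partial u(x,t)}{\partial t} = -u(x,t)
  + \int_{\Omega} w(x - x')\, f\bigl(u(x',t)\bigr)\, \mathrm{d}x'
  + I(x,t) + h
```

Here u(x,t) is the field potential at position x, w the lateral interaction kernel (typically locally excitatory and laterally inhibitory), f a firing-rate nonlinearity, I(x,t) the external input and h the resting level. The attractor dynamics mentioned in the abstract emerge from this competition between local excitation and lateral inhibition.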
Infantes, Guillaume. "Apprentissage de modèles de comportement pour le contrôle d'exécution et la planification robotique." Phd thesis, Université Paul Sabatier - Toulouse III, 2006. http://tel.archives-ouvertes.fr/tel-00129505.
Fahssi, Racim Mehdi. "Identification systématique et représentation des erreurs humaines dans les modèles de tâches." Thesis, Toulouse 3, 2018. http://www.theses.fr/2018TOU30304/document.
In user-centered approaches, the techniques, methods and development processes used aim to know and understand the users (analyzing their needs, evaluating their ways of using the systems) in order to design and develop usable systems in line with their behavior, skills and needs. Among the techniques used to guarantee usability, task modeling makes it possible to describe the objectives and activities of the users. With task models, human factors specialists can analyze and evaluate the effectiveness of interactive applications. This approach to task analysis and modeling has always focused on the explicit representation of the standard behavior of the user, because human errors are not part of the users' objectives and are therefore excluded from the task description. This vision of error-free activities, widely followed by the human-machine interaction community, is very different from the Human Factors community's vision of user tasks. Since its inception, the Human Factors community has been interested in understanding the causes of human error and its impact on performance, but also in major aspects such as operational reliability and the reliability of users and their work. The objective of this thesis is to demonstrate that it is possible to systematically describe, in task models, user errors that may occur during the performance of user tasks. For this demonstration, we propose an approach based on task models associated with a human error description process and supported by a set of tools. This thesis presents the results of applying the proposed approach to an industrial case study in the application domain of aeronautics.
Leroux-Beaudout, Renan. "Méthodologie de conception de systèmes de simulations en entreprise étendue, basée sur l'ingénierie système dirigée par les modèles." Thesis, Toulouse 3, 2020. http://www.theses.fr/2020TOU30089.
This manuscript presents a methodology for the design of "early" simulations in the extended enterprise, based on model-driven systems engineering. The goal is to allow the system architect to explore alternative solutions and to verify and/or validate the system architecture being designed, in relation to the user requirements. This methodology is divided into two complementary axes: the method part (which is new) and the means of execution, without which there can be no simulation. The new method is based on the following principle: start from the user requirements to create the system architecture model, then derive the simulation architecture, develop the executable models, and run the simulation in relation to verification and/or validation objectives. By doing this, potential differences in interpretation between the system architecture model and the simulation models are removed, or at least reduced, compared to a traditional approach. The method is of matrix type: the columns represent the actors, while the rows correspond to the different steps of the MBSE method used by the system architect for the product, including the refinement steps. The actors are the system architect for the product (SyA); a first new actor introduced by this method, the system architect for the simulation (SiA); the developers of the simulation executable models (SMD); and a second new actor in charge of executing the simulation (SEM), analyzing its qualities and producing results exploitable by the system architect for the product. As the method relies on a matrix structure, the SyA can request simulations either in depth, to specify a particular point of his model, or in extension, to check the proper agreement of the functions with one another. With this matrix approach, the system architect for the product can reuse functions already defined during the upstream or downstream stages of previous decompositions, with overall savings in time and cost and increased confidence. The second axis of this methodology is the realization of an extended enterprise (EE) cosimulation platform, which is a project in itself. Based on a proposed requirements specification, MBSE was used to define a functional and physical architecture. The architecture of this platform can be modified according to the simulation needs expressed by the simulation architect; this is one of his prerogatives. The proposal introduces a third new actor: the Infrastructure Project Manager (IPM), who is in charge of coordinating the realization of the cosimulation platform within his company. For an EE of federated type, that is to say from contractor to subcontractor, two further actors are introduced: the IPM supervisor, whose role is to link the IPMs in order to solve administrative and interconnection problems, and the person responsible for simulation execution, who coordinates the implementation of the simulations with the SEM of each partner, ensures the launches, and returns the results to all partners.
Duruisseau, Mickaël. "Améliorer la compréhension d’un programme à l’aide de diagrammes dynamiques et interactifs." Thesis, Lille 1, 2019. http://www.theses.fr/2019LIL1I042/document.
Developers play a dominant role in software development. In this context, they must perform a succession of elementary tasks (analysis, coding, linking with existing code...), but in order to perform these tasks a developer must regularly change his working context (searching for information, reading code...) and analyze code that is not his own. These actions require a high adaptation time and reduce the developer's efficiency. Software modeling is a solution to this type of problem: it offers an abstract view of a piece of software, the links between its entities, and the algorithms used. However, Model-Driven Engineering (MDE) is still underused in industry. In this thesis, we propose a tool to improve program comprehension using dynamic and interactive diagrams. This tool is called VisUML and focuses on the developer's main coding activity. VisUML provides views (on web pages or in modeling tools) synchronized with the code. The generated UML diagrams are interactive and allow fast navigation with and within the code. This navigation reduces the loss of time and context due to activity changes by providing, at any time, an abstract diagram view of the elements currently open in the developer’s coding tool. In the end, VisUML was evaluated by twenty developers as part of a qualitative experiment intended to estimate the usefulness of such a tool.
Zhang, Man. "Modeling of Multiphysics Electromagnetic & Mechanical Coupling and Vibration Controls Applied to Switched Reluctance Machine." Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLS287/document.
Due to their inherent advantages, Switched Reluctance Machines (SRMs) are appealing to the automotive industry. However, automotive traction is a very noise-sensitive application in which the acoustic behavior of the power train may make the difference between market success and market failure. To make the SRM more competitive in automotive applications, this work focuses on control strategies to improve the acoustic behavior of the SRM through vibration reduction. A semi-analytical electromagnetic/structural multi-physics model is proposed, based on the simulation results of numerical computation. This multi-physics model is composed of electromagnetic and structural models, which are connected by the radial force. Two control strategies are proposed. The first reduces vibration by varying the turn-off angle; the frequency of the variable signal is based on the mechanical properties of the switched reluctance machine. In addition, a uniformly distributed random function is introduced to avoid locally high vibration components. The second is based on Direct Force Control (DFC), which aims to obtain a smoother total radial force in order to reduce vibration. A reference current adapter (RCA) is proposed to limit the torque ripple introduced by the DFC, which is caused by the absence of a current limitation. This second vibration-reduction strategy, named DFC&RCA, is evaluated by experimental tests on an 8/6 SRM prototype. A hardware/software partitioning solution is proposed to implement this method, in which an FPGA board is used in combination with a microprocessor.
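As context for the electromagnetic/structural coupling described above, the radial force acting on a stator pole is commonly estimated from the air-gap flux density with a Maxwell stress approximation. The first-order form below is a standard textbook estimate, given for orientation only; it is not the semi-analytical model developed in the thesis:

```latex
F_r \approx \frac{B_g^{2}\, S}{2\mu_0}
```

where B_g is the air-gap flux density, S the overlapping pole surface area and μ0 the permeability of free space. Because the force grows with the square of the flux density, abrupt current turn-off produces sharp radial force steps, which is why smoothing the total radial force (as DFC does) reduces vibration.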
Baklouti, Kammoun Mouna. "Méthode de conception rapide d’architecture massivement parallèle sur puce : de la modélisation à l’expérimentation sur FPGA." Thesis, Lille 1, 2010. http://www.theses.fr/2010LIL10101/document.
The main purpose of this PhD is to contribute to the design and implementation of high-performance Systems-on-Chip to accelerate and facilitate the design and execution of systematic data-parallel applications. A massively parallel SIMD processing System-on-Chip named mppSoC is defined. This system is generic and parametric, so that it can be adapted to the application requirements. We propose a rapid and modular design method based on IP assembly to construct an mppSoC configuration. To this end, an IP library, mppSoCLib, is implemented. The designer can select the necessary components and define the parameters to implement the SIMD configuration satisfying his needs. An automated generation chain was developed, allowing the automatic generation of the VHDL code corresponding to an mppSoC configuration modeled at a high abstraction level (in UML). The generated code can be simulated and synthesized on FPGA. The developed chain allows the definition, at a high abstraction level, of an mppSoC configuration adequate for a given application. Based on the simulation of the automatically generated code, we can modify the SIMD configuration in a semi-automatic exploration process. We validated mppSoC in a real FPGA-based video application. In this context, a comparison between mppSoC and other embedded systems shows the sufficient performance and effectiveness of mppSoC.
Delomier, Yann. "Conception et prototypage de décodeurs de codes correcteurs d’erreurs à partir de modèles comportementaux." Thesis, Bordeaux, 2020. http://www.theses.fr/2020BORD0047.
Digital communications are ubiquitous in the communicating objects of everyday life. Evolving communication standards, shorter time-to-market, and heterogeneous applications make digital circuit design more challenging. Fifth-generation (5G) mobile technologies are an illustration of the current and future challenges. In this context, the design of digital architectures for the implementation of error-correcting code decoders often turns out to be especially difficult. High-Level Synthesis (HLS) is one of the computer-aided design (CAD) methodologies that facilitate the fast prototyping of digital architectures. This methodology is based on behavioral descriptions to generate hardware architectures. However, the design of efficient behavioral models is essential for the generation of high-performance architectures. The results presented in this thesis focus on the definition of efficient behavioral models for the generation of error-correcting code decoder architectures dedicated to LDPC codes and polar codes, the two families of error-correcting codes adopted in the 5G standard. The proposed behavioral models have to combine flexibility, fast prototyping and efficiency. A first significant contribution of this thesis is the proposal of two behavioral models that enable the generation of efficient hardware architectures for the decoding of LDPC codes. These models are generic and are associated with a flexible methodology that makes the exploration of the space of architectural solutions easier. Thus, a variety of trade-offs between throughput, latency and hardware complexity are obtained. Furthermore, this contribution represents a significant advance in the state of the art regarding the automatic generation of LDPC decoder architectures. Finally, the performance achieved by the generated architectures is similar to that of architectures handwritten with a usual CAD methodology. A second contribution is the proposal of a first behavioral model dedicated to the generation of hardware architectures of polar code decoders with a high-level synthesis methodology. This generic model also enables an efficient exploration of the architectural solution space. It should be noted that the performance of the synthesized polar decoders is similar to that of state-of-the-art polar decoding architectures. A third contribution concerns the definition of a polar decoder behavioral model based on a "list" algorithm, known as the successive cancellation list decoding algorithm. This decoding algorithm achieves higher decoding performance at the cost of a significant computational overhead, which can also be observed in the hardware complexity of the resulting decoding architecture. It should be emphasized that the proposed behavioral model is the first model for polar code decoders based on a "list" algorithm.
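For context, hardware LDPC decoders such as those targeted here are typically built around iterative message passing between variable nodes and check nodes. A widely used hardware-friendly variant is the min-sum check-node update shown below; this is the standard formula, given for illustration, since the abstract does not name the exact decoding algorithm used in the thesis:

```latex
m_{c \to v} = \Biggl( \prod_{v' \in N(c) \setminus \{v\}} \operatorname{sign}\bigl(m_{v' \to c}\bigr) \Biggr)
  \cdot \min_{v' \in N(c) \setminus \{v\}} \bigl| m_{v' \to c} \bigr|
```

Here N(c) is the set of variable nodes connected to check node c and the m terms are log-likelihood-ratio messages. Replacing the exact sum-product kernel with a sign/minimum computation is what makes this update cheap to implement in hardware, at a small cost in decoding performance.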
Goubali, Olga. "Apport des techniques de programmation par démonstration dans une démarche de génération automatique d'applicatifs de contrôle-commande." Thesis, Chasseneuil-du-Poitou, Ecole nationale supérieure de mécanique et d'aérotechnique, 2017. http://www.theses.fr/2017ESMA0003/document.
In the design of complex sociotechnical systems, business experts are responsible for writing the functional specifications because of their operational expert knowledge. However, these experts do not usually have the programming knowledge of those who design supervision systems. The business expert's task is then to define the functional specifications: s/he writes them in natural language and provides them to the designers of the supervision interface and the control-command code. The designers' job is then to implement and integrate the specifications into the system. Errors in interpreting the specifications stem from the differences in technical knowledge between the various partners involved in the project. Moreover, depending on the complexity of the system, the definition of functional specifications can be tedious. We propose a design approach based on task modelling and End-User Development in order to obtain functional specifications validated by the business experts (a mechanical engineer, for example). Model-driven engineering techniques are implemented to automatically generate the specification interface (which integrates a Recorder, a Generalizer, a Replayer and a Corrector), the supervision interface of the system to be piloted, and its control program. The technical feasibility of the proposed approach was demonstrated through a proof of concept, which was evaluated to demonstrate the interest of the approach in the design of supervision systems.
Cherif, Sana. "Approche basée sur les modèles pour la conception des systèmes dynamiquement reconfigurables : de MARTE vers RecoMARTE." Phd thesis, Université des Sciences et Technologie de Lille - Lille I, 2013. http://tel.archives-ouvertes.fr/tel-00998248.
Deest, Gaël. "Implementation trade-offs for FPGA accelerators." Thesis, Rennes 1, 2017. http://www.theses.fr/2017REN1S102/document.
Hardware acceleration is the use of custom hardware architectures to perform some computations faster or more efficiently than on general-purpose hardware. Accelerators have traditionally been used mostly in resource-constrained environments, such as embedded systems, where resource efficiency was paramount. Over the last fifteen years, with the end of empirical scaling laws, they have also made their way to datacenters and High-Performance Computing environments. FPGAs constitute a convenient implementation platform for such accelerators, allowing subtle, application-specific trade-offs between all performance metrics (throughput/latency, area, energy, accuracy, etc.). However, identifying good trade-offs is a challenging task, as the design space is usually extremely large. This thesis proposes design methodologies to address this problem. First, we focus on performance-accuracy trade-offs in the context of floating-point to fixed-point conversion. The use of fixed-point arithmetic instead of floating-point is an effective way to reduce hardware resource usage, but comes at a price in numerical accuracy. The validity of a fixed-point implementation can be assessed using either numerical simulations or analytical models derived from the algorithm. Compared to simulation-based methods, analytical approaches enable more exhaustive design-space exploration and can thus increase the quality of the final architecture. However, they are currently only applicable to limited sets of algorithms. In the first part of this thesis, we extend such techniques to multi-dimensional linear filters, such as image processing kernels. Our technique is implemented as a source-level analysis using techniques from the polyhedral compilation toolset, and validated against simulations with real-world input. In the second part of this thesis, we focus on iterative stencil computations, a naturally arising pattern found in many scientific and embedded applications. Because of this diversity, there is no single best architecture for stencils: each algorithm has unique computational features (update formula, dependences) and each application has different performance constraints and requirements. To address this problem, we propose a family of hardware accelerators for stencils, featuring carefully chosen design knobs, along with simple performance models to drive the exploration. Our architecture is implemented as an HLS-optimized code generation flow, and performance is measured through actual execution on the board. We show that these models can be used to identify the most interesting design points for each use case.
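To make the floating-point to fixed-point trade-off concrete, here is a minimal sketch of the kind of accuracy measurement involved. The Q-format helpers and the toy filter are hypothetical stand-ins, illustrating the principle rather than the thesis's analytical error models:

```python
def to_fixed(x: float, frac_bits: int) -> int:
    # Quantize to a signed fixed-point integer with frac_bits fractional bits.
    return round(x * (1 << frac_bits))

def to_float(q: int, frac_bits: int) -> float:
    # Convert a fixed-point integer back to a floating-point value.
    return q / (1 << frac_bits)

# Toy 3-tap filter evaluated in double precision and in Q8 fixed point.
coeffs = [0.25, 0.5, 0.25]
window = [0.1, 0.7, -0.3]
exact = sum(c * s for c, s in zip(coeffs, window))
acc = sum(to_fixed(c, 8) * to_fixed(s, 8) for c, s in zip(coeffs, window))
approx = to_float(acc, 16)  # a product of two Q8 values carries 16 fractional bits
print(abs(exact - approx))  # quantization error, to be weighed against area savings
```

Fewer fractional bits mean narrower multipliers and adders (less FPGA area) but a larger error; an analytical model predicts this error bound across the whole input space, where simulation can only sample it.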
Quadri, Imran Rafiq. "MARTE based model driven design methodology for targeting dynamically reconfigurable FPGA based SoCs." Phd thesis, Université des Sciences et Technologie de Lille - Lille I, 2010. http://tel.archives-ouvertes.fr/tel-00486483.
Jung, Aera. "JEM-EUSO prototypes for the detection of ultra-high-energy cosmic rays (UHECRs) : from the electronics of the photo-detection module (PDM) to the operation and data analysis of two pathfinders." Thesis, Sorbonne Paris Cité, 2017. http://www.theses.fr/2017USPCC108/document.
The JEM-EUSO (Extreme Universe Space Observatory on-board the Japanese Experiment Module) international space mission is designed to observe UHECRs by detecting the UV fluorescence light emitted by the so-called Extensive Air Showers (EAS) which develop when UHECRs interact with the Earth's atmosphere. The showers consist of tens of billions or more secondary particles crossing the atmosphere at nearly the speed of light, exciting nitrogen molecules which then emit light in the UV range. While this so-called “fluorescence technique” is routinely used from the ground, by operating from space JEM-EUSO will, for the first time, provide high-statistics observations of these events. Operating from space with a large field of view of ±30° allows JEM-EUSO to observe a much larger volume of atmosphere than is possible from the ground, collecting an unprecedented number of UHECR events at the highest energies. For the four pathfinder experiments built within the collaboration, we have been developing a common set of electronics, in particular the central data acquisition system, capable of operating from the ground, from high-altitude balloons, and from space. These pathfinder experiments all use a detector consisting of one Photo-detection Module (PDM) identical to the 137 that will be present on the JEM-EUSO focal surface. UV light generated by high-energy particle air showers passes the UV filter and impacts the Multi-Anode Photomultiplier Tubes (MAPMTs). Here UV photons are converted into electrons, which are multiplied by the MAPMTs and fed into Elementary Cell Application-Specific Integrated Circuit (EC-ASIC) boards, which perform the photon counting and charge estimation. The PDM control board interfaces with these ASIC boards, providing power and configuration parameters, collecting data and performing the level-1 trigger. I was in charge of designing, developing, integrating and testing the PDM control board for the EUSO-TA and EUSO-Balloon missions, as well as of testing the autonomous trigger algorithm, and I also performed analyses of the EUSO-Balloon flight data and of data from the EUSO-TA October 2015 run. In this thesis, I give a short overview of high-energy cosmic rays, including their detection techniques and the leading experiments (Chapter 1), describe JEM-EUSO and its pathfinders, including a description of each instrument (Chapter 2), and present the details of the design and fabrication of the PDM (Chapter 3) and the PDM control board (Chapter 4), as well as the EUSO-TA and EUSO-Balloon integration tests (Chapter 5). I report on the EUSO-Balloon campaign (Chapter 6) and its results (Chapter 7), including a specific analysis developed to search for global variations of the ground UV emissivity, and apply a similar analysis to data collected at the site of Telescope Array (Chapter 8). Finally, I present the implementation and testing of the first-level trigger (L1) within the FPGA of the PDM control board (Chapter 9). A short summary of the thesis is given in Chapter 10.
Ochoa, Ruiz Gilberto. "A high-level methodology for automatically generating dynamically reconfigurable systems using IP-XACT and the UML MARTE profile." Phd thesis, Université de Bourgogne, 2013. http://tel.archives-ouvertes.fr/tel-00932118.