Dissertations / Theses on the topic 'Tâches de performance'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the top 50 dissertations / theses for your research on the topic 'Tâches de performance.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.
Carastan, dos Santos Danilo. "Apprentissage sur heuristiques simples pour l'ordonnancement online de tâches parallèles." Thesis, Université Grenoble Alpes (ComUE), 2019. http://www.theses.fr/2019GREAM052.
High-Performance Computing (HPC) platforms are growing in size and complexity. Adversely, the power demand of such platforms has grown rapidly as well, and current top supercomputers require power at the scale of an entire power plant. In an effort to make more responsible use of this power, researchers are devoting a great deal of effort to devising algorithms and techniques to improve different aspects of performance, such as scheduling and resource management. But HPC platform maintainers are still reluctant to deploy state-of-the-art scheduling methods, and most of them revert to simple heuristics such as EASY Backfilling, which is based on a naive First-Come-First-Served (FCFS) ordering. Newer methods are often complex and obscure, and the simplicity and transparency of EASY Backfilling are too important to sacrifice. We first explored Machine Learning (ML) techniques to learn on-line parallel job scheduling heuristics. Using simulations and a workload generation model, we determined the characteristics of HPC applications (jobs) that lead to a reduction in the mean slowdown of jobs in an execution queue. Modeling these characteristics with a nonlinear function, and applying this function to select the next job to execute from the queue, improved the mean job slowdown on synthetic workloads. When applied to real workload traces from very different machines, these functions still yielded performance improvements, attesting to the generalization capability of the obtained heuristics. Second, using simulations and workload traces from several real HPC platforms, we performed a thorough analysis of the cumulative results of four simple scheduling heuristics (including EASY Backfilling). We also evaluated effects such as the relationship between job size and slowdown, the distribution of slowdown values, and the number of backfilled jobs, for each HPC platform and scheduling policy.
We show experimental evidence that one can only gain by replacing EASY Backfilling with the Smallest estimated Area First (SAF) policy with backfilling, as it improves performance by up to 80% on the slowdown metric while maintaining the simplicity and transparency of EASY. SAF reduces the number of jobs with large slowdowns, and the inclusion of a simple thresholding mechanism guarantees that no starvation occurs. Overall, we make the following remarks: (i) simple and efficient scheduling heuristics in the form of a nonlinear function of job characteristics can be learned automatically, though whether the reasoning behind their scheduling decisions is clear is open to argument; (ii) the area of a job (processing time estimate multiplied by the number of processors) seems to be a quite important property for good parallel job scheduling heuristics, since many of the heuristics (notably SAF) that achieved good performance take the job's area as input; (iii) the backfilling mechanism seems to always help increase performance, though it does not outperform a better sorting of the waiting queue, such as the sorting performed by SAF.
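The abstract above defines a job's "area" as its processing time estimate multiplied by its processor count, and SAF as ordering the waiting queue by smallest area first. As a rough illustration only, not the thesis's implementation, the ordering rule might be sketched as follows; the `Job` fields and the example jobs are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    walltime_estimate: float  # user-provided runtime estimate (seconds)
    processors: int           # number of processors requested

    @property
    def area(self) -> float:
        # "area" = processing time estimate x number of processors
        return self.walltime_estimate * self.processors

def saf_order(queue):
    """Order the waiting queue by Smallest estimated Area First (SAF)."""
    return sorted(queue, key=lambda j: j.area)

# Hypothetical waiting queue: areas are 6400, 40 and 400 respectively.
queue = [Job("A", 100.0, 64), Job("B", 10.0, 4), Job("C", 50.0, 8)]
print([j.name for j in saf_order(queue)])  # ['B', 'C', 'A']
```

In a real scheduler this ordering would be combined with a backfilling pass (running small jobs on idle processors without delaying the queue head) and the anti-starvation threshold the abstract mentions, both omitted here.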
Garcia, Pinto Vinicius. "Stratégies d'analyse de performance pour les applications basées sur tâches sur plates-formes hybrides." Thesis, Université Grenoble Alpes (ComUE), 2018. http://www.theses.fr/2018GREAM058/document.
Programming paradigms in High-Performance Computing have been shifting toward task-based models that are capable of adapting readily to heterogeneous and scalable supercomputers. The performance of task-based applications heavily depends on the runtime scheduling heuristics and on their ability to exploit computing and communication resources. Unfortunately, traditional performance analysis strategies are unfit to fully understand task-based runtime systems and applications: they expect a regular behavior with communication and computation phases, while task-based applications demonstrate no clear phases. Moreover, the finer granularity of task-based applications typically induces a stochastic behavior that leads to irregular structures that are difficult to analyze. In this thesis, we propose performance analysis strategies that exploit the combination of application structure, scheduler, and hardware information. We show how our strategies can help to understand performance issues of task-based applications running on hybrid platforms. Our performance analysis strategies are built on top of modern data analysis tools, enabling the creation of custom visualization panels that allow understanding and pinpointing performance problems incurred by bad scheduling decisions and incorrect runtime system and platform configuration. By combining simulation and debugging, we are also able to build a visual representation of the internal state and the estimations computed by the scheduler when scheduling a new task. We validate our proposal by analyzing traces from a Cholesky decomposition implemented with the StarPU task-based runtime system and running on hybrid (CPU/GPU) platforms. Our case studies show how to enhance the task partitioning among the multi-(GPU, core) to get closer to theoretical lower bounds, how to improve MPI pipelining in multi-(node, core, GPU) to reduce the slow start on distributed nodes, and how to upgrade the runtime system to increase MPI bandwidth. By employing simulation and debugging strategies, we also provide a workflow to investigate, in depth, assumptions concerning the scheduler decisions. This allows us to suggest changes to improve the runtime system's scheduling and prefetch mechanisms.
Bouda, Florence. "Ecarts de performance dans des tâches spatiales de verticale et d'horizontale : Approche développementale." Montpellier 3, 1997. http://www.theses.fr/1997MON30007.
We analysed the influence of perceptive, representative and motor factors on performance in horizontal and vertical spatial tasks (organising and monitoring). The coordination between these factors depends on the type of experimental device and on the children's age, and this coordination may give rise to differences in performance. It is known that changing a number of characteristics of a task, such as the instructions given to the children or the materials, tends to alter the children's results (performances). Based on the earlier studies of Piaget and Inhelder (1947) on horizontal and vertical tasks, we carried out experiments with two types of devices: a "concrete" device, with real bowls and a mountain model, and a graphic one (drawings representing the real bowls and the mountain model). By changing the conditions in which these exercises took place (with and without a blindfold), we analysed the part played by certain factors in the children's performance. In our analysis, we concentrated on the representation of the task (in the determination of performance). Results show that both the type of device and the conditions in which the exercises took place affect the representation of the task and entail differences in performance. Performances with the concrete device are globally better than those with the drawing one. Differences in performance are most significant with children aged 7 to 8. An analysis of the properties of these differences allows us to distinguish differences due to the characteristics of the setup, whose effects children manage to overcome, from "real" differences, whose effects children cannot avoid. These results show the significance of visual data for the monitoring of results. Without visual data, i.e. with a blindfold, the children's performances improve.
In conclusion, the results are discussed considering the different conditions and factors (perceptive, representative and motor) which influence the execution of the tasks. Keywords: differences in performance, perception, representation, space, development.
Cojean, Terry. "Programmation des architectures hétérogènes à l'aide de tâches divisibles ou modulables." Thesis, Bordeaux, 2018. http://www.theses.fr/2018BORD0041/document.
Hybrid computing platforms equipped with accelerators are now commonplace in high performance computing. Due to this evolution, researchers have concentrated their efforts on designing tools that ease the programming of applications able to use all the computing units of such machines. The StarPU runtime system, developed in the STORM team at INRIA Bordeaux, was conceived as a target for parallel language compilers and specialized libraries (linear algebra, Fourier transforms, ...). To provide code and performance portability to applications, StarPU schedules dynamic task graphs efficiently on all the heterogeneous computing units of the machine. One of the most difficult aspects when expressing an application as a graph of tasks is choosing the granularity of the tasks, which typically goes hand in hand with the size of the blocks used to partition the problem's data. Small granularities do not allow efficient use of accelerators such as GPUs, which require a small number of tasks with massive inner data-parallelism to reach peak performance. Conversely, processors typically exhibit optimal performance with a large number of tasks of smaller granularity. The choice of task granularity not only depends on the type of computing unit on which the task will be executed; it also influences the quantity of parallelism available in the system: too many small tasks may flood the runtime system with overhead, whereas too few large tasks may create a parallelism deficiency. Currently, most approaches rely on finding a compromise granularity, which makes optimal use of neither the CPU nor the accelerator resources. The objective of this thesis is to solve this granularity problem by aggregating resources, so that they are viewed not as many small resources but as fewer, larger ones collaborating on the execution of the same task.
A theoretical machine and scheduling model representing this process has existed for several decades: parallel tasks. The main contributions of this thesis are to put this model to practical use by implementing a parallel task mechanism inside StarPU, and to implement and study parallel task schedulers from the literature. The model is validated by improving the programming and optimizing the execution of numerical applications on modern computing machines.
Garlet, Milani Luís Felipe. "Autotuning assisté par apprentissage automatique de tâches OpenMP." Thesis, Université Grenoble Alpes, 2020. http://www.theses.fr/2020GRALM022.
Modern computer architectures are highly complex, requiring great programming effort to obtain all the performance the hardware is capable of delivering. Indeed, while developers know potential optimizations, the only feasible way to tell which of them is faster on some platform is to test it. Furthermore, the many differences between two computer platforms, in the number of cores, cache sizes, interconnect, processor and memory frequencies, etc., make it very challenging to have the same code perform well across several systems. To extract the most performance, it is often necessary to fine-tune the code for each system. Consequently, developers adopt autotuning to achieve some degree of performance portability. This way, the potential optimizations can be specified once and, after testing each possibility on a platform, a high-performance version of the code is obtained for that particular platform. However, this technique requires tuning each application for each platform it targets. Not only is this time-consuming, but the autotuning and the real execution of the application differ: differences in the data may trigger different behaviour, or the interactions between threads may differ between the autotuning and the actual execution. This can lead to suboptimal decisions if the autotuner chooses a version that is optimal for the training but not for the real execution of the application. We propose the use of autotuning to select versions of the code relevant for a range of platforms; during the execution of the application, the runtime system identifies the best version to use with one of three policies we propose: Mean, Upper Confidence Bound, and Gradient Bandit. This decreases the training effort and enables the use of the same set of versions on different platforms without sacrificing performance. We conclude that the proposed policies can identify the version to use without incurring substantial performance losses.
Furthermore, when the user does not know enough details of the application to optimally configure the explore-then-commit policy used by other runtime systems, the more adaptable UCB policy can be used in its place.
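The runtime choice among pre-generated code versions described above is a multi-armed bandit problem. As a generic sketch only (a standard UCB1-style rule, not the thesis's actual policy; the reward is assumed to be higher for faster versions, e.g. negative runtime), version selection could look like:

```python
import math

def ucb_select(counts, means, c=2.0):
    """Pick the code-version index with the highest upper confidence bound.

    counts[i] -- how many times version i has been run
    means[i]  -- mean observed reward of version i (e.g. negative runtime)
    c         -- exploration constant (assumed value, tune per workload)
    """
    # Try every version at least once before trusting the statistics.
    for i, n in enumerate(counts):
        if n == 0:
            return i
    total = sum(counts)
    return max(range(len(counts)),
               key=lambda i: means[i] + math.sqrt(c * math.log(total) / counts[i]))

# Hypothetical usage: version 1 has never been tried, so it is selected first.
print(ucb_select([1, 0, 2], [0.5, 0.0, 0.3]))  # 1
```

After each execution, the runtime would update `counts` and `means` for the chosen version, so exploration naturally tapers off toward the fastest version on the current platform.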
Navet, Nicolas. "Évaluation de performances temporelles et optimisation de l'ordonnancement de tâches et messages." Vandoeuvre-les-Nancy, INPL, 1999. http://docnum.univ-lorraine.fr/public/INPL_T_1999_NAVET_N.pdf.
Moreaud, Stéphanie. "Mouvement de données et placement des tâches pour les communications haute performance sur machines hiérarchiques." PhD thesis, Université Sciences et Technologies - Bordeaux I, 2011. http://tel.archives-ouvertes.fr/tel-00635651.
Tyndiuk, Florence. "Référentiels Spatiaux des Tâches d'Interaction et Caractéristiques de l'Utilisateur influençant la Performance en Réalité Virtuelle." PhD thesis, Université Victor Segalen - Bordeaux II, 2005. http://tel.archives-ouvertes.fr/tel-00162297.
The first dimension concerns interaction tasks in virtual reality, more specifically manipulation and locomotion. The study and comparison of the spatial properties of real and virtual environments allow us to propose hierarchical models of these tasks, specifying the interaction configurations that are problematic for a user. Given these problematic configurations, a designer should constrain movement or assist interaction. The main difficulty we identified is adapting the interface to the user's spatial frames of reference (egocentric, exocentric).
The second dimension concerns the identification of user characteristics that influence performance depending on the task (locomotion vs. manipulation) and the interface (visually immersive vs. weakly immersive). For the weakly immersive configuration, a desktop monitor and a large screen are used, with the user's viewing angle kept constant. This study shows the impact on interaction performance of spatial abilities, field dependence-independence and video game experience, for different interfaces and tasks. In particular, we show that a large screen can sustain performance and minimize the influence of spatial abilities on it.
Gagné, Marie-Ève. "Performance à des tâches locomotrices et cognitives simultanées à la suite de traumatismes craniocérébraux légers." Doctoral thesis, Université Laval, 2019. http://hdl.handle.net/20.500.11794/36743.
Mild traumatic brain injury (mTBI), also known as concussion, can cause cognitive, sensorimotor and neurophysiological alterations which can be observed days and even months post-injury. Presently, mTBI clinical assessments often focus on either cognitive deficits or physical function separately. Evaluating these spheres of function in isolation, however, is not always ecological enough to capture how an individual will function in day-to-day life, where cognitive and sensorimotor demands are often simultaneous. Recent laboratory-based studies have suggested that dual-tasks combining motor and cognitive tasks are a promising way to detect differences between healthy participants and individuals having sustained mTBI. However, studies are very heterogeneous in terms of the tasks used and the variables measured; few of them focus on variables that could influence dual-task performance, and many are not readily transferable to a clinical setting. This doctoral thesis aimed at identifying a sensitive, ecologically valid dual-task protocol combining locomotor and cognitive demands to evaluate the residual effects of mTBI both in a laboratory environment (Study 1) and a clinical setting (Study 2). Using gait laboratory measures, the first study's objectives were (1) to compare performance on cognitive and gait parameters using different dual-tasks in healthy controls and young adults with mTBI, and (2) to examine the effect of the number of mTBIs sustained on dual-task performance. Exploratory correlations investigating relationships between neuropsychological testing and dual-task performance were also calculated. Eighteen participants with mTBI (13 women; age 21.89 ±3.76, on average 59.56 days post-injury ±24.12) and fifteen control participants (9 women; age 22.20 ±4.33) were recruited. A battery of neuropsychological tests was used to assess verbal fluency, executive function, memory and attention.
A physiological examination comprised assessment of grip strength, upper-limb coordination, and static and dynamic balance. Subjective symptoms were also assessed. A 9-camera motion analysis system (Vicon) was used to characterize gait. Participants were asked to walk along a 6-meter walkway under three locomotor conditions: (1) level walking, (2) walking and stepping over a deep obstacle (15 cm high x 15 cm deep), and (3) walking and stepping over a narrow obstacle (15 cm high x 3 cm deep), and three cognitive conditions: (1) counting backwards by 2s, (2) verbal fluency, and (3) a Stroop task. These tasks were performed in combination and as single tasks. No significant differences were found between groups on the neuropsychological tests, but the physiological examination revealed that the mTBI group had slower gait speed and was more unstable than the control group. The dual-task experiment showed that the mTBI group had slower gait speed, in both single and dual-tasks, and slower response times during dual-tasks. No combination of dual-task proved more sensitive for distinguishing the groups. In a clinical-like setting, the second study's objective was to compare mTBI individuals with healthy controls, using accessible technology to assess performance. Twenty participants with mTBI (10 women; age 22.10 ±2.97, 70.9 days post-injury ±22.31) and 20 control participants (10 women; age 22.55 ±2.72) were recruited. Subjective symptoms, history of impacts, a short neuropsychological battery, and subjective fatigue and concentration symptoms during the experiment were used to characterize the groups. Participants walked back and forth two times along a 10 m long corridor, under two locomotor conditions: (1) level walking or (2) walking and stepping over three obstacles, and two cognitive conditions: (1) counting backwards by 7s, and (2) verbal fluency. These tasks were performed in combination and as single tasks.
Only a stopwatch and an observation grid were used to assess performance. Participants reported being significantly less concentrated during the dual-task experiment. A significantly greater dual-task cost for gait speed was observed in the mTBI group, which demonstrates increased difficulty in dual-tasks even more than two months after the injury. This thesis highlights that dual-task protocols combining locomotor and cognitive tasks could represent a simple, practical and sensitive way for clinicians to detect residual alterations even months following mTBI. More work is needed to identify personal characteristics, such as mTBI history, that could influence performance. A reflection on how protocols could be developed according to clinical restrictions and needs is required in order to pursue dual-task research.
Bouvry, Pascal. "Placement de tâches sur ordinateurs parallèles à mémoire distribuée." Grenoble INPG, 1994. http://tel.archives-ouvertes.fr/tel-00005081.
The growing need for computing performance implies more complex computer architectures, and the lack of good programming environments for these machines must be filled. The goal is to find a compromise between portability and performance. The subject of this thesis is the problem of static allocation of task graphs onto distributed-memory parallel computers. This work is part of the INRIA-IMAG APACHE project and of the European SEPP-COPERNICUS project (Software Engineering for Parallel Processing). The undirected task graph is the chosen programming model. A survey of existing solutions to the scheduling and mapping problems is given, and the possibility of using directed task graphs after a clustering phase is underlined. An original solution is designed and implemented within a working programming environment. Three kinds of mapping algorithms are used: greedy, iterative and exact ones. Most developments concern tabu search and simulated annealing. These algorithms improve various objective functions (from the most simple and portable to the most complex and architecture-dependent). The weights of the task graphs can be tuned using a post-mortem analysis of traces. The use of tracing tools leads to a validation of the cost function and of the mapping algorithms. A benchmark protocol is defined and used. The tests are run on the Meganode (a 128-transputer machine), using VCR from the University of Southampton as a router, synthetic task graphs generated with ANDES of the ALPES project (developed by the performance evaluation team of the LGI-IMAG), and the Dominant Sequence Clustering of PYRROS (developed by Tao Yang and Apostolos Gerasoulis).
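Of the iterative mapping algorithms mentioned above, simulated annealing lends itself to a short sketch. The following is a generic, minimal version with a toy objective (cut communication edges plus a load-imbalance penalty), not the thesis's actual cost functions or neighbourhood; all parameter values are illustrative:

```python
import math
import random

def mapping_cost(mapping, edges, n_procs):
    """Toy objective: communication edges whose endpoints sit on different
    processors, plus a load-imbalance penalty (max load minus min load)."""
    comm = sum(1 for a, b in edges if mapping[a] != mapping[b])
    load = [0] * n_procs
    for p in mapping:
        load[p] += 1
    return comm + (max(load) - min(load))

def anneal_mapping(n_tasks, edges, n_procs, steps=5000, t0=5.0, seed=0):
    """Simulated annealing over task-to-processor mappings.

    Moves reassign one random task; worse moves are accepted with
    probability exp(-delta/t) under a linear cooling schedule.
    """
    rng = random.Random(seed)
    mapping = [rng.randrange(n_procs) for _ in range(n_tasks)]
    cost = mapping_cost(mapping, edges, n_procs)
    best, best_cost = mapping[:], cost
    for step in range(steps):
        t = t0 * (1 - step / steps) + 1e-9  # linear cooling
        task, proc = rng.randrange(n_tasks), rng.randrange(n_procs)
        old = mapping[task]
        mapping[task] = proc
        new_cost = mapping_cost(mapping, edges, n_procs)
        if new_cost <= cost or rng.random() < math.exp((cost - new_cost) / t):
            cost = new_cost
            if cost < best_cost:
                best, best_cost = mapping[:], cost
        else:
            mapping[task] = old  # revert the rejected move
    return best, best_cost

# Hypothetical graph: two chains of three tasks each, mapped onto 2 processors.
edges = [(0, 1), (1, 2), (3, 4), (4, 5)]
best, cost = anneal_mapping(6, edges, 2)
```

A tabu-search variant would replace the probabilistic acceptance with a short-term memory of forbidden moves; both explore the same mapping neighbourhood.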
Sergent, Marc. "Passage à l'echelle d'un support d'exécution à base de tâches pour l'algèbre linéaire dense." Thesis, Bordeaux, 2016. http://www.theses.fr/2016BORD0372/document.
The ever-increasing architectural complexity of supercomputers emphasizes the need for high-level parallel programming paradigms to design efficient, scalable and portable scientific applications. Among such paradigms, the task-based programming model abstracts away much of the architecture complexity by representing an application as a Directed Acyclic Graph (DAG) of tasks. In particular, the Sequential-Task-Flow (STF) model decouples the sequential task submission step from the parallel task execution step. While this model allows for further optimizations on the DAG of tasks at submission time, there is a key concern about sequential task submission hindering performance at scale. This thesis studies the scalability of the STF-based StarPU runtime system (developed at Inria Bordeaux in the STORM team) for large-scale 3D simulations at the CEA that use dense linear algebra solvers. To that end, we collaborated with the HiePACS team of Inria Bordeaux on the Chameleon software, a collection of linear algebra solvers on top of task-based runtime systems, to produce an efficient and scalable dense linear algebra solver on top of StarPU, scaling up to 3,000 cores and 288 GPUs of CEA-DAM's TERA-100 cluster.
Nesi, Xavier. "Prédiction de la performance et de son amélioration en cyclisme sur route." Lille 2, 2004. http://www.theses.fr/2004LIL2S036.
Constant, Olivier. "Une approche dirigée par les modèles pour l'intégration de la simulation de performance dans la conception de systèmes à base de composants." Pau, 2006. http://www.theses.fr/2006PAUU3013.
Predicting the performance of distributed systems is an issue that must be taken into account during the whole design and development process. This is especially true for systems built out of reused components, since such systems are more difficult to predict. Nevertheless, evaluating the performance of a system at the design phase requires specific techniques and skills. A solution consists in extending classical design languages to allow the transformation of design models into specific performance models. This thesis proposes a rigorous approach, based on MDE (Model-Driven Engineering) techniques, for the automatic transformation of UML 2.0 models of component-based systems into queueing network models for performance simulation. A metamodel is first defined to provide a precise conceptual framework for component-based executable models exploiting the expressivity of simulation languages. The metamodel, called CPM (Component Performance Metamodel), is then used to define a profile for UML 2.0. The profile is structured into several layers that group extensions and OCL (Object Constraint Language) queries and constraints. In order to resolve the semantic ambiguities of UML, the metamodel of UML 2.0 Sequence Diagrams is restricted to a subset close to the MSC (Message Sequence Charts) language. Using the formal semantics of MSC, the problem of model executability is tackled with additional constraints that allow defining OCL queries which can drive transformations based on the semantics of the models. The generated performance models can be executed by an industrial performance simulator.
Giguère, Kathleen. "Utilisation du temps de latence de réponse comme mesure de performance à des tâches de raisonnement inductif." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2001. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/MQ61344.pdf.
Full textTyndiuk, Florence. "Référentiels spatiaux des tâches d'interaction et caractéristiques de l'utilisateur influençant la performance en réalité virtuelle : Florence Tyndiuk." Bordeaux 2, 2005. http://www.theses.fr/2005BOR21269.
In virtual reality, to adapt interfaces to users, two dimensions must be studied: the task and the user. The first dimension deals with interaction tasks in virtual reality, more precisely manipulation and locomotion. The study and comparison of the spatial properties of virtual and real environments allow us to propose hierarchical task models that expose interaction situations that are problematic for users. Depending on these problematic situations, a designer should constrain movement or assist interaction. The main difficulty identified is adapting the interface to the user's spatial frames of reference (egocentric, exocentric). The second dimension concerns the identification of user characteristics that influence performance depending on the task (locomotion, manipulation) and the interface (visually immersive vs. weakly immersive). In the weakly immersive situation, the interface is used with two displays, a desktop monitor or a large display, with the field of view kept constant. This study shows the impact on performance of spatial abilities, field dependence and video game experience, across different tasks and interfaces. We also show that a large display can help sustain a high level of performance and decrease the impact of spatial abilities.
Dirand, Estelle. "Développement d'un système in situ à base de tâches pour un code de dynamique moléculaire classique adapté aux machines exaflopiques." Thesis, Université Grenoble Alpes (ComUE), 2018. http://www.theses.fr/2018GREAM065/document.
The exascale era will widen the gap between the data generation rate and the time to manage output and analysis in a post-processing fashion, dramatically increasing the end-to-end time to scientific discovery and calling for a shift toward new data processing methods. The in situ paradigm proposes to analyze data while they are still resident in the supercomputer memory, reducing the need for data storage. Several techniques already exist: executing simulation and analytics on the same nodes (in situ), using dedicated nodes (in transit), or combining the two approaches (hybrid). Most in situ techniques target simulations that cannot fully benefit from the ever-growing number of cores per processor, and they are not designed for the emerging manycore processors. Task-based programming models, on the other hand, are expected to become a standard on these architectures, but few task-based in situ techniques have been developed so far. This thesis studies the design and integration of a novel task-based in situ framework inside a task-based molecular dynamics code designed for exascale supercomputers. We benefit from the composability properties of the task-based programming model to implement the TINS hybrid framework. Analytics workflows are expressed as graphs of tasks that can in turn generate children tasks to be executed in transit, or interleaved with simulation tasks in situ. The in situ execution is performed thanks to an innovative dynamic helper core strategy that uses the work stealing concept to finely interleave simulation and analytics tasks inside a compute node, with a low overhead on the simulation execution time. TINS uses the Intel® TBB work stealing scheduler and is integrated into ExaStamp, a task-based molecular dynamics code. Various experiments have shown that TINS is up to 40% faster than state-of-the-art in situ libraries. Molecular dynamics simulations of up to 2 billion particles on up to 14,336 cores have shown that TINS is able to execute complex analytics workflows at high frequency with an overhead smaller than 10%.
Jamet, Mallaury. "Place du traitement cognitif des informations sensorielles dans la performance de la fonction d'équilibration en situation multi-tâches." Nancy 1, 2004. http://www.theses.fr/2004NAN10173.
Normal ageing, or senescence, is known to affect the three functions involved in balance control: the reception of sensory information; the integration of this information for the construction and programming, by the nervous system, of a set of motor instructions; and the execution of these orders by the muscular system. This work aimed to assess the place of cognitive processing of sensory information in balance performance. The results are discussed in terms of the age-dependent evolution of the relative places of automatic and cognitive processing in balance performance, in particular to validate the assumption that ageing induces a necessary increase in cognitive load, expressed as a larger mobilisation of attentional resources and cognitive operative processes which, at the same time, decrease in availability and potentiality.
Cursan, Anthony. "Impact de l'ostracisme au sein d'un groupe d'individus de même sexe ou de sexe opposé sur les performances à plusieurs tâches stéréotypées selon le genre." Thesis, Bordeaux, 2014. http://www.theses.fr/2014BORD0249/document.
With this research, we study the impact of ostracism (the feeling of social exclusion) on performance on several stereotyped tasks, depending on the sex of the source of ostracism. Much research has shown that ostracism can lead to decreased cognitive performance (Baumeister, Twenge & Nuss, 2002). Some studies have also pointed out that executing a task at the same time as members of the opposite sex may cause a decrease in performance if the task is negatively stereotyped toward the targeted person (Inzlicht & Ben-Zeev, 2000). Based on those studies, we expected that ostracism would lead to a performance decrease, and that this effect (for a negatively stereotyped task) would be more pronounced when coming from members of the opposite sex. We tested this hypothesis with four experiments: three on female samples (using a numeric task) and one on a male sample (using an affective task). We also propose a cumulative analysis of the experiments conducted on the female samples. Ultimately, we did not validate our hypothesis. On the contrary, we observed that only ostracism from same-sex persons led to a performance decrease on a negatively stereotyped task. We propose a number of leads for interpreting this repeatedly observed result.
Lai, Jau-Shyuam. "The interactive effects of formal and informal information exchanges on team performance and team satisfaction." Thesis, Cergy-Pontoise, Ecole supérieure des sciences économiques et commerciales, 2012. http://www.theses.fr/2012ESEC0004.
Full textThis study establishes a multilevel model to investigate how managers can increase team performance and team satisfaction by influencing team members' communications. I applied organizational communication theory and social exchange theory, along with social network theory, to a multilevel data set (consisting of 37 frontline sales teams in a Fortune 500 company). Two-step interviews and a survey study were completed to test this model. This project makes a substantive contribution by providing an in-depth look at the effects of task characteristics and network features (i.e., centrality, tie strength) on formal and informal information exchanges. It showed that formal information exchange led to positive team performance when team diversity in work tenure was low, while informal information exchange led to positive team performance when team diversity in work tenure was high. Informal information exchange also had a direct impact on team satisfaction. This study also provides an insightful explanation of how managerial influences (such as clan control and a competitive climate) shift individual formal and informal communications by molding the work environment.
Sun, Huichao. "L'amélioration de la performance du produit par l'intégration des tâches d'utilisation dès la phase de conception : une approche de conception comportementale." Phd thesis, Université de Strasbourg, 2012. http://tel.archives-ouvertes.fr/tel-00712072.
Full textCaillou, Nicolas. "Effet des instructions sur l'apprentissage d'une habileté de coordination bimanuelle." Montpellier 1, 2003. http://www.theses.fr/2003MON14001.
Full textDeschênes, Annie. "Modèle animal des dysfonctions schizophréniques du cortex préfrontal : performance de rats avec injections systémiques de phencyclidine (PCP) dans deux tâches axées sur des changements de règles." Master's thesis, Université Laval, 2003. http://hdl.handle.net/20.500.11794/44341.
Full textGama, Pinheiro Vinicius. "The management of multiple submissions in parallel systems : the fair scheduling approach." Thesis, Grenoble, 2014. http://www.theses.fr/2014GRENM042/document.
Full textWe study the problem of scheduling in parallel and distributed systems with multiple users. New platforms for parallel and distributed computing offer very large computing power, which makes it possible to contemplate the resolution of complex interactive applications. Nowadays, it is still difficult to use this power efficiently due to the lack of resource management tools. The work done in this thesis lies in this context: to analyse and develop efficient algorithms for managing computing resources shared among multiple users. We analyze scenarios with many submissions issued by multiple users over time. These submissions contain one or more jobs, and the set of submissions is organized in successive campaigns. No job from a campaign can start until all the jobs from the previous campaign are completed. Each user is interested in minimizing the sum of flow times of the campaigns. In the first part of this work, we define a theoretical model for Campaign Scheduling under restrictive assumptions and we show that, in the general case, it is NP-hard. For the single-user case, we show that a ρ-approximation scheduling algorithm for the (classic) parallel job scheduling problem is also a ρ-approximation for the Campaign Scheduling problem. For the general case with k users, we establish a fairness criterion inspired by time sharing. Then, we propose FairCamp, a scheduling algorithm which uses campaign deadlines to achieve fairness among users between consecutive campaigns. We prove that FairCamp increases the flow time of each user by a factor of at most kρ compared with a machine dedicated to the user. We also prove that FairCamp is a ρ-approximation algorithm for the maximum stretch. We compare FairCamp to First-Come-First-Served (FCFS) by simulation. We show that, compared with FCFS, FairCamp reduces the maximum stretch by up to 3.4 times. The difference is significant in systems used by many (k > 5) users. Our results show that, rather than just individual, independent jobs, campaigns of jobs can be handled by the scheduler efficiently and fairly.
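The campaign constraint described in this abstract (a campaign's jobs may only start once every job of the previous campaign has completed) can be sketched in a few lines. This is a toy greedy list-scheduling simulation, not the thesis's algorithm; the function name and the single-user, release-at-zero setup are illustrative assumptions.

```python
# Toy sketch of the Campaign Scheduling model (not the thesis code):
# a campaign's jobs may only start once every job of the previous
# campaign of the same user has completed.
import heapq

def schedule_campaigns(campaigns, machines):
    """campaigns: list of lists of job durations for one user.
    Greedy list scheduling on `machines` identical machines,
    respecting the campaign precedence constraint.
    Returns the completion time of each campaign (all released at t=0)."""
    free = [0.0] * machines          # next free time per machine
    heapq.heapify(free)
    release = 0.0                    # earliest start for current campaign
    completions = []
    for jobs in campaigns:
        finish = release
        for d in jobs:
            start = max(heapq.heappop(free), release)
            end = start + d
            finish = max(finish, end)
            heapq.heappush(free, end)
        completions.append(finish)
        release = finish             # next campaign waits for this one
    return completions

print(schedule_campaigns([[3, 2, 2], [4]], machines=2))  # [4.0, 8.0]
```

The second campaign's job cannot start before t=4 even though a machine is free at t=3, which is exactly the precedence structure the thesis exploits.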
Lanctôt, Myriam. "Étude de l'influence des antécédents psychologiques, neurologiques et scolaires sur la performance à des tâches de l'attention et d'inhibition cognitive chez des adultes atteints d'un traumatisme craniocérébral léger." Thèse, Université du Québec à Trois-Rivières, 2004. http://depot-e.uqtr.ca/4701/1/000111853.pdf.
Full textDelaplace-Reisser, Chantal. "Contribution à l'analyse du crawl du nageur non expert : étude des paramètres spacio-temporels, des parties nagées et non nagées et de la coordination de nage en fonction du niveau d'expertise, de la distance et du genre." Montpellier 1, 2004. http://www.theses.fr/2004MON14005.
Full textFelicioni, Flavia. "Stabilité et performance des systèmes distribués de contrôle-commande." Thesis, Vandoeuvre-les-Nancy, INPL, 2011. http://www.theses.fr/2011INPL013N/document.
Full textThe main contributions of this thesis relate to the analysis, synthesis and design of control systems sharing communication and computational resources. The research focuses on control systems where the feedback loops are closed over communication networks which transmit the information provided to their nodes by sensors, actuators and controllers. The shared resource in this scenario is the network. Some of the results remain valid when the resource is a processor placed locally with respect to several controllers executing their algorithms on it. In any of the preceding scenarios, the control loops must contend for the shared resource. The limited capacity of the resource can cause delays and packet losses when information is transmitted. These effects can degrade the control system performance and even destabilize it. The first part of this thesis contributes to the performance analysis of specific classes of systems and to the design of robust controllers for network characteristics modeled by Quality of Service parameters. A series of methods to assist the control systems engineer is provided. In the second part, a contribution to the CoDesign approach is made by integrating control system synthesis and design techniques with rules defining the communication policy used to manage real-time tasks sharing a limited resource. By putting the scheduling of controller task instances in correspondence with their sampling periods, the proposed policy results in discrete-time varying systems. The stabilization problem of these systems is solved with methods based on the solvability of Lie algebras. Specifically, the proposed methodology provides adaptive controllers.
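The degradation caused by lost control packets, which this abstract describes, can be illustrated with a minimal scalar example. This is a generic toy model under assumed dynamics (plant x⁺ = a·x + u, hold-last-input on loss), not the systems or methods studied in the thesis.

```python
# Toy networked control loop with packet loss: when a control packet
# is dropped, the actuator holds the previous input, and performance degrades.
def run_loop(drops, steps=8, a=1.2, k=0.9):
    """Scalar unstable plant x+ = a*x + u; controller sends u = -k*a*x.
    `drops` is the set of steps whose control packet is lost."""
    x, u = 1.0, 0.0
    traj = []
    for t in range(steps):
        if t not in drops:
            u = -k * a * x          # fresh control input received
        x = a * x + u               # on a drop, the stale u is applied
        traj.append(round(x, 4))
    return traj

print(run_loop(drops=set()))        # no loss: geometric decay (factor 0.12)
print(run_loop(drops={1, 2, 3}))    # burst loss: state drifts before recovering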
Roels, Belle. "Systemic and muscular adaptations to different hypoxic training methods." Montpellier 1, 2005. http://www.theses.fr/2005MON14002.
Full textGaud, Fabien. "Étude et amélioration de la performance des serveurs de données pour les architectures multi-cœurs." Phd thesis, Université de Grenoble, 2010. http://tel.archives-ouvertes.fr/tel-00563868.
Full textBelen, Lucien. "Les pédaliers expérimentaux : effets sur la demande métabolique, sur les réponses cardio-ventilatoires et sur la performance de deux prototypes au cours d'épreuves d'effort." Montpellier 1, 2006. http://www.theses.fr/2006MON14004.
Full textMöller, Nathalie. "Adaptation de codes industriels de simulation en Calcul Haute Performance aux architectures modernes de supercalculateurs." Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLV088.
Full textFor many years, the stability of the architecture paradigm has facilitated the performance portability of large HPC codes from one generation of supercomputers to another. The announced end of Moore's Law, which governs the progress of microprocessor manufacturing, breaks this model and requires new efforts on the software side. Code modernization, based on algorithms well adapted to future systems, is mandatory. This modernization relies on well-known principles such as computation concurrency, or degree of parallelism, and data locality. However, implementing these principles in large industrial applications, which are often the result of years of development effort, turns out to be far more difficult than expected. The contributions of this thesis are twofold. On the one hand, we explore a methodology of software modernization based on the concept of proto-applications and compare it with the direct approach, while optimizing two simulation codes developed in a similar context. On the other hand, we focus on identifying the main challenges for the architecture, the programming models and the applications. The two chosen application fields are Computational Fluid Dynamics and Computational Electromagnetics.
Durny, Annick. "Contribution du domaine psychomoteur à la compréhension de la performance sportive : différences inter-individuelles dans des tâches mettant en jeu les aptitudes psychomotrices chez des sportifs d'âges, de disciplines et de niveaux de spécialisation différents." Paris 10, 1996. http://www.theses.fr/1996PA100001.
Full textOur work concerns the contribution of psychomotor abilities to sport performance. It draws on the view of motor performance presented by Fleishman. We study inter-individual differences in tasks that call on psychomotor abilities, among sportsmen of different ages who practice sports at different levels of expertise. Factorial analysis helps to distinguish the mechanical and psychomotor abilities. It shows that, for heterogeneous groups, performance on the experimental tasks depends principally on general factors common to all tests. When groups are more homogeneous, more specific factors appear, common to fewer tests. Our discussion concerns the relationships between psychomotor abilities and sport performance, and particularly their interest for performance prediction and talent identification in sports.
Gueunet, Charles. "Calcul haute performance pour l'analyse topologique de données par ensembles de niveaux." Electronic Thesis or Diss., Sorbonne université, 2019. http://www.theses.fr/2019SORUS120.
Full textTopological Data Analysis requires efficient algorithms to deal with the continuously increasing size and level of detail of data sets. In this manuscript, we focus on three fundamental topological abstractions based on level sets: merge trees, contour trees and Reeb graphs. We propose three new efficient parallel algorithms for the computation of these abstractions on multi-core shared-memory workstations. The first algorithm developed in the context of this thesis is based on multi-thread parallelism for the contour tree computation. A second algorithm revisits the reference sequential algorithm for this abstraction and is based on local propagations expressible as parallel tasks. This new algorithm is in practice twice as fast in sequential as the reference algorithm designed in 2000 and offers one order of magnitude speedups in parallel. A last algorithm, also relying on task-based local propagations, computes a more generic abstraction: the Reeb graph. Contrary to concurrent approaches, these methods provide the augmented version of these structures, hence enabling the full extent of level-set based analysis. The algorithms presented in this manuscript result today in the fastest implementations available to compute these abstractions. This work has been integrated into an open-source platform: the Topology Toolkit (TTK).
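The merge tree that this abstract mentions can be built sequentially by sweeping vertices in scalar order and merging connected components with a union-find. This is a hedged sketch of that classic textbook construction, not the thesis's task-based algorithm; the function name and output convention (one arc per component merge) are assumptions for illustration.

```python
# Sketch of the classic sequential merge-tree (join tree) construction:
# sweep vertices by increasing scalar value and merge sublevel-set
# components with a union-find, recording where components join.
def merge_tree(values, edges):
    """values: scalar per vertex; edges: list of (u, v) pairs.
    Returns arcs (component_minimum, join_vertex): each records the
    minimum of a component and the vertex where it merges into another."""
    n = len(values)
    order = sorted(range(n), key=lambda v: values[v])
    uf = list(range(n))
    def find(x):
        while uf[x] != x:
            uf[x] = uf[uf[x]]      # path halving
            x = uf[x]
        return x
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    minimum = {v: v for v in range(n)}  # component root -> its minimum
    seen = [False] * n
    arcs = []
    for v in order:
        seen[v] = True
        minimum[v] = v
        for w in adj[v]:
            if seen[w]:
                rw, rv = find(w), find(v)
                if rw != rv:
                    arcs.append((minimum[rw], v))  # component joins at v
                    uf[rw] = rv
    return arcs

# "W"-shaped field on a path graph: two minima (0 and 2) join at vertex 1.
print(merge_tree([0, 2, 1, 3], [(0, 1), (1, 2), (2, 3)]))
```

On this example the two local minima merge at the saddle vertex, and the final arc attaches the merged component to the global maximum; the thesis's contribution is performing such propagations as parallel tasks.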
Martsinkevich, Tatiana V. "Improving message logging protocols towards extreme-scale HPC systems." Thesis, Paris 11, 2015. http://www.theses.fr/2015PA112215.
Full textExisting petascale machines have a Mean Time Between Failures (MTBF) on the order of several hours. It is predicted that the MTBF of future systems will decrease. Therefore, applications that will run on these systems need to be able to tolerate frequent failures. Currently, the most common way to do this is the global application checkpoint/restart scheme: if some process fails, the whole application rolls back to its last checkpointed state and re-executes from that point. This solution will become infeasible at large scale due to its energy costs and inefficient resource usage. Therefore, fine-grained failure containment is a strongly required feature for fault tolerance techniques that target large-scale executions. In the context of message-passing MPI applications, message logging fault tolerance protocols provide good failure containment as they require the restart of only one process or, in some cases, a bounded number of processes. However, existing logging protocols suffer from a number of issues which prevent their usage at large scale. In particular, they tend to have high failure-free overhead because they usually need to store reliably any nondeterministic events happening during the execution of a process in order to correctly restore its state in recovery. Next, as message logs are usually stored in volatile memory, logging may incur a large memory footprint, especially in communication-intensive applications. This is particularly important because future exascale systems are expected to have less memory available per core. Another important trend in HPC is the switch from MPI-only applications to hybrid programming models like MPI+threads and MPI+tasks, in response to the increasing number of cores per node. This gives opportunities for employing fault tolerance solutions that handle faults at the level of threads/tasks.
Such an approach has even better failure containment than message logging protocols, which handle failures at the level of processes. The work in this dissertation thus consists of three parts. First, we present a hierarchical log-based fault tolerance solution, called Scalable Pattern-Based Checkpointing (SPBC), for mitigating process fail-stop failures. The protocol leverages a new deterministic model called channel-determinism and a new always-happens-before relation for the partial ordering of events in the application. The protocol is scalable, has low overhead in failure-free execution and does not require logging any events; it provides perfect failure containment and has a fully distributed recovery. Second, to address the memory limitation problem on compute nodes, we propose to use additional dedicated resources, or logger nodes. All the logs that do not fit in the memory of compute nodes are sent to the logger nodes and kept in their memory. In a series of experiments we show that not only is this approach feasible but, combined with a hierarchical logging scheme like SPBC, logger nodes can be an ultimate solution to the problem of memory limitation for logging protocols. Third, we present a log-based fault tolerance protocol for hybrid applications adopting the MPI+tasks programming model. The protocol is used to tolerate detected uncorrected errors (DUEs) that happen during the execution of a task. Normally, a DUE causes the system to raise an exception, which leads to an application crash; the application then has to restart from a checkpoint. In the proposed solution, we combine task checkpointing with message logging in order to support task re-execution. Such task-level failure containment can be beneficial in large-scale executions because it avoids the more expensive process-level restart. We demonstrate the advantages of this protocol on the example of hybrid MPI+OmpSs applications.
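The core idea behind the message logging discussed above, persisting each delivered message so that a single failed process can be replayed deterministically without restarting the others, can be sketched in a few lines. This is an illustrative toy (class and method names are hypothetical), not the SPBC protocol or any MPI API.

```python
# Minimal sketch of receiver-side message logging with deterministic replay:
# the reception order is the nondeterministic event that must be logged so
# that one crashed process can be re-executed in isolation.
class LoggedProcess:
    def __init__(self):
        self.state = 0
        self.log = []            # message log (volatile memory in practice)

    def deliver(self, msg, replay=False):
        if not replay:
            self.log.append(msg)  # record the nondeterministic reception order
        self.state += msg         # deterministic state transition

    def recover(self):
        """Simulate a local restart: roll back and replay logged messages."""
        self.state = 0            # back to initial state (or a checkpoint)
        for msg in list(self.log):
            self.deliver(msg, replay=True)

p = LoggedProcess()
for m in (3, 5, 2):
    p.deliver(m)
before = p.state
p.recover()                       # crash + replay of this process only
assert p.state == before          # state restored without touching peers
```

The memory cost of `self.log` growing with communication volume is precisely the footprint problem the dissertation's logger nodes are meant to absorb.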
Tremblay, Marc-André. "Caractérisation des effets des immobilisations orthopédiques sur les performances de conduite automobile lors de tâches simulées." Mémoire, Université de Sherbrooke, 2007. http://savoirs.usherbrooke.ca/handle/11143/3954.
Full textBoillot, Lionel. "Contributions à la modélisation mathématique et à l'algorithmique parallèle pour l'optimisation d'un propagateur d'ondes élastiques en milieu anisotrope." Thesis, Pau, 2014. http://www.theses.fr/2014PAUU3043/document.
Full textThe most common method of Seismic Imaging is RTM (Reverse Time Migration), which depends on wave propagation simulations in the subsurface. We focused on a 3D elastic wave propagator in anisotropic media, more precisely TTI (Tilted Transverse Isotropic). We worked directly in the Total code DIVA (Depth Imaging Velocity Analysis), which is based on a discretization by the Discontinuous Galerkin method and the Leap-Frog scheme, and is developed for intensive parallel computing, i.e. HPC (High Performance Computing). We chose to target two contributions in particular. Although they required very different skills, they share the same goal: to reduce the computational cost of the simulation. On the one hand, classical boundary conditions like PML (Perfectly Matched Layers) are unstable in TTI media. We have proposed a formulation of a stable ABC (Absorbing Boundary Condition) in anisotropic media. The technique is based on slowness curve properties, which gives our approach an original character. On the other hand, the initial parallelism, based on a domain decomposition and communication by message passing through the MPI library, leads to load imbalance and thus poor parallel efficiency. We fixed this issue by replacing the parallelism paradigm with task-based programming through a runtime system. This PhD thesis has been carried out in the framework of the research action DIP (Depth Imaging Partnership) between the Total oil company and Inria.
Albinet, Cédric. "Vieillessement, activité physique et apprentissage moteur : effets de la complexité de la tâche." Toulouse 3, 2004. http://www.theses.fr/2004TOU30161.
Full textHuman aging is characterized by a decrease in both efficiency and speed of cognitive and sensory-motor processes. However, the dynamic of ageing is not the same for all the individuals. Some factors, linked to the way of life such as regular physical exercise, could modulate the aging effects. The aim of this dissertation is to examine under which conditions maintaining a physically active life-style could compensate for the age-related declines in the learning of a new motor-skill. More precisely, the effects of task complexity and the conditions under which the learning must be expressed were studied. Three experiments studied the interactive effects of age and regular physical exercise on the learning of visual-motor pointing-movements on a moving target. The target displacements were governed by probabilistic rules and different target sizes implied different accuracy constraints. Learning was assessed in a reactive task where participants' response speed was stressed (experiment 1 and 2), and in a prediction task without any temporal constraint (experiment 3). The main results revealed a beneficial effect of physical exercise on this skill learning, selective to the elderly and to conditions of high constraints on response execution. The age-related effect would be due to speed-stress conditions and to response complexity. The positive effects of physical exercise on the cognitive function of older adults would be related to motor efficiency improvement
Reggoug, Khalil. "Biais de sous-estimation de la catégorie "autres causes" : effet de la structure de la tâche et pertinence du modèle normatif pour l'évaluation de la performance des sujets." Aix-Marseille 1, 2000. http://www.theses.fr/2000AIX10042.
Full textGontier, Emilie. "Analyse de la spécificité temporelle des indices électrophysiologiques corticaux rapportés aux performances comportementales dans des tâches de discrimination de durées." Rouen, 2008. http://www.theses.fr/2008ROUEL609.
Full textOur research focuses on the participation of cortical structures in temporal processing and tries to answer the following questions: Is the involvement of the prefrontal cortex in decision-making specific to temporal discrimination? Does the involvement of the fronto-parietal network depend on the cognitive load and the nature of the stimulus to discriminate? Is there a hemispheric asymmetry in temporal processing? Our data have shown that 1) the involvement of prefrontal structures in decision-making is not specific to temporal processing; 2) the encoding and comparison of temporal information rely on a parieto-frontal functional loop, which determines the temporal performance of subjects; 3) the parieto-frontal network shows plasticity with respect to cognitive load; 4) temporal discrimination depends on the integrity of the functions assured by the right hemisphere.
Perret, Patrick. "Etude développementale de la variabilité des performances dans des tâches de raisonnement inclusif : rôle des niveaux de connaissance et de l'inhibition." Aix-Marseille 1, 2001. http://www.theses.fr/2001AIX10073.
Full textGalvaing, Marie-Pierre. "Effets d'audience et de coaction dans la tâche de Stroop : une explication attentionnelle de la facilitation sociale." Clermont-Ferrand 2, 2000. http://www.theses.fr/2000CLF20020.
Full textGeorges, Fanny. "L'effet des compliments de capacité et d'effort sur la motivation et la performance des élèves à une tâche cognitive." Thesis, Grenoble, 2011. http://www.theses.fr/2011GRENH003.
Full textIn line with Mueller and Dweck's (1998) framework, this thesis work aimed at studying the effects of praise (or attributional feedback) for effort or ability on pupils' goals, implication, causal attributions and academic performance. Beyond replication, our goal was to examine the interaction effect between praise and failure attributions on performance. In a series of four studies, fifth graders received ability or effort praise for their success on a first set of exercises of moderate difficulty and indicated their goal preference. After a second, difficult set of exercises, pupils received negative feedback and were asked about their task implication and their failure attributions. Finally, a third set of exercises of middle difficulty allowed us to re-evaluate their performance. None of the results observed by Mueller and Dweck (1998) appeared. However, the results pointed out the role of causal attributions in the relation between praise and performance. One of these studies, conducted at the same time in France and China, revealed different effects of praise depending on the culture. Two additional studies allowed us to test our hypotheses about the non-replication of the results. The first dealt with the differentiated development of the understanding of the notions of effort and ability. The second was methodological and concerned the effect of simple positive feedback given jointly with praise. The results support the first hypothesis.
Reboul, Lucie. "La construction de parcours de travail en santé et en compétences : le rôle des régulateurs dans la médiation des parcours de travail des personnels au sol d'une compagnie aérienne." Thesis, Paris, HESAM, 2020. http://www.theses.fr/2020HESAC016.
Full textThis research was carried out with ground staff and their supervisors in an airline company. It aims to report, from an ergonomic approach, on the processes of weakening or construction of these employees' work paths in a context of multiple and continuous transformations (digitalization of the service relationship of customer service agents, internal and external flexibility of baggage-handler teams, overall demographic ageing, etc.). This thesis pursues the hypothesis that the work of regulators (first-level managers in charge of assigning tasks in ground staff schedules) mediates the construction of health/work relations among ground staff, acting as a vector of either path wear and tear or path construction. It mobilizes methodological tools specific to the demography of work and the ergonomics of activity, combining diachronic approaches (revealing multiple temporalities with individual, collective and managerial dimensions) and synchronic ones (articulation or tension between these temporalities in the activity). The results reveal the multiplicity of health indicators and their temporalities mobilized by regulators to organize work, the individual and collective prevention strategies developed in the course of their experience, and the role of temporal constraints in the possibilities of implementing them.
Diego, Maza William David. "Contrôle de trafic et gestion de la qualité de service basée sur les mécanismes IP pour les réseaux LTE." Thesis, Télécom Bretagne, 2016. http://www.theses.fr/2016TELB0406/document.
Full textThe mobile data landscape is changing rapidly and mobile operators are today facing the daunting challenge of providing cheap and valuable services to ever more demanding customers. As a consequence, operators actively seek cost reduction as well as Quality of Service (QoS) preservation. Current 3GPP standards for LTE/EPC networks offer fine-grained QoS (at the per-flow level), which inherits many characteristics of legacy telco networks. In spite of its good performance, such a QoS model proves costly and cumbersome, and in the end it is very rarely deployed, giving way to basic best-effort hegemony. This thesis aims at improving QoS in mobile networks through cost-effective solutions. To this end, after an evaluation of the impact and cost of the signaling associated with the standard QoS model, alternative schemes are proposed, such as an IP-centric QoS model (per aggregate) inspired by the DiffServ approach widely used in fixed IP networks. This model provides simple, efficient and cost-effective QoS management at the IP level, with a performance level similar to standardized solutions. However, as it requires enhancements in the eNB, this scheme cannot be expected in mobile networks for a rather long time. Thus, we introduce Slo-Mo, a lightweight implicit mechanism for managing QoS from a distant point when the congestion point (e.g. the eNB) is not able to do it. Slo-Mo creates a self-adaptive bottleneck which adjusts dynamically to the available resources, taking advantage of TCP's native flow control. Straightforward QoS management at the IP level is then performed in the Slo-Mo node, leading to an enhanced customer experience at marginal cost and in the short term.
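The shaping principle behind a controlled bottleneck like the one this abstract describes can be illustrated with a token bucket, the generic building block for rate limiting at a node. This fixed-rate sketch is not the Slo-Mo mechanism itself (which self-adapts its rate via TCP flow control); the class and parameters are illustrative assumptions.

```python
# Illustrative token-bucket rate shaper: packets pass only if enough
# tokens (bytes of credit) have accumulated; otherwise they are queued
# or dropped, and TCP's flow control slows the sender down.
class TokenBucket:
    def __init__(self, rate, burst):
        self.rate = rate          # tokens (bytes) added per second
        self.burst = burst        # bucket capacity (max burst size)
        self.tokens = burst
        self.last = 0.0

    def allow(self, size, now):
        """Return True if a packet of `size` bytes may pass at time `now`."""
        elapsed = now - self.last
        self.tokens = min(self.burst, self.tokens + elapsed * self.rate)
        self.last = now
        if self.tokens >= size:
            self.tokens -= size
            return True
        return False              # packet held back: sender will back off

bucket = TokenBucket(rate=1000, burst=1500)   # ~1 kB/s, one-MTU burst
print(bucket.allow(1500, now=0.0))   # True: initial burst credit available
print(bucket.allow(1500, now=0.1))   # False: only 100 bytes of credit refilled
```

A Slo-Mo-like node would additionally adjust `rate` on the fly from observed downstream conditions instead of fixing it, which is what makes the bottleneck self-adaptive.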
Bendib, Zébida. "Tâche d'assemblage d'un objet technique et aides au travail : analyse des difficultés et de l'organisation de l'action d'élèves de lycées d'enseignement professionnel." Paris 10, 1986. http://www.theses.fr/1986PA100055.
Full textIn the present research, we analysed the different ways of performing an assembly task (a measurement checker) with the assistance of work documents (technical drawing and assembly sequence), in order to characterize the different underlying modes of cognitive functioning. The analysis was founded upon the cognitive competence model (Vermersch, 1983), based on the plurality of functioning registers and on the following hypothesis: the observed behaviors are the symptom of different forms of knowledge of the content and of varied modalities of action organization. The analysis of the results allowed us to observe, with regard to: a) the content of behavior: four groups of knowledge, taking the task properties into account to a greater or lesser degree and characterized by different modalities of cognitive functioning; b) the organization of behavior: four types of procedure characterizing different modalities of action along the temporal dimension, ranging from the possibility of anticipation to an organization by successive "groupings". Crossing these with the groups of knowledge defined above shows no convergence, which was corroborated by the analysis of how the work documents were used. The confrontation of these results revealed 8 strategies linked to the different forms of interaction between technological knowledge, document use, assembly planning and the problems met by the subjects. These strategies range from "groping" to a more systematic procedure. c) The aids: these, in particular the assembly sequence, were not of great help. They were consulted mainly when there were problems. Coordination between the documents, and between the documents and the real parts, was seldom made systematically; such coordination was rather a problem for the subjects. d) The time limit: the framing of the time limit seems to have restricted the subjects' scope. However, action planning, though limited, was made possible through the use of the drawing and the writing of the assembly sequence.
Delafontaine, Arnaud. "Contrainte biomécanique unilatérale versus contrainte biomécanique bilatérale : rééquilibrage des acapacités fonctionnelles et amélioration de la performance dans une tâche locomotrice." Thesis, Paris 11, 2013. http://www.theses.fr/2013PA113008.
Full textIn the literature, the rehabilitation of hemiplegic patients has shown that the motor performance of the affected limb is improved when both limbs are mobilised in a symmetrical movement. It has been suggested that it is easier for the nervous system to adapt to a symmetrical bilateral command. The aim of this dissertation is to test the validity of these results in gait initiation (GI). Handicap was simulated by blocking the ankle unilaterally or bilaterally with a strap or an orthosis. Results showed that in the presence of a constraint, the electromyographic activity and the kinematics of both the postural preparation and the step execution phase of GI declined. Furthermore, motor performance was also perturbed. However, "local" differences appeared according to the localisation of the constraint, reflecting the adaptation of the motor command. One result nonetheless deserves particular emphasis: as in hemiplegic patients, motor performance (i.e. centre-of-mass velocity at the end of the first step) was higher with a "bilateral constraint, hypomobility of both ankles" than with a "unilateral constraint, hypomobility of the stance ankle". In the dissertation, the results are discussed in terms of the adaptation of the motor command to unilaterally and bilaterally induced biomechanical constraints. More specifically, we discuss how rebalancing the functional capacity of both legs should make it possible to increase motor performance. These results open new perspectives in the domain of functional rehabilitation.
Drouin-Germain, Anne. "La relation entre le sentiment de présence et la performance d'adolescents à une tâche d'attention présentée en réalité virtuelle." Thèse, Université du Québec à Trois-Rivières, 2014. http://depot-e.uqtr.ca/7388/1/030775493.pdf.
Full textMarchand, Andrée-Anne. "Altérations des performances sensorimotrices dans une tâche de pointage cervicale chez des patients atteints de céphalées de tension." Thèse, Université du Québec à Trois-Rivières, 2014. http://depot-e.uqtr.ca/7345/1/030673398.pdf.
Full textCirou, Bertrand. "Expression d'algorithmes scientifiques et gestion performante de leur parallélisme avec PRFX pour les grappes de SMP." Bordeaux 1, 2005. http://www.theses.fr/2005BOR12994.
Full textDewier, Agnès. "Les processus cognitifs mis en oeuvre dans l'interaction homme-ordinateur: l'influence du niveau d'expérience et des caractéristiques de la tâche sur la performance." Doctoral thesis, Universite Libre de Bruxelles, 1991. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/212984.
Full textSimon-Fiacre, Caroline. "Médiations sémiotiques en situation de co-résolution de problèmes : effets sur les stratégies et performances d'enfants de CP et de CE2 dans les tâches d'exploration ou d'encodage d'un parcours." Aix-Marseille 1, 2005. http://www.theses.fr/2005AIX10019.
Full text