A ready-made bibliography on the topic "Traces distribuées"

Create a correct reference in APA, MLA, Chicago, Harvard, and many other styles.

See the lists of current articles, books, dissertations, abstracts, and other scholarly sources on the topic "Traces distribuées".

Next to every work in the bibliography there is an "Add to bibliography" button. Use it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a .pdf file and read its abstract online, when the relevant parameters are provided in the work's metadata.

Journal articles on the topic "Traces distribuées":

1. Valle, Natalia La. "Temporalités distribuées et partagées. Une approche écologique des activités familiales dans le foyer". Tracés, no. 22 (21 June 2012): 43–64. http://dx.doi.org/10.4000/traces.5428.

2. Clément, Eric, and Michel Dagenais. "Traces Synchronization in Distributed Networks". Journal of Computer Systems, Networks, and Communications 2009 (2009): 1–11. http://dx.doi.org/10.1155/2009/190579.

Abstract:
This article proposes a novel approach to synchronize a posteriori the detailed execution traces from several networked computers. It can be used to debug and investigate complex performance problems in systems where several computers exchange information. When the distributed system is under study, detailed execution traces are generated locally on each system using an efficient and accurate system level tracer, LTTng. When the tracing is finished, the individual traces are collected and analysed together. The messaging events in all the traces are then identified and correlated in order to estimate the time offset over time between each node. The time offset computation imprecision, associated with asymmetric network delays and operating system latency in message sending and receiving, is amortized over a large time interval through a linear least square fit over several messages covering a large time span. The resulting accuracy is such that it is possible to estimate the clock offsets in a distributed system, even with a relatively low volume of messages exchanged, to within the order of a microsecond while having a very low impact on the system execution, which is sufficient to properly order the events traced on the individual computers in the distributed system.
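
The abstract above describes estimating per-node clock offsets by a linear least-squares fit over many message timestamp pairs. The following Python sketch illustrates that general idea only; it is not the LTTng-based method from the article, and the synthetic drift, offset, and delay values are invented for the example.

```python
import numpy as np

def estimate_clock_model(t_send, t_recv):
    """Fit t_recv ~ a * t_send + b by linear least squares.

    a approximates the relative clock drift and b the clock offset.
    Fitting over many messages spread across a long time span amortizes
    the noise from asymmetric network delays and OS latency.
    """
    a, b = np.polyfit(t_send, t_recv, deg=1)
    return a, b

# Synthetic example: node B's clock runs 50 ppm fast and is 2.5 ms ahead of node A.
rng = np.random.default_rng(0)
t_send = np.sort(rng.uniform(0.0, 60.0, size=200))      # send times on node A's clock (s)
delay = rng.uniform(50e-6, 250e-6, size=t_send.size)    # one-way network delay (s)
t_recv = t_send * (1 + 50e-6) + 2.5e-3 + delay          # receive times on node B's clock (s)

drift, offset = estimate_clock_model(t_send, t_recv)
print(f"estimated drift factor: {drift:.8f}, estimated offset: {offset * 1e3:.3f} ms")
```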

3. Xing, Lida, Martin Lockley, Anthony Romilio, Tao Wang, and Liu Chang. "Dinosaur Tracks from the Lower Jurassic Lufeng Formation of Northern Central Yunnan, China". Biosis: Biological Systems 3, no. 1 (1 April 2022): e004. http://dx.doi.org/10.37819/biosis.003.01.0169.

Abstract:
An increasing number of theropod-dominated tracksites have been reported from the Jurassic and Cretaceous of China. These include a significant number from the Lower Jurassic of the Lufeng Basin, famous for its Lufengosaurus fauna and known for a typical Lower Jurassic globally-distributed tetrapod footprint biochron. Here we report another localized theropod track occurrence, typical of the various scattered tracksites from the Lufeng Formation. The tracks are medium-sized tridactyl tracks from the basal part of the Zhangjia'ao Member, Lufeng Formation, which show an unusually wide divarication between the traces of digits III and IV, suggesting several possible interpretations.

4. Mukhutdinova, Alfiya R., Alexander V. Bolotov, Oleg V. Anikin, and Mikhail A. Varfolomeev. "Algorithm for estimating boundary conditions of a distributed tracer for application in a single-well tracer test". Georesursy 24, no. 4 (20 December 2022): 75–81. http://dx.doi.org/10.18599/grs.2022.4.6.

Abstract:
An important tool in determining residual oil saturation today is the single-well tracer test, as the preferred method for assessing the potential for using enhanced oil recovery methods (EOR) and developing pilot projects. The success of the test performed directly depends on the optimal choice of the tracer composition, which will contribute to the qualitative determination of the parameters required in the calculation of the residual oil saturation of the formation. To assess the boundary conditions for the applicability of the tracer in the field, the kinetic and thermodynamic properties of tracers are considered under various reservoir conditions of the field. Based on the results of this work, an algorithm for assessing the applicability of the tracer for reservoirs in a wide range of salinity and temperatures is presented.

5. Carlini, Emanuele, Alessandro Lulli, and Laura Ricci. "Model driven generation of mobility traces for distributed virtual environments with TRACE". Concurrency and Computation: Practice and Experience 30, no. 20 (28 July 2017): e4235. http://dx.doi.org/10.1002/cpe.4235.

6. McCarthy, I. D., and S. P. Hughes. "Multiple tracer studies of bone uptake of 99mTc-MDP and 85Sr". American Journal of Physiology-Heart and Circulatory Physiology 256, no. 5 (1 May 1989): H1261–H1265. http://dx.doi.org/10.1152/ajpheart.1989.256.5.h1261.

Abstract:
Multiple tracer outflow dilution studies were performed on the normal canine tibia. In all cases 125I-labeled albumin was used as a vascular tracer. In one series of experiments 99mTc-labeled methylene diphosphonate and [14C]sucrose were used as test tracers, and in a second series 85Sr and 22Na were used. A bolus of three tracers was injected into the tibial nutrient artery, and fractional concentrations appearing in the ipsilateral femoral vein were measured for a period of 5 min. A distributed model, containing parameters for capillary and bone permeability and apparent volumes of distribution of interstitial fluid, was fitted to these data. It was found that there was no discrimination between movement of 85Sr or 22Na from interstitial fluid space into bone. Transcapillary exchange does not appear to be a significant barrier to exchange between blood and bone surfaces.

7. Pemper, Richard R., Michael J. Flecker, Vernie C. McWhirter, and Donald W. Oliver. "Hydraulic fracture evaluation with multiple radioactive tracers". GEOPHYSICS 53, no. 10 (October 1988): 1323–33. http://dx.doi.org/10.1190/1.1442410.

Abstract:
For many years, wireline tracer surveys have been used to determine the height of fractures created during hydraulic stimulation procedures. A recent advancement in fracture evaluation technology has been to tag different stages of a fracture operation with multiple radioactive tracers, providing the capability to discern between created and propped fracture heights in one or more zones of interest. In this research, a wireline instrumentation and data analysis system is implemented to identify and separate the individual yields from multiple radioactive tracers, with an additional feature that determines whether the tracer material is inside of the borehole or distributed throughout the created fracture zone. A single postfracture pass of the logging instrument is used to accumulate gamma ray spectra at each 7.6 cm interval along a borehole. A weighted least‐squares spectrum unfolding algorithm calculates the radioactive intensities as a function of depth, while the peak‐to‐Compton down‐scatter ratio determines the proximity of the tracer material to the wellbore. Field examples illustrate the effectiveness of the system for the evaluation of multistage fracture operations.

8. Polatoğlu, Ahmet, and Cahit Yeşilyaprak. "Using and Testing Camera Sensors with Different Devices at Cosmic Ray Detection". Erzincan Üniversitesi Fen Bilimleri Enstitüsü Dergisi 16, no. 2 (24 August 2023): 590–97. http://dx.doi.org/10.18185/erzifbed.1167041.

Abstract:
Cosmic rays (CR) are high-energy charged particles that reach the Earth from space. CR detection methods and studies have been progressing rapidly since the beginning of the 20th century. One of these methods is the use of digital cameras with Charge Coupled Device (CCD) and Complementary Metal Oxide Semiconductor (CMOS) sensors. Mobile phone cameras or webcams offer an easy-to-use and economical system for CR measurement. The sensors are exposed to CR during a long exposure, and CRs leave traces in the background. Cosmic particle tracks are then separated from the background noise and can be classified. Making the traces of the particles visible is important for understanding the subject; in this context, traces of particles such as electrons, muons, and alphas can be seen in cloud chamber experiments. With the sensor and camera technology developed in recent years, CR traces can be detected easily enough to be made visible. There are many software packages and international projects that detect CR using the CMOS sensors in cell phone cameras. In this study, related projects, programs and studies were surveyed, and CR traces captured with the help of the Cosmic-Ray Extremely Distributed Observatory (CREDO) and Cosmic Ray Finder (CRF) software, using a webcam and a mobile phone camera CMOS sensor, are presented. Connections are drawn to astrophysical events coinciding with previously detected particle images.

9. Nguyen, Tung T., Yashdip S. Pannu, Cynthia Sung, Robert L. Dedrick, Stuart Walbridge, Martin W. Brechbiel, Kayhan Garmestani, Markus Beitzel, Alexander T. Yordanov, and Edward H. Oldfield. "Convective distribution of macromolecules in the primate brain demonstrated using computerized tomography and magnetic resonance imaging". Journal of Neurosurgery 98, no. 3 (March 2003): 584–90. http://dx.doi.org/10.3171/jns.2003.98.3.0584.

Abstract:
Object. Convection-enhanced delivery (CED), the delivery and distribution of drugs by the slow bulk movement of fluid in the extracellular space, allows delivery of therapeutic agents to large volumes of the brain at relatively uniform concentrations. This mode of drug delivery offers great potential for the treatment of many neurological disorders, including brain tumors, neurodegenerative diseases, and seizure disorders. An analysis of the treatment efficacy and toxicity of this approach requires confirmation that the infusion is distributed to the targeted region and that the drug concentrations are in the therapeutic range. Methods. To confirm accurate delivery of therapeutic agents during CED and to monitor the extent of infusion in real time, albumin-linked surrogate tracers that are visible on images obtained using noninvasive techniques (iopanoic acid [IPA] for computerized tomography [CT] and Gd—diethylenetriamine pentaacetic acid for magnetic resonance [MR] imaging) were developed and investigated for their usefulness as surrogate tracers during convective distribution of a macromolecule. The authors infused albumin-linked tracers into the cerebral hemispheres of monkeys and measured the volumes of distribution by using CT and MR imaging. The distribution volumes measured by imaging were compared with tissue volumes measured using quantitative autoradiography with [14C]bovine serum albumin coinfused with the surrogate tracer. For in vivo determination of tracer concentration, the authors examined the correlation between the concentration of the tracer in brain homogenate standards and CT Hounsfield units. They also investigated the long-term effects of the surrogate tracer for CT scanning, IPA-albumin, on animal behavior, the histological characteristics of the tissue, and parenchymal toxicity after cerebral infusion. Conclusions. Distribution of a macromolecule to clinically significant volumes in the brain is possible using convection. The spatial dimensions of the tissue distribution can be accurately defined in vivo during infusion by using surrogate tracers and conventional imaging techniques, and it is expected that it will be possible to determine local concentrations of surrogate tracers in voxels of tissue in vivo by using CT scanning. Use of imaging surrogate tracers is a practical, safe, and essential tool for establishing treatment volumes during high-flow interstitial microinfusion of the central nervous system.

10. Ala-aho, Pertti, Doerthe Tetzlaff, James P. McNamara, Hjalmar Laudon, and Chris Soulsby. "Using isotopes to constrain water flux and age estimates in snow-influenced catchments using the STARR (Spatially distributed Tracer-Aided Rainfall–Runoff) model". Hydrology and Earth System Sciences 21, no. 10 (9 October 2017): 5089–110. http://dx.doi.org/10.5194/hess-21-5089-2017.

Abstract:
Abstract. Tracer-aided hydrological models are increasingly used to reveal fundamentals of runoff generation processes and water travel times in catchments. Modelling studies integrating stable water isotopes as tracers are mostly based in temperate and warm climates, leaving catchments with strong snow influences underrepresented in the literature. Such catchments are challenging, as the isotopic tracer signals in water entering the catchments as snowmelt are typically distorted from incoming precipitation due to fractionation processes in seasonal snowpack. We used the Spatially distributed Tracer-Aided Rainfall–Runoff (STARR) model to simulate fluxes, storage, and mixing of water and tracers, as well as estimating water ages in three long-term experimental catchments with varying degrees of snow influence and contrasting landscape characteristics. In the context of northern catchments the sites have exceptionally long and rich data sets of hydrometric data and – most importantly – stable water isotopes for both rain and snow conditions. To adapt the STARR model for sites with strong snow influence, we used a novel parsimonious calculation scheme that takes into account the isotopic fractionation through snow sublimation and snowmelt. The modified STARR setup simulated the streamflows, isotope ratios, and snow pack dynamics quite well in all three catchments. From this, our simulations indicated contrasting median water ages and water age distributions between catchments brought about mainly by differences in topography and soil characteristics. However, the variable degree of snow influence in catchments also had a major influence on the stream hydrograph, storage dynamics, and water age distributions, which was captured by the model. Our study suggested that snow sublimation fractionation processes can be important to include in tracer-aided modelling for catchments with seasonal snowpack, while the influence of fractionation during snowmelt could not be unequivocally shown. Our work showed the utility of isotopes to provide a proof of concept for our modelling framework in snow-influenced catchments.

Doctoral dissertations on the topic "Traces distribuées":

1. Ripoche, Gabriel. "Sur les traces de Bugzilla : vers une analyse automatisée des interactions pour l'étude des pratiques collectives distribuées". Paris 11, 2006. http://www.theses.fr/2006PA112076.

Abstract:
The aim of this thesis is to establish some of the theoretical, methodological and practical foundations of a "computer-supported sociology" of distributed collective practices (DCP). The development of new information and communication technologies has led to the emergence of organizational forms whose main characteristics are their large-scale distribution (spatial and temporal, but also socio-cognitive) and the central use of mediated communication channels, which leave persistent "traces" of collective activity. Our work focuses on exploiting these traces, especially traces of natural-language interaction, as a means of studying the underlying activity of the collective, in order to better characterize "what is going on" in the collective. The large amounts of data available lead us to elaborate methods relying on automated analyses and capable of handling the linguistic content of these traces. Our approach consisted in 1) designing a model capable of representing distributed collective interaction and the relations between such interactions and the collective's activity, 2) evaluating the usefulness of such a model for the study of DCP through an experimental phase, and 3) studying the feasibility of automating the data processing needed by the model through the use of machine learning and language processing technologies. Our study focused on data collected in the Bugzilla open-source collective.
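
The thesis above applies machine learning and language processing to the linguistic content of interaction traces. Purely as an illustrative sketch of that kind of pipeline (not the thesis's actual model, categories, or data, which are all invented here), a minimal scikit-learn text classifier over issue-tracker comments might look like this:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented training data: comments from an issue tracker, labelled by the
# kind of coordination act they perform (the categories are hypothetical).
comments = [
    "Can you reproduce this on the latest build?",
    "Attaching a patch that fixes the null pointer.",
    "Duplicate of bug 1234, closing.",
    "I see the same crash on Linux with gcc 4.0.",
]
labels = ["request", "contribution", "triage", "report"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(comments, labels)

print(model.predict(["Here is a patch for the crash on startup."]))
```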

2. Cassé, Clement. "Prévision des performances des services Web en environnement Cloud". Electronic Thesis or Diss., Toulouse 3, 2023. http://www.theses.fr/2023TOU30268.

Abstract:
Cloud computing has changed how software is developed and deployed. Nowadays, Cloud applications are designed as rapidly evolving distributed systems that are hosted in third-party data centres and potentially scattered around the globe. This shift of paradigms has also had a considerable impact on how software is monitored: Cloud applications have grown to reach the scale of hundreds of services, and state-of-the-art monitoring quickly faced scaling issues. In addition, monitoring tools now also have to address distributed-systems failures, like partial failures, configuration inconsistencies, networking bottlenecks or even noisy neighbours. In this thesis we present an approach based on a new source of telemetry that has been growing in the realm of Cloud application monitoring. By leveraging the recent OpenTelemetry standard, we present a system that converts "distributed tracing" data into a hierarchical property graph. With such a model, it becomes possible to highlight the actual topology of Cloud applications, such as the physical distribution of their workloads across multiple data centres. The goal of this model is to expose the behaviour of Cloud providers to the developers maintaining and optimizing their application. Then, we present how this model can be used to solve some prominent distributed-systems challenges: the detection of inefficient communications and the anticipation of hot points in a network of services. We tackle both of these problems with a graph-theory approach. Inefficient composition of services is detected with the computation of the flow hierarchy index; a proof of concept is presented based on a real OpenTelemetry instrumentation of a zonal Kubernetes cluster. In a last part, we address hot-point detection in a network of services through the perspective of graph centrality analysis. This work is supported by a simulation program that has been instrumented with OpenTelemetry in order to emit tracing data. These traces have been converted into a hierarchical property graph, and a study of centrality algorithms allowed us to identify choke points. Both of the approaches presented in this thesis build on the state of the art in Cloud application monitoring. They propose a new usage of distributed tracing, not only for investigation and debugging, but for automatic detection and reaction on a full system.
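
As a rough illustration of the two graph measures mentioned in this abstract (flow hierarchy for detecting inefficient, cyclic service composition, and centrality for anticipating hot points), the Python sketch below computes both with NetworkX on an invented service-call graph; it is not the thesis's OpenTelemetry pipeline, only the kind of analysis applied to the graph built from trace data.

```python
import networkx as nx

# Toy directed call graph: an edge u -> v means service u calls service v.
calls = [
    ("frontend", "cart"), ("frontend", "catalog"),
    ("cart", "pricing"), ("catalog", "pricing"),
    ("pricing", "currency"), ("currency", "pricing"),  # a cycle: inefficient composition
    ("cart", "db"), ("catalog", "db"),
]
G = nx.DiGraph(calls)

# Flow hierarchy: fraction of edges that do not take part in any cycle.
# A value below 1.0 signals cyclic (potentially inefficient) service composition.
print("flow hierarchy:", nx.flow_hierarchy(G))

# Betweenness centrality: services that many call paths pass through
# are candidate choke points / hot spots.
centrality = nx.betweenness_centrality(G)
for service, score in sorted(centrality.items(), key=lambda kv: -kv[1]):
    print(f"{service:10s} {score:.3f}")
```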

3. Pilourdault, Julien. "Scalable algorithms for monitoring activity traces". Thesis, Université Grenoble Alpes (ComUE), 2017. http://www.theses.fr/2017GREAM040/document.

Abstract:
In this thesis, we study scalable algorithms for monitoring activity traces. In several domains, monitoring is a key ability to extract value from data and improve a system. This thesis aims to design algorithms for monitoring two kinds of activity traces. First, we investigate temporal data monitoring. We introduce a new kind of interval join that features scoring functions reflecting the degree of satisfaction of temporal predicates. We study these joins in the context of batch processing: we formalize the Ranked Temporal Join (RTJ), which combines collections of intervals and returns the k best results. We show how to exploit the nature of temporal predicates and the properties of their associated scored semantics to design TKIJ, an efficient query-evaluation approach on a distributed Map-Reduce architecture. Our extensive experiments on synthetic and real datasets show that TKIJ outperforms state-of-the-art competitors and provides very good performance for n-ary RTJ queries on temporal data. We also propose a preliminary study to extend our work on TKIJ to stream processing. Second, we investigate monitoring in crowdsourcing. We advocate the need to incorporate motivation in task assignment. We propose to study an adaptive approach that captures workers' motivation during task completion and uses it to revise task assignment accordingly across iterations. We study two variants of motivation-aware task assignment: Individual Task Assignment (Ita) and Holistic Task Assignment (Hta). First, we investigate Ita, where we assign tasks to workers individually, one worker at a time. We model Ita and show it is NP-Hard. We design three task assignment strategies that exploit various objectives. Our live experiments study the impact of each strategy on overall performance. We find that different strategies prevail for different performance dimensions. In particular, the strategy that assigns random and relevant tasks offers the best task throughput, and the strategy that assigns tasks that best match a worker's compromise between task diversity and task payment yields the best outcome quality. Our experiments confirm the need for adaptive motivation-aware task assignment. Then, we study Hta, where we assign tasks to all available workers, holistically. We model Hta and show it is both NP-Hard and MaxSNP-Hard. We develop efficient approximation algorithms with provable guarantees. We conduct offline experiments to verify the efficiency of our algorithms. We also conduct online experiments with real workers and compare our approach with various non-adaptive assignment strategies. We find that our approach offers the best compromise between performance dimensions, thereby assessing the need for adaptability.
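
To make the Ranked Temporal Join idea concrete, here is a small self-contained Python sketch: it joins two collections of intervals under an overlap predicate, scores each pair by how strongly the predicate is satisfied, and keeps the k best pairs. The scoring function and the data are illustrative assumptions, not the TKIJ algorithm or its Map-Reduce implementation.

```python
import heapq

def overlap_score(a, b):
    """Score in [0, 1]: length of the overlap divided by length of the union."""
    lo, hi = max(a[0], b[0]), min(a[1], b[1])
    if hi <= lo:
        return 0.0  # the overlap predicate is not satisfied at all
    return (hi - lo) / (max(a[1], b[1]) - min(a[0], b[0]))

def ranked_temporal_join(left, right, k):
    """Naive nested-loop RTJ: score every pair and keep the k best."""
    scored = ((overlap_score(a, b), a, b) for a in left for b in right)
    return heapq.nlargest(k, scored, key=lambda item: item[0])

left = [(0, 10), (5, 8), (20, 30)]
right = [(4, 12), (25, 26), (40, 50)]
for score, a, b in ranked_temporal_join(left, right, k=3):
    print(a, b, f"score={score:.2f}")
```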

4. Vigouroux, Xavier. "Analyse distribuée de traces d'exécution de programmes parallèles". Lyon, École normale supérieure (sciences), 1996. http://www.theses.fr/1996ENSL0016.

Abstract:
Monitoring consists in generating trace information during the execution of a parallel program in order to detect performance problems. The amount of information generated by very large parallel machines makes classical analysis tools unusable. This thesis solves this problem by distributing the trace information over several files stored on several sites, the files being readable in parallel. The manipulation of these files to obtain consistent information is the basis of a client-server software system through which clients request already-filtered information about an execution. This client-server architecture is extensible (users can create their own clients) and modular. We have, moreover, already created several novel clients: a hierarchical client, a sound-based client, automatic detection of performance problems, a filtering front end to classical tools, and the integration of a 3D tool.

5. Emeras, Joseph. "Workload Traces Analysis and Replay in Large Scale Distributed Systems". Thesis, Grenoble, 2013. http://www.theses.fr/2013GRENM081/document.

Abstract:
The author did not provide an abstract in French.

High Performance Computing is preparing the era of the transition from Petascale to Exascale. Distributed computing systems are already facing new scalability problems due to the increasing number of computing resources to manage. It is now necessary to study these systems in depth and comprehend their behaviors, strengths and weaknesses to better build the next generation. The complexity of managing users' applications on the resources led to the analysis of the workload the platform has to support, in order to provide users with an efficient service. The need for workload comprehension has led to the collection of traces from production systems and to the proposal of a standard workload format. These contributions enabled the study of numerous such traces, and also led to the construction of several models based on the statistical analysis of the different workloads from the collection. Until recently, existing workload traces did not enable researchers to study the consumption of resources by the jobs in a temporal way. This is now changing with the need for characterization of jobs' consumption patterns. In the first part of this thesis we propose a study of existing workload traces. Then we contribute an observation of cluster workloads that considers the jobs' resource consumptions over time. This highlights specific and unexpected patterns in the usage of resources by users. Finally, we propose an extension of the former standard workload format that makes it possible to add such temporal consumptions without losing the benefit of the existing works. Experimental approaches based on workload models have also served the goal of distributed systems evaluation. Existing models describe the average behavior of observed systems. However, although the study of average behaviors is essential for the understanding of distributed systems, the study of critical cases and particular scenarios is also necessary. Such a study would give a more complete view and understanding of the performance of resource and job management. In the second part of this thesis we propose an experimental method for performance evaluation of distributed systems based on the replay of extracts of production workload traces. These extracts, replaced in their original context, make it possible to experiment with configuration changes of the system under an online workload and to observe the results of the different configurations. Our technical contribution in this experimental approach is twofold. We propose a first tool to construct the environment in which the experimentation will take place, and a second set of tools that automate the experiment setup and replay the trace extract within its original context. Together, these contributions enable a better knowledge of HPC platforms. As future work, the approach proposed in this thesis will serve as a basis to further study larger infrastructures.
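
A job record in such a trace pairs static fields with, in the proposed extension, a time series of resource consumption. The sketch below is a hedged illustration only: the field layout, the '|' separator, and the memory-profile encoding are assumptions made up for this example, not the actual standard workload format or the thesis's extension.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class JobRecord:
    job_id: int
    submit_time: int                       # seconds since the start of the trace
    run_time: int                          # seconds
    processors: int
    memory_profile: List[Tuple[int, int]]  # hypothetical (timestamp, bytes) samples

def parse_record(line: str) -> JobRecord:
    """Parse one whitespace-separated job record with an assumed '|'-delimited profile."""
    head, _, profile = line.partition("|")
    job_id, submit, run, procs = (int(x) for x in head.split()[:4])
    samples = [tuple(int(v) for v in s.split(":")) for s in profile.split()]
    return JobRecord(job_id, submit, run, procs, samples)

example = "42 3600 120 16 | 3600:1048576 3660:2097152 3720:1572864"
print(parse_record(example))
```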

6. Emeras, Joseph. "Analyse et rejeu de traces de charge dans les grands systèmes de calcul distribués". PhD thesis, Université de Grenoble, 2013. http://tel.archives-ouvertes.fr/tel-00940055.

Abstract:
High Performance Computing is preparing the era of the transition from Petascale to Exascale. Distributed computing systems are already facing new scalability problems due to the increasing number of computing resources to manage. It is now necessary to study these systems in depth and comprehend their behaviors, strengths and weaknesses to better build the next generation. The complexity of managing users' applications on the resources led to the analysis of the workload the platform has to support, in order to provide users with an efficient service. The need for workload comprehension has led to the collection of traces from production systems and to the proposal of a standard workload format. These contributions enabled the study of numerous such traces, and also led to the construction of several models based on the statistical analysis of the different workloads from the collection. Until recently, existing workload traces did not enable researchers to study the consumption of resources by the jobs in a temporal way. This is now changing with the need for characterization of jobs' consumption patterns. In the first part of this thesis we propose a study of existing workload traces. Then we contribute an observation of cluster workloads that considers the jobs' resource consumptions over time. This highlights specific and unexpected patterns in the usage of resources by users. Finally, we propose an extension of the former standard workload format that makes it possible to add such temporal consumptions without losing the benefit of the existing works. Experimental approaches based on workload models have also served the goal of distributed systems evaluation. Existing models describe the average behavior of observed systems. However, although the study of average behaviors is essential for the understanding of distributed systems, the study of critical cases and particular scenarios is also necessary. Such a study would give a more complete view and understanding of the performance of resource and job management. In the second part of this thesis we propose an experimental method for performance evaluation of distributed systems based on the replay of extracts of production workload traces. These extracts, replaced in their original context, make it possible to experiment with configuration changes of the system under an online workload and to observe the results of the different configurations. Our technical contribution in this experimental approach is twofold. We propose a first tool to construct the environment in which the experimentation will take place, and a second set of tools that automate the experiment setup and replay the trace extract within its original context. Together, these contributions enable a better knowledge of HPC platforms. As future work, the approach proposed in this thesis will serve as a basis to further study larger infrastructures.

7. Lerman, Benjamin. "Vérification et Spécification des Systèmes Distribués". PhD thesis, Université Paris-Diderot - Paris VII, 2005. http://tel.archives-ouvertes.fr/tel-00322322.

Abstract:
This thesis is set in the context of the automatic verification of distributed systems. It addresses the specification problem for such systems, which consists in defining a logical formalism for describing properties of system behaviours. Such a formalism should make it easy to express common properties (reachability, safety, mutual exclusion, liveness, etc.), and the verification of these properties should also be easy. The goal is therefore a compromise between expressive power and ease of use.

We then turn to the modelling of concurrent systems, again looking for a compromise between the realism of the models and the ease of verification. The models studied in this work are asynchronous automata, which model concurrent processes communicating through shared memory.

Finally, the thesis addresses the controller synthesis problem. Given a system that is specified incompletely, hence non-deterministically, and that interacts with an environment, the goal is to compute automatically how to restrict its behaviour so that it satisfies a given specification (whatever the actions of the environment). This problem is formulated in terms of games. In the distributed case, the games naturally have several players. In this setting, most results are negative: it is undecidable whether such a system can be controlled or not. This thesis proves that certain properties of the communication architecture guarantee decidability for every regular specification.

8. Kassem, Zein Oussama. "Indexation-découverte et composition de services distribués". Lorient, 2005. http://www.theses.fr/2005LORIS047.

Abstract:
Currently, companies, organizations and service providers need to publish their services and to make them available to their clients. The clients need to discover the services and to select those that satisfy their requirements. In this context, services must be described as precisely as possible so that a client can find the desired service. Describing services by properties therefore becomes important for querying and selecting services, and it must be taken into account at the indexing and publication stage. The providers (companies, organizations, etc.) can use the service properties to publish their services. On the other side, to discover a service, clients must have approaches at their disposal that allow them to discover services by querying their properties and by assigning a desired value to each property. In this context, many approaches have been developed, such as the ODP trader and the OMG CORBA trader, UDDI for Web services, etc. In this dissertation, we propose a metadata model for service description. It can be used by clients/servers to query/publish a service. It contains three levels of description: static properties, behavior and interface. We use automata to describe the service behavior. We design and implement a trader based on ontologies. It permits service discovery in a flexible and expressive way by using first-order logic. We extend this trader to address the behavior of a service and its interface. Based on this description, under several views, we can get information about the properties, the interface and the behavior of the stored services. This allows us to compose services in order to create novel ones. In this context, we propose an approach to service composition that allows us to combine and assemble services to satisfy clients' requests. The extended trader also comprises an approach for adapting services to clients' requirements.

9. Rabo, Hannes. "Distributed Trace Comparisons for Code Review: A System Design and Practical Evaluation". Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-280707.

Abstract:
Ensuring the health of a distributed system with frequent updates is complicated. Many tools exist to improve developers’ comprehension and productivity in this task, but room for improvement exists. Based on previous research within request flow comparison, we propose a system design for using distributed tracing data in the process of reviewing code changes. The design is evaluated from the perspective of system performance and developer productivity using a critical production system at a large software company. The results show that the design has minimal negative performance implications while providing a useful service to the developers. They also show a positive but statistically insignificant effect on productivity during the evaluation period. To a large extent, developers adopted the tool into their workflow to explore and improve system understanding. This use case deviates from the design target of providing a method to compare changes between software versions. We conclude that the design is successful, but more optimization of functionality and a higher rate of adoption would likely improve the effects the tool could have.
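
The request-flow comparison this design builds on can be sketched very simply: collect the set of caller-to-callee edges observed in traces from two software versions and diff the sets. The Python snippet below is a toy illustration only; the span structure and the helper names are assumptions, not the system evaluated in the thesis.

```python
from typing import Dict, Iterable, Set, Tuple

Edge = Tuple[str, str]  # (caller span name, callee span name)

def call_edges(traces: Iterable[Dict]) -> Set[Edge]:
    """Flatten traces into the set of caller -> callee edges.

    Each trace is assumed to be a dict with a 'spans' list, where a span
    is a dict with 'id', 'name' and an optional 'parent' id (a simplified
    stand-in for real distributed-tracing data).
    """
    edges: Set[Edge] = set()
    for trace in traces:
        spans = {s["id"]: s for s in trace["spans"]}
        for span in trace["spans"]:
            parent = spans.get(span.get("parent"))
            if parent is not None:
                edges.add((parent["name"], span["name"]))
    return edges

def compare_versions(old_traces, new_traces):
    old, new = call_edges(old_traces), call_edges(new_traces)
    return {"added": new - old, "removed": old - new}

# Example: the new version replaces a cache lookup with a call to a new service.
v1 = [{"spans": [{"id": 1, "name": "checkout"},
                 {"id": 2, "name": "cache.get", "parent": 1}]}]
v2 = [{"spans": [{"id": 1, "name": "checkout"},
                 {"id": 2, "name": "fraud.check", "parent": 1}]}]
print(compare_versions(v1, v2))
```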

10. Hoffman, Kari Lee. "Coordinated memory trace reactivation across distributed neural ensembles in the primate neocortex". Diss., The University of Arizona, 2003. http://hdl.handle.net/10150/289891.

Abstract:
The process of forming a long-lasting memory may involve the selective linking together of neural representations stored widely throughout neocortex. The successful binding together of these disparate representations may require their coordinated reactivation while the cortex is 'offline' i.e., not engaged in processing external stimuli. This hypothesis was tested through simultaneous extracellular recording of 28-99 cells over four sites in the macaque neocortex. The recordings were conducted as the monkey performed repetitive reaching tasks, and in rest periods immediately preceding and following the task. In motor, somatosensory and parietal cortex (but not prefrontal cortex), the task-related neural activity patterns within and across regions were similar to the activity patterns seen afterwards, during the rest epoch. Moreover, the temporal sequences of neural ensemble activity that occurred during task performance were preserved in subsequent rest. The preservation of correlation structure and temporal sequencing are consistent with the reactivation of a memory trace and not merely the persistence of a fixed activity pattern. The observed memory trace reactivation was coordinated over large expanses of neocortex, confirming a fundamental tenet of the trace replay theory of memory consolidation.

Books on the topic "Traces distribuées":

1. Ousterhout, John K., and United States National Aeronautics and Space Administration, eds. A trace-driven analysis of name and attribute caching in a distributed system. Berkeley, Calif.: Computer Science Division (EECS), University of California, 1992.

2. Wang, Zhongliang. Trace data analysis and job scheduling simulation for large-scale distributed systems. Ottawa: National Library of Canada, 1994.

3. Farrar, Jerry W., and Geological Survey (U.S.), eds. Report on the U.S. Geological Survey's evaluation program for standard reference samples distributed in October 1994: T-131 (trace constituents), T-133 (trace constituents), M-132 (major constituents), N-43 (nutrients), N-44 (nutrients), P-23 (low ionic strength) and Hg-19 (mercury). Golden, Colo.: Dept. of the Interior, U.S. Geological Survey, 1995.

4. Farrar, Jerry W., and Geological Survey (U.S.), eds. Report on the U.S. Geological Survey's evaluation program for standard reference samples distributed in April 1993: T-123 (trace constituents), T-125 (trace constituents), M-126 (major constituents), N-38 (nutrients), N-39 (nutrients), P-20 (low ionic strength), and Hg-16 (mercury). Golden, Colo.: U.S. Geological Survey, 1993.

5. Farrar, Jerry W., and Geological Survey (U.S.), eds. Report on the U.S. Geological Survey's evaluation program for standard reference samples distributed in April 1992: T-119 (trace constituents), M-122 (major constituents), N-34 (nutrients), N-35 (nutrients), Hg-14 (mercury). Golden, Colo.: U.S. Geological Survey, 1992.

Book chapters on the topic "Traces distribuées":

1. Carlini, Emanuele, Alessandro Lulli, and Laura Ricci. "TRACE: Generating Traces from Mobility Models for Distributed Virtual Environments". In Euro-Par 2016: Parallel Processing Workshops, 272–83. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-58943-5_22.

2. Gatev, Radoslav. "Observability: Logs, Metrics, and Traces". In Introducing Distributed Application Runtime (Dapr), 233–52. Berkeley, CA: Apress, 2021. http://dx.doi.org/10.1007/978-1-4842-6998-5_12.

3. Cérin, Christophe, and Michel Koskas. "Mining Traces of Large Scale Systems". In Distributed and Parallel Computing, 132–38. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/11564621_15.

4. Rava, Gabriella. "Traces and Their (In)significance". In Frontiers in Sociology and Social Research, 269–81. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-11756-5_17.

Abstract:
The concept of trace is useful for a semiotic reflection upon what is left behind. Similar to the concepts of index and footprint, traces are traditionally described as already signs, or more precisely as something recognized as a sign (Violi, Riv Ital Filos Linguaggio, 2016, http://www.rifl.unical.it/index.php/rifl/article/view/365; Mazzucchelli, Riv Ital Filos Linguaggio, 2015, http://www.rifl.unical.it/index.php/rifl/article/view/312). This act of recognition is fundamentally dependent on a community’s work of interpretation, in order to actualize a potential narration lying in the trace, but what if the promised sense is not grasped? Adopting the notion of intentionality (Greimas and Courtés, Sémiotique: dictionnaire raisonné de la théorie du langage. Hachette, Paris, 1979) to include partially unconscious traces within the sphere of semiotic investigation, the article considers the possibility to conceive traces as paradoxical signs standing for nothing, i.e., signs of insignificance (Leone, On insignificance. The loss of meaning in the post-material age. Routledge, 2020). Through the analysis of digital traces and trolling, (in)significance is disputed on the basis of a proposed paradigm, within which even such seemingly accidental traces may possess profound significance within a digital network constructed of distributed subjectivity. One conclusion drawn from the example is that strong normative claims about what may qualify as significant often conceal an ideologically charged agenda. For this reason in particular, a detailed account of digital traces should be the highest priority of semiotics today.

5. Hu, Xiao, Pengyong Ma, Shuming Chen, Yang Guo, and Xing Fang. "TraceDo: An On-Chip Trace System for Real-Time Debug and Optimization in Multiprocessor SoC". In Parallel and Distributed Processing and Applications, 806–17. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11946441_73.

6. Lindsey, Quentin, and Vijay Kumar. "Distributed Construction of Truss Structures". In Springer Tracts in Advanced Robotics, 209–25. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-36279-8_13.

7. Bedillion, Mark, and William Messner. "Distributed Manipulation with Rolling Contact". In Springer Tracts in Advanced Robotics, 453–68. Berlin, Heidelberg: Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/978-3-540-45058-0_27.

8. Murphey, Todd D., and Joel W. Burdick. "Feedback Control for Distributed Manipulation". In Springer Tracts in Advanced Robotics, 487–503. Berlin, Heidelberg: Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/978-3-540-45058-0_29.

9. Wu, Chongjian. "Discrete Distributed Tuned Mass Damper". In Springer Tracts in Mechanical Engineering, 167–78. Singapore: Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-15-7237-1_7.

10. Murata, Satoshi, and Haruhisa Kurokawa. "Basics in Mathematics and Distributed Algorithms". In Springer Tracts in Advanced Robotics, 59–75. Tokyo: Springer Tokyo, 2012. http://dx.doi.org/10.1007/978-4-431-54055-7_4.

Streszczenia konferencji na temat "Traces distribuées":

1

Xia, Hui-Rong, Jian-Wen Xu, Zuo-Di Pan, Ji-Guang Cai, Long-Shen Ma i L.-Shen Cheng. "Study of triplet states by equal-frequency two-photon transitions in Na2". W International Laser Science Conference. Washington, D.C.: Optica Publishing Group, 1986. http://dx.doi.org/10.1364/ils.1986.thl21.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
Streszczenie:
The experiments were arranged with a pulsed dye laser and a cross oven. A double-pen chart recorder simultaneously traced the two-photon excitation spectra monitored by two photomultipliers with interference filters centered at 3600 and 4300 Å, respectively. The observed two-photon transitions, with a linewidth of ~0.2 Å, were distributed discretely and nonuniformly and were close to the values calculated for locating the intermediate singlet-triplet mixing levels in Na2. This means that the observed signals were individually enhanced by a near-resonant mixing level. While most of the corresponding recorded lines in the two traces had coincident wavelength locations and comparable intensities, some of the lines at the longer-wavelength sides of the traces, with moderate or weaker intensities, appeared alternately. The upper states for the lines that appeared only in the 4300 Å trace, or in both traces, were assumed to be the (2)³Πg state and a higher (n)³Λg state, respectively. Temperature and total pressure were varied. Finally, the laser beam was split into two beams with combined polarization configurations. These steps aided the identification of the two-photon transitions.
2

Guasto, Jeffrey S., Peter Huang i Kenneth S. Breuer. "Statistical Particle Tracking Velocimetry Using Molecular and Quantum Dot Tracer Particles". W ASME 2005 International Mechanical Engineering Congress and Exposition. ASMEDC, 2005. http://dx.doi.org/10.1115/imece2005-80051.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
Streszczenie:
We present the theory and experimental validation of a particle tracking velocimetry algorithm developed for application with nanometer-sized tracer particles such as fluorescent molecules and quantum dots (QDs). Traditional algorithms are challenged by extremely small tracers due to difficulties in determining the particle center, shot noise, high drop-in/drop-out rates and, in the case of quantum dots, fluorescence intermittency (blinking). The algorithms presented here determine real velocity distributions from measured particle displacement distributions by statistically removing randomly distributed tracking events. The theory was verified through tracking experiments using 54 nm fluorescent dextran molecules and 6 nm QDs.
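The statistical idea summarized above, separating genuine displacements from randomly distributed mismatched tracking events, can be illustrated roughly as follows. This is a hypothetical numpy sketch; the function name, bin counts and uniform-background estimate are assumptions, not the estimator from the cited paper.

    import numpy as np

    def corrected_velocity(displacements, dt, window, nbins=60, tail_frac=0.15):
        # Estimate mean tracer velocity after removing an (assumed uniform)
        # background of random mismatches from the displacement histogram.
        counts, edges = np.histogram(displacements, bins=nbins, range=(-window, window))
        centers = 0.5 * (edges[:-1] + edges[1:])
        # Assume the outermost bins contain only random matches; use them to
        # estimate the background level per bin.
        k = max(1, int(tail_frac * nbins))
        background = np.mean(np.concatenate([counts[:k], counts[-k:]]))
        signal = np.clip(counts - background, 0, None)
        if signal.sum() == 0:
            raise ValueError("no signal left after background subtraction")
        mean_displacement = np.average(centers, weights=signal)
        return mean_displacement / dt

    # Synthetic check: true matches drift 0.4 um per frame, plus random mismatches.
    rng = np.random.default_rng(0)
    true_disp = rng.normal(0.4, 0.1, 5000)        # genuine tracking events
    random_disp = rng.uniform(-2.0, 2.0, 3000)    # drop-in/drop-out mismatches
    print(corrected_velocity(np.concatenate([true_disp, random_disp]), dt=1.0, window=2.0))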
3

Mytkowicz, T., A. Diwan, M. Hauswirth i P. F. Sweeney. "Aligning traces for performance evaluation". W Proceedings 20th IEEE International Parallel & Distributed Processing Symposium. IEEE, 2006. http://dx.doi.org/10.1109/ipdps.2006.1639592.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
4

Buono, Paolo, i Giuseppe Desolda. "Visualizing collaborative traces in distributed teams". W the 2014 International Working Conference. New York, New York, USA: ACM Press, 2014. http://dx.doi.org/10.1145/2598153.2600050.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
5

Hecht, Fabio V., Thomas Bocek i Burkhard Stiller. "B-Tracker: Improving load balancing and efficiency in distributed P2P trackers". W 2011 IEEE International Conference on Peer-to-Peer Computing (P2P). IEEE, 2011. http://dx.doi.org/10.1109/p2p.2011.6038749.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
6

Sun, Penghao, Zehua Guo, Junchao Wang, Junfei Li, Julong Lan i Yuxiang Hu. "DeepWeave: Accelerating Job Completion Time with Deep Reinforcement Learning-based Coflow Scheduling". W Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}. California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/458.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
Streszczenie:
To improve the processing efficiency of jobs in distributed computing, the concept of coflow was proposed. A coflow is a collection of flows that are semantically correlated in a multi-stage computation task. A job consists of multiple coflows and can usually be formulated as a Directed Acyclic Graph (DAG). Proper scheduling of coflows can significantly reduce the completion time of jobs in distributed computing; however, this scheduling problem has been proved NP-hard. Unlike existing schemes that use hand-crafted heuristic algorithms to solve this problem, in this paper we propose a Deep Reinforcement Learning (DRL) framework named DeepWeave to generate coflow scheduling policies. To improve the inter-coflow scheduling ability in the job DAG, DeepWeave employs a Graph Neural Network (GNN) to process the DAG information. DeepWeave learns from historical workload traces to train the neural networks of the DRL agent and encodes the scheduling policy in the neural networks, which make coflow scheduling decisions without expert knowledge or a pre-assumed model. The proposed scheme is evaluated with a simulator using real-life traces. Simulation results show that DeepWeave completes jobs at least 1.7X faster than the state-of-the-art solutions.
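To make the GNN-plus-scheduling idea concrete, the sketch below propagates coflow features over a toy job DAG and ranks coflows by a score. It is only an illustration with random stand-in weights; it is not DeepWeave's architecture or training loop, and the edge list, features and weight shapes are invented.

    import numpy as np

    rng = np.random.default_rng(1)

    edges = [(0, 2), (1, 2), (2, 3)]          # coflow dependencies (parent -> child)
    feats = np.array([[5.0, 4, 3],            # one row of features per coflow
                      [1.0, 2, 3],
                      [8.0, 6, 2],
                      [2.0, 2, 1]])

    W_msg = rng.normal(size=(3, 3)) * 0.1      # message transform (stand-in for learned weights)
    W_out = rng.normal(size=3) * 0.1           # readout producing a priority score

    h = feats.copy()
    for _ in range(2):                         # two rounds of message passing over the DAG
        msg = np.zeros_like(h)
        for parent, child in edges:            # children aggregate parent embeddings
            msg[child] += h[parent] @ W_msg
        h = np.tanh(h + msg)

    scores = h @ W_out                         # one priority per coflow
    order = np.argsort(-scores)                # serve higher-scoring coflows first
    print("coflow service order:", order.tolist())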
7

Li, Bodong, Guodong Zhan, Michael Okot i Vahid Dokhani. "Analysis of Circulating Pressure and Temperature using Drilling Microchips". W International Petroleum Technology Conference. IPTC, 2023. http://dx.doi.org/10.2523/iptc-22805-ms.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
Streszczenie:
Abstract Accurate knowledge of circulating pressure and temperature is essential for making critical decisions during drilling operations. Through implementation of miniaturized semiconductor technology, we obtained a near real-time dynamic pressure and temperature profile of the wellbore, making previously simulated critical operational data, such as equivalent circulation density (ECD) and the wellbore thermal distribution, now measurable using drilling microchips. The application of drilling microchips to collect distributed pressure and temperature data while drilling is investigated, where each microchip measures both pressure and temperature simultaneously. This study also presents a revised method to calibrate measurements of drilling microchips with depth. Four field trials were attempted in a slightly inclined well using water-based or oil-based muds, with 10 drilling microchips deployed in each trial. The recovered data from the drilling microchips are first downloaded and compiled. In-house software was developed to process and convert the time scale of each drilling microchip to depth, accounting for slippage of the drilling microchips in the drill string and annulus. An iterative algorithm is designed to calibrate the predicted arrival time against the actual arrival time of each tracer, which ultimately yields the true velocity of tracers in the flow conduits. The maximum measured pressure is used as an indicator to locate each tracer at the bottom hole. A plateau of pressure versus time can signify a trapped tracer in the flow path if the pump rate was maintained constant. The results of the field trials show that some of the tracers were trapped for a few minutes in the lower section of the annular space or before the bit nozzle. The temperature profiles reveal a common pattern for almost all of the deployed drilling microchips. However, the pressure profiles can be classified into two different groups, as the drilling microchips may have moved in different batches while pumping. The calculated temperature gradients show a heating zone near the bottom hole and continuous cooling of the drilling fluid as tracers move toward the surface. The average pressure gradient is in the range of 0.52 – 0.61 psi/ft among the different trials. It is shown that the velocity of tracers in each interval strongly depends on the flow regime. To the best of our knowledge, these field trials are the first to successfully conduct a combined measurement of circulating temperature and pressure using drilling microchips. The results can be used for calculation of ECD and temperature profiles, which provide near real-time downhole data for monitoring and diagnostic applications. The measured pressure data also provide new insights into the tracking of drilling microchips in the wellbore.
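The arrival-time calibration described above can be pictured as a simple bracketing iteration: adjust a slip factor between chip velocity and modeled fluid velocity until the predicted transit time matches the measured arrival time. This is a hypothetical sketch under assumed segment lengths and velocities, not the in-house software described in the paper.

    import numpy as np

    def calibrate_slip(seg_lengths, fluid_velocities, measured_arrival,
                       tol=1e-3, max_iter=50):
        # Find a single slip factor s so that a chip moving at s * fluid velocity
        # in each segment arrives at the measured time (bisection on s).
        lo, hi = 0.1, 2.0                       # assumed bracket for the slip factor
        for _ in range(max_iter):
            s = 0.5 * (lo + hi)
            predicted = np.sum(seg_lengths / (s * fluid_velocities))
            if abs(predicted - measured_arrival) < tol:
                break
            if predicted > measured_arrival:    # chip predicted too slow -> raise s
                lo = s
            else:
                hi = s
        return s, s * fluid_velocities          # calibrated chip velocities per segment

    lengths = np.array([1500.0, 1500.0, 1000.0])   # ft, assumed interval lengths
    vels = np.array([6.0, 4.0, 3.0])               # ft/s, assumed modeled fluid velocities
    print(calibrate_slip(lengths, vels, measured_arrival=1100.0))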
8

Galimzyanov, Artem, Orkhan Heydarov, Bakhtiyar Jafarov, Rufat Mirzayev, Kamal Kamalov i Akgun Kilic. "Offshore Caspian Sea: Appraisal Well Monitoring Using Inflow Tracer Technology". W SPE Annual Caspian Technical Conference. SPE, 2021. http://dx.doi.org/10.2118/207065-ms.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
Streszczenie:
Abstract A gas condensate field development in the offshore Caspian Sea experienced monitoring challenges and costly operations. In regular field-wide surveillance it is a challenging task to evaluate the numerous well monitoring options on the market, such as production logging, permanent downhole gauges, and distributed temperature sensing along the wellbore. These solutions require wellbore interventions and introduce operational risk during well logging, or completion installation risk when fiber is installed. Permanently installed inflow tracer technology is an alternative monitoring solution that avoids the above-mentioned risks while still obtaining valuable inflow information concerning well performance over several years. An appraisal well in the field was selected to pilot inflow tracing technology for assessment of reserves and productivity, for the first time in the Caspian Sea. Multiple sampling campaigns to capture the data were incorporated into a well testing programme to complement the pressure transient data collection and interpretation. The inflow tracer interpretations were successful in providing additional insight into clean-up efficiency and flow distribution between zones. The latter was later verified by production logging, strengthening confidence in inflow tracer technology. The application of the permanent inflow tracers has proven to be a viable alternative to other well monitoring solutions, without the above-mentioned risks, and will become an effective long-term monitoring solution for planned production wells in the field development.
9

Al-Jahdhami, Ahmed Rashid, Juan Carlos Chavez i Shaima Abdul Aziz Al-Farsi. "Fiber Optic Deployed Behind Cemented Casing in a Vertical Deep Tight Gas Well Used to Enhance Hydraulic Fracturing, Monitoring and Diagnostics". W Abu Dhabi International Petroleum Exhibition & Conference. SPE, 2021. http://dx.doi.org/10.2118/207614-ms.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
Streszczenie:
Abstract The use of fiber optic (FO) cable to obtain distributed sensing, be it Distributed Temperature Sensing (DTS), Distributed Acoustic Sensing (DAS) or Distributed Strain Sensing (DSS), is a well & reservoir surveillance engineer's dream. The ability to obtain real-time live data has proven useful not only for production monitoring but during fracture stimulation as well. A trial, the first of its kind in Petroleum Development Oman (PDO), used fiber optic cable cemented in place behind casing to monitor the fracture operations. Several techniques are used to determine fracture behaviour and geometry, e.g. data fracs, step-down tests and after-closure analysis. All of these use surface pressure readings that can be limited due to uncertainty in friction pressure losses and the natural complexity of the formation, leading to very different interpretations. Post-frac data analysis and diagnostics also involves importing the actual frac data into the original model used to design the frac in order to calibrate the strains (tectonics), width exponent (frac fluid efficiency) and the relative permeability. Monitoring the frac using DAS and DTS proved critical in understanding a key component of fracture geometry: frac height. The traditional method to determine fracture height is to use radioactive (RA) tracers, but these are expensive and the data are only available after the job (after drilling the plugs and cleaning the wellbore). In contrast, fiber optic can provide real-time data throughout the frac stages, including the proppant-free pad stage, which tracers cannot. The comparison of DTS and radioactive tracers showed very good agreement, suggesting that DTS could replace RA diagnostics. The hydraulic fracture stimulation operation in well-xx was the first of its kind to be monitored with fiber optic. The integrated analysis of the available logs allowed us to benchmark various information sources and gain confidence in the conclusions. This helped fine-tune the model for future wells for more optimized zonal targeting and hydraulic fracture design. In this paper we share the detailed evaluation of the fracture propagation behaviour and how the fiber optic data are combined with the surface pressure, pumping rates and tracer logs, in conjunction with a fracture simulation platform in which detailed geomechanical and subsurface characterization data are incorporated, to obtain a more accurate description of fracture geometry.
10

Bogatinovski, Jasmin, Sasho Nedelkoski, Jorge Cardoso i Odej Kao. "Self-Supervised Anomaly Detection from Distributed Traces". W 2020 IEEE/ACM 13th International Conference on Utility and Cloud Computing (UCC). IEEE, 2020. http://dx.doi.org/10.1109/ucc48980.2020.00054.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.

Raporty organizacyjne na temat "Traces distribuées":

1

SINGH, ANUP K., ALOK GUPTA, ASHOK MULCHANDANI, WILFRED CHEN, RIMPLE B. BHATIA, JOSEPH S. SCHOENIGER, CAROL S. ASHLEY i in. Distributed Sensor Particles for Remote Fluorescence Detection of Trace Analytes: UXO/CW. Office of Scientific and Technical Information (OSTI), listopad 2001. http://dx.doi.org/10.2172/789593.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
2

Jones, M. C., R. Nassimbene, J. Wolfe i N. Frederick. Distributed measurements of tracer response on packed bed flows using a fiberoptic probe array. Final report. Office of Scientific and Technical Information (OSTI), październik 1994. http://dx.doi.org/10.2172/72901.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
3

Ratmanski, Kiril, i Sergey Vecherin. Resilience in distributed sensor networks. Engineer Research and Development Center (U.S.), październik 2022. http://dx.doi.org/10.21079/11681/45680.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
Streszczenie:
With the advent of cheap and readily available sensors, there is a need for intelligent sensor selection and placement for various purposes. While previous research focused on the most efficient sensor networks, we present a new mathematical framework for efficient and resilient sensor network installation. Specifically, in this work we formulate and solve a sensor selection and placement problem when network resilience is also a factor in the optimization problem. Our approach is based on binary linear programming. The generic formulation is probabilistic and applicable to any sensor type, line-of-sight and non-line-of-sight, and any sensor modality. It also incorporates several realistic constraints, including finite sensor supply, cost, energy consumption, as well as specified redundancy in coverage areas that require resilience. While the exact solution is computationally prohibitive, we present a fast algorithm that produces a near-optimal solution that can be used in practice. We show how such a formulation works on 2D examples, applied to infrared (IR) sensor networks designed to detect and track human presence and movements in a specified coverage area. Analysis of coverage and comparison of sensor placement with and without resilience considerations is also performed.
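The flavor of such a redundancy-aware placement problem can be sketched with a simple greedy heuristic: cover every point to its required redundancy at the lowest cost per newly met requirement. This is only an illustrative stand-in for the near-optimal algorithm mentioned in the abstract, not the report's binary linear program; the data and function names are invented.

    # Hypothetical greedy sketch of redundancy-aware sensor selection.
    def greedy_resilient_placement(coverage, cost, need):
        # coverage[i] = set of points sensor i covers; need[j] = required redundancy at point j.
        remaining = dict(need)                       # point -> unmet redundancy
        chosen = []
        candidates = set(range(len(coverage)))
        while any(r > 0 for r in remaining.values()):
            def gain(i):                             # unmet requirements sensor i would satisfy
                return sum(1 for j in coverage[i] if remaining.get(j, 0) > 0)
            best = max((i for i in candidates if gain(i) > 0),
                       key=lambda i: gain(i) / cost[i], default=None)
            if best is None:
                raise ValueError("requirements cannot be met with available sensors")
            chosen.append(best)
            candidates.remove(best)
            for j in coverage[best]:
                if remaining.get(j, 0) > 0:
                    remaining[j] -= 1
        return chosen

    coverage = [{0, 1}, {1, 2}, {0, 2}, {2}]         # toy coverage sets
    cost = [3.0, 2.0, 2.5, 1.0]
    need = {0: 1, 1: 1, 2: 2}                        # point 2 needs two sensors (resilience)
    print(greedy_resilient_placement(coverage, cost, need))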
4

Trahan, Corey, Jing-Ru Cheng i Amanda Hines. ERDC-PT : a multidimensional particle tracking model. Engineer Research and Development Center (U.S.), styczeń 2023. http://dx.doi.org/10.21079/11681/48057.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
Streszczenie:
This report describes the technical engine details of the particle- and species-tracking software ERDC-PT. The development of ERDC-PT leveraged a legacy ERDC tracking model, "PT123," developed by a civil works basic research project titled "Efficient Resolution of Complex Transport Phenomena Using Eulerian-Lagrangian Techniques" and in part by the System-Wide Water Resources Program. Given hydrodynamic velocities, ERDC-PT can track thousands of massless particles on 2D and 3D unstructured or converted structured meshes through distributed processing. At the time of this report, ERDC-PT supports triangular elements in 2D and tetrahedral elements in 3D. First-, second-, and fourth-order Runge-Kutta time integration methods are included in ERDC-PT to solve the ordinary differential equations describing the motion of particles. An element-by-element tracking algorithm is used for efficient particle tracking over the mesh. ERDC-PT tracks particles along the closed and free-surface boundaries by velocity projection and stops tracking when a particle encounters the open boundary. In addition to passive particles, ERDC-PT can transport behavioral species, such as oyster larvae. This report is the first in a series describing the technical details of the tracking engine. It details the governing equations and numerical approaches associated with ERDC-PT Version 1.0.
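For one particle in a given velocity field, the fourth-order Runge-Kutta integration mentioned above reduces to the familiar four-stage update sketched below. This is an illustrative Python sketch with an analytic swirl field, not ERDC-PT's element-by-element mesh implementation; the field and step sizes are assumptions.

    import numpy as np

    def rk4_step(pos, t, dt, velocity):
        # Advance one particle by a single fourth-order Runge-Kutta step.
        k1 = velocity(pos, t)
        k2 = velocity(pos + 0.5 * dt * k1, t + 0.5 * dt)
        k3 = velocity(pos + 0.5 * dt * k2, t + 0.5 * dt)
        k4 = velocity(pos + dt * k3, t + dt)
        return pos + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

    def swirl(pos, t):
        # Simple 2D rotational field used as a stand-in for mesh velocities.
        x, y = pos
        return np.array([-y, x])

    pos, t, dt = np.array([1.0, 0.0]), 0.0, 0.01
    for _ in range(100):                   # track the particle for one time unit
        pos = rk4_step(pos, t, dt, swirl)
        t += dt
    print(pos)                             # stays close to the unit circle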
5

Blyde, Juan S., i Mauricio Mesquita Moreira. Chile's Integration Strategy: Is There Room for Improvement? Inter-American Development Bank, październik 2006. http://dx.doi.org/10.18235/0011112.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
Streszczenie:
What are the main issues in Chile's trade agenda? This paper argues that the country's agenda does not lend itself to the traditional kind of policy advice usually given throughout Latin America. Protection is low and uniform, the institutions that govern trade policy are strong and well protected from capture, and the country has put a lot of effort into opening markets in the region and abroad. The important issues that come out of the analysis are, to a great extent, "second generational". That is: export diversification, the regional distribution of trade gains, completion of the "multidimensional" trade strategy, and transport costs. Whereas Chile has made progress in diversifying its exports away from copper, concentration is still high even when compared to other resource-intensive countries. On the regional issue, it seems clear that Chile's export-led growth in the last two decades was not evenly distributed across the regions. On Chile's "multidimensional" trade strategy, Asia is clearly the missing link in the country's wide net of preferential agreements, and the evidence available suggests that transport costs are these days a more important obstacle to Chile's trade than traditional trade barriers.
6

Akbari, Chirag, Ninad Gore i Srinivas Pulugurtha. Understanding the Effect of Pervasive Events on Vehicle Travel Time Patterns. Mineta Transportation Institute, grudzień 2023. http://dx.doi.org/10.31979/mti.2023.2319.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
Streszczenie:
The COVID-19 pandemic has disrupted daily activities and travel patterns, affecting personal and commercial trips. This study investigates the effect of different stages of the pandemic on travel time patterns. Eighty-six geographically distributed links (sections of road) in Mecklenburg County and Buncombe County, North Carolina, were selected for analysis. The selected links account for variation in road geometry, land use, and speed limit. Travel time data for three years (2019, 2020, and 2021) were extracted from a private data source at 5-min intervals. Travel time reliability (TTR) and travel time variability (TTV) are estimated for different phases of the pandemic and compared to analyze the effect of COVID-19 on TTR and TTV. A seasonal autoregressive integrated moving average (SARIMA) model was developed to investigate the effect of COVID-19 on average daily travel time patterns. Unreliable and uncertain travel times were observed on lower-speed-limit links during off-peak hours, while reliable and certain travel times were observed during morning and evening peak hours of the COVID-19 pandemic. This highlights that the COVID-19 pandemic significantly affected the scheduling of trips. For higher speed limits, travel times were reliable and certain during off-peak and peak hours. Among the different phases of COVID-19, significant improvement in TTR and TTV was observed during Phase II, which could be attributed to stay-at-home directives. Trucks followed a similar pattern to passenger cars. Post-COVID-19, i.e., for 2021, travel times were reliable and certain for most links during the morning peak hours. The SARIMA model revealed a significant effect of COVID-19 on average daily travel time patterns. Stable travel time patterns were noted during Phase II of COVID-19. Moreover, a maximum reduction in travel time was observed during Phase II of the COVID-19 pandemic. The findings emphasize the influence of government norms and regulations on travel time patterns during pervasive events such as COVID-19.
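For readers unfamiliar with SARIMA, the kind of model used for an average daily travel-time series can be sketched with statsmodels as below. The (p,d,q)(P,D,Q,s) orders, the 7-day seasonal period and the synthetic data are placeholders, not the specification estimated in the study.

    import numpy as np
    import pandas as pd
    from statsmodels.tsa.statespace.sarimax import SARIMAX

    # Synthetic daily travel-time series with a weekly cycle (illustration only).
    rng = np.random.default_rng(0)
    days = pd.date_range("2019-01-01", periods=365, freq="D")
    weekly = 5.0 * np.sin(2 * np.pi * np.arange(365) / 7.0)
    travel_time = 60.0 + weekly + rng.normal(0.0, 1.5, 365)     # seconds per link
    series = pd.Series(travel_time, index=days)

    model = SARIMAX(series, order=(1, 0, 1), seasonal_order=(1, 1, 1, 7))
    result = model.fit(disp=False)
    print(result.forecast(steps=14))        # two weeks of forecast travel times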
7

Plouffe, A., D. Petts, I M Kjarsgaard i M. Polivchuk. Laser ablation inductively coupled plasma mass spectrometry mapping of porphyry -related epidote from south-central British Columbia. Natural Resources Canada/CMSS/Information Management, 2023. http://dx.doi.org/10.4095/331671.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
Streszczenie:
The microscopic composition of thirteen samples of epidote related to porphyry Cu mineralization was mapped using laser ablation inductively coupled plasma mass spectrometry (LA-ICP-MS) at the Geological Survey of Canada. The objective of this research is to improve the indicator mineral method of mineral exploration in glaciated terrains by utilizing the trace element composition of epidote. The analyzed samples comprise six bedrock samples from porphyry Cu deposits of south-central British Columbia (Gibraltar, Mount Polley and Woodjam), three bedrock samples from the Nicola Group located both close to (<2 km) and far from (12 km) the intrusions hosting porphyry mineralization, and four epidote grains from two till samples, one at Gibraltar and a second at Mount Polley. Backscattered electron (BSE) images and the LA-ICP-MS maps show a heterogeneous distribution of Fe and Al in epidote, following complex, mottled patterns and consistent zoning, typically with high Fe and low Al concentrations in the core progressing to low Fe and high Al concentrations in the rim. Trace elements are heterogeneously distributed in epidote, following the Fe/Al zoning in some samples. Evidence of late infiltration of trace elements (e.g. Cu, Zn, and REE) along fractures in epidote is observed in some samples. The variability in epidote composition is thought to be related to the changing conditions during its crystallization, including oxidation state, pH, oxygen fugacity, fluid composition, temperature and pressure. Multiple LA-ICP-MS spot analyses need to be conducted on this mineral to fully evaluate its composition as an indicator mineral of porphyry Cu mineralization.
8

Mathew, Jijo K., Christopher M. Day, Howell Li i Darcy M. Bullock. Curating Automatic Vehicle Location Data to Compare the Performance of Outlier Filtering Methods. Purdue University, 2021. http://dx.doi.org/10.5703/1288284317435.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
Streszczenie:
Agencies use a variety of technologies and data providers to obtain travel time information. The best quality data can be obtained from second-by-second tracking of vehicles, but that data presents many challenges in terms of privacy, storage requirements and analysis. More frequently, agencies collect or purchase segment travel times based upon some type of matching of vehicles between two spatially distributed points. Typical methods for that data collection involve license plate re-identification, Bluetooth, Wi-Fi, or some type of rolling DSRC identifier. One of the challenges in each of these sampling techniques is to employ filtering techniques that remove outliers associated with trip chaining but do not remove important features in the data associated with incidents or traffic congestion. This paper describes a curated data set that was developed from high-fidelity GPS trajectory data. The curated data contained 31,621 vehicle observations spanning 42 days; 2,550 observations had travel times more than 3 minutes above normal. From this baseline data set, outliers were determined using GPS waypoints to establish whether the vehicle left the route. Two performance measures were identified for evaluating three outlier-filtering algorithms: the proportion of true samples rejected and the proportion of outliers correctly identified. The effectiveness of the three methods over 10-minute sampling windows was also evaluated. The curated data set has been archived in a digital repository and is available online for others to test outlier-filtering algorithms.
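A minimal sketch of the kind of filter such a data set is meant to benchmark, together with the two performance measures named above, might look as follows. The MAD-based rule and all names are assumptions; it is not one of the three algorithms compared in the report.

    import numpy as np

    def mad_filter(travel_times, k=3.0):
        # Flag observations more than k scaled median absolute deviations from the median.
        med = np.median(travel_times)
        mad = 1.4826 * np.median(np.abs(travel_times - med))
        return np.abs(travel_times - med) > k * mad          # True = rejected

    def score(rejected, is_outlier):
        # The two performance measures described: share of true samples rejected
        # and share of labeled outliers correctly identified.
        true_rejected = np.mean(rejected[~is_outlier])
        outliers_caught = np.mean(rejected[is_outlier])
        return true_rejected, outliers_caught

    rng = np.random.default_rng(2)
    good = rng.normal(300.0, 20.0, 1000)                     # typical link travel times (s)
    chained = rng.normal(900.0, 120.0, 50)                   # trip-chaining outliers
    times = np.concatenate([good, chained])
    labels = np.concatenate([np.zeros(1000, bool), np.ones(50, bool)])
    print(score(mad_filter(times), labels))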
9

Foster, James E., i Miguel Székely. Is Economic Growth Good for the Poor?: Tracking Low Incomes Using General Means. Inter-American Development Bank, czerwiec 2001. http://dx.doi.org/10.18235/0010794.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
Streszczenie:
In this paper we propose the use of an alternative methodology to track low incomes based on Atkinson's (1970) family of "equally distributed equivalent income" functions, which are called "general means" here. We provide a new characterization of general means that justifies their use in this context. Our method of evaluating the effects of growth on poor incomes is based on a comparison of growth rates for two standards of living: the ordinary mean and a bottom-sensitive general mean. The motivating question is: to what extent is growth in the ordinary mean accompanied by growth in the general mean? A key indicator in this approach is the growth elasticity of the general mean, or the percentage change in the general mean over the percentage change in the usual mean. Our empirical analysis estimates this growth elasticity for a data set containing 144 household surveys from 20 countries over the last quarter century. Among other results, we find that the growth elasticity of bottom-sensitive general means is positive, but significantly smaller than one. This suggests that the incomes of the poor do not grow one-for-one with increases in average income.
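For reference, the general mean and the growth elasticity discussed above are commonly written as follows (a standard formulation of Atkinson's equally distributed equivalent income; the paper's notation may differ).

    \[
    \mu_\alpha(x) =
    \begin{cases}
    \left(\dfrac{1}{n}\sum_{i=1}^{n} x_i^{\alpha}\right)^{1/\alpha}, & \alpha \neq 0,\\[1ex]
    \left(\prod_{i=1}^{n} x_i\right)^{1/n}, & \alpha = 0,
    \end{cases}
    \qquad
    \varepsilon_\alpha = \frac{\Delta \ln \mu_\alpha(x)}{\Delta \ln \mu_1(x)},
    \]

where \(\alpha = 1\) gives the ordinary mean, values \(\alpha < 1\) give bottom-sensitive general means, and \(\varepsilon_\alpha\) is the growth elasticity: the percentage change in the general mean over the percentage change in the ordinary mean between two surveys.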
10

Boysen-Urban, Kirsten, Hans Grinsted Jensen i Martina Brockmeier. Extending the GTAP Data Base and Model to Cover Domestic Support Issues using the EU as Example. GTAP Technical Paper, czerwiec 2014. http://dx.doi.org/10.21642/gtap.tp35.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
Streszczenie:
The EU Single Farm Payment (SFP) is currently distributed in proportion to primary factor shares in version 8 of the GTAP database. In this paper, we investigate whether this way of modeling the EU SFP makes a difference when analyzing agricultural policy reforms. To do so, we create alternative versions of the GTAP database and compare the effects with the default setting in GTAP. Employing OECD data, along with the GTAP framework, we vary the assumptions about the allocation of the SFP. In the process, we demonstrate how to alter and update the GTAP database to implement domestic support from the OECD PSE tables. We provide a detailed overview of the assumptions on payment allocation, the shock calculations and, in particular, the Altertax procedure used to update value flows and the price equations extended in the GTAP model. Subsequently, we illustrate the impact of those assumptions by simulating a 100% removal of the SFP using the different versions of the GTAP database. This sensitivity analysis reveals strong differences in results, particularly in the production responses of the food and agricultural sectors, which decrease with an increasing degree of decoupling. Furthermore, our analysis shows that the effect on welfare and the trade balance decreases with an increasing degree of decoupling. This experiment shows that the allocation of the SFP can have strong impacts on simulation results.
