To see the other types of publications on this topic, follow the link: TEM data.

Dissertations / Theses on the topic 'TEM data'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 50 dissertations / theses for your research on the topic 'TEM data.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Torikka, Niklas. "3D Modelling of TEM Data : from Rajapalot Gold-Cobalt prospect, northern Finland." Thesis, Luleå tekniska universitet, Institutionen för samhällsbyggnad och naturresurser, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-75756.

Full text
Abstract:
The Rajapalot gold-cobalt project in northern Finland is an exciting, relatively new discovery that is still being explored, with hopes of starting mining in the future. The area was found by an IP/Resistivity survey in 2013. Extensive geophysical follow-up surveys have delineated several electromagnetic targets, one of which, named Raja, is the target anomaly this master's thesis is built upon. A TEM survey was carried out from late August to early September 2018. The data collected were analyzed, processed and later modelled in Maxwell using Leroi, a CSIRO module. Three separate models are produced, with one, two and three plates respectively. The results are compared to existing VTEM and resistivity models.
Rajapalot guld-kobolt-projektet i norra Finland är en spännande, relativt ny upptäckt som fortfarande undersöks med hopp om att starta gruvbrytning i framtiden. Området upptäcktes via en IP/Resistivitets-undersökning under 2013. Omfattande geofysiska undersökningar har avgränsat flera elektromagnetiska anomalier, varav en, döpt Raja, är den anomali som den här masteruppsatsen är uppbyggd kring. En TEM-undersökning utfördes under slutet av augusti, början av september 2018. Insamlade data analyserades, bearbetades och modellerades senare i Maxwell med hjälp av Leroi, en insticksmodul från CSIRO. Tre separata modeller togs fram med respektive, en, två, och tre plattor. Resultatet jämfördes mot befintliga VTEM-, och resistivitetsmodeller.
APA, Harvard, Vancouver, ISO, and other styles
2

Moore, David Anton. "Processing and analysis of seismic reflection and transient electromagnetic data for kimberlite exploration in the Mackenzie Valley, NT." Thesis, University of British Columbia, 2009. http://hdl.handle.net/2429/5027.

Full text
Abstract:
The Lena West property near Lac des Bois, NT, held by Diamondex Resources Ltd., is an area of interest for exploration for kimberlitic features. In 2005, Frontier Geosciences Inc. was contracted to carry out seismic reflection and time-domain transient electromagnetic (TEM) surveys to investigate the possibility of kimberlite pipes being the cause of total magnetic intensity (TMI) anomalies previously identified on the property. One small part of the property, Area 1915, was surveyed with two perpendicular seismic reflection lines 1550 m and 1790 m long and three TEM lines consisting of six or seven individual soundings, each with a 200 m transmitter loop. The results generated by Frontier Geosciences did not indicate any obvious vertical features that correlated with the TMI anomaly. The purpose of this study is to reprocess the seismic reflection data using different approaches than those of Frontier Geosciences and to invert the TEM data using EM1DTM, a 1-D inversion code recently developed by the UBC Geophysical Inversion Facility, in order to improve upon previous results and enhance the interpretation. A secondary objective is to test the robustness of EM1DTM when applied to observed TEM data, since prior to this study it had only been applied to synthetic data. Selective bandpass filtering, refraction and residual statics, and f-x deconvolution procedures contributed to improved seismic images down to the recorded two-way traveltime of 511.5 ms (approximately 1100 m depth). The TEM data were successfully inverted and converted to pseudo 2-D recovered resistivity sections that showed similar results to those from Frontier Geosciences. On the final seismic reflection sections, several strong reflectors are identified and the base of the overlying sedimentary layers is interpreted at a depth of ~600 m. The TEM results show consistent vertical structure with minimal horizontal variation across all lines to a valid depth of ~150 m. However, neither the TEM nor the seismic reflection results provide any information that correlates well with the observed TMI anomaly.
APA, Harvard, Vancouver, ISO, and other styles
3

Rydman, Oskar. "Integration of Borehole, Ground, and Airborne Data to Improve Identification of Areas With Quick Clays in Sweden." Thesis, Uppsala universitet, Geofysik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-445689.

Full text
Abstract:
The main focus of the project was the comparison of results from a new towed transient electromagnetic (tTEM) data set with existing data, including airborne transient EM (ATEM), radio magnetotellurics (RMT), cone penetration tests with resistivity (CPT-R), geotechnical interpretations and geological observations, at a quick clay landslide site at Fråstad close to Lilla Edet in south-west Sweden. The tTEM data set was processed and inverted twice in the software Aarhus Workbench using different inversion constraints and settings. The resulting resistivity models were compared with previous geophysical models based on both ATEM and RMT as well as geotechnical information in the form of borehole logs and CPT-R measurements. The results compared well with all other models and predict resistivities in the range of 10-40 Ωm in areas interpreted to hold quick clay by geotechnical methods. As a ground geophysical method, tTEM is fast and cost-effective, particularly in more open areas with little topographical variation. In the example presented in this study, tTEM measurements are deemed an effective and accurate tool to map areas of potential quick clay using the inverted resistivity models in combination with other geological and geotechnical data.
Huvudsyftet med detta projekt är en sammanställning och jämförelse mellan resistivitetsmodeller från ett nytt markburet TEM data set (tTEM) och tidigare insamlade luftburna TEM data (ATEM), RMT (radiomagnetotellurik) samt detaljerade resistivitetsmätningar i borrhål (CPT-R). Mätområdet ligger i Fråstad vid Göta älv i Lilla Edets kommun i sydvästra Sverige. Tidigare undersökningar har visat att området innehåller kvicklera och där förekommer även skredärr från tidigare kvickleraskred. tTEM datan bearbetades, filtrerades och inverterades med hjälp av mjukvaran Aarhus workspace med två olika set av begränsningar och inställningar. De resulterande resistivitetsmodellerna jämfördes med tidigare geofysiska metoder i ATEM och RMT samt med geoteknisk information i formen av borrhålsloggar samt CPTR mätningar. Resultatet visar en mycket god korrelation mellan resistivitetsmodellerna från de olika dataseten. De modellerade resistiviteterna var 10-40 Ωm för de områden som med geotekniska metoder identifierats som kvickleraområden. Som en markbunden metod är tTEM snabb och kostnadseffektiv, särskilt vid användning i öppna ytor med liten topografisk variation. I exemplen som visas i denna studie dras slutsatsen att tTEM är ett effektivt och noggrant verktyg för att hitta områden som potentiellt kan hålla kvickleror. Där kan sedan de resulterande resistivitetsmodellerna användas tillsammans med annan geoteknisk och geologisk data för att effektivt kartlägga dessa kvicklersområden.
APA, Harvard, Vancouver, ISO, and other styles
4

Polcer, Simon. "Detekce a rozměření elektronového svazku v obrazech z TEM." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2020. http://www.nusl.cz/ntk/nusl-413022.

Full text
Abstract:
This diploma thesis deals with automatic detection and measurement of the electron beam in images from a transmission electron microscope (TEM). The introduction provides a description of the construction and the main parts of the electron microscope. The theoretical part summarizes the modes of illumination on the fluorescent screen. Machine learning, specifically the convolutional neural network U-Net, is used for automatic detection of the electron beam in the image. The measurement of the beam is based on an ellipse approximation, which defines the size and dimensions of the beam. Training the neural network requires an extensive database of images. For this purpose, a custom augmentation approach is proposed, which applies a specific combination of geometric transformations for each mode of illumination. In the conclusion of this thesis, the results are evaluated and summarized. The proposed algorithm achieves a Dice coefficient of 0.815, a measure of the overlap between two sets. The work was implemented in the Python programming language.
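As a rough illustration of the measurement pipeline described above, the following Python sketch computes the Dice overlap between a predicted and a reference beam mask and fits an ellipse to the predicted beam region. It is a minimal example assuming NumPy and OpenCV 4, with hypothetical function names; it is not the code from the thesis.

import cv2
import numpy as np

def dice_coefficient(pred: np.ndarray, ref: np.ndarray) -> float:
    # Dice = 2*|A ∩ B| / (|A| + |B|) for two binary masks of equal shape.
    pred, ref = pred.astype(bool), ref.astype(bool)
    total = pred.sum() + ref.sum()
    return 2.0 * np.logical_and(pred, ref).sum() / total if total > 0 else 1.0

def measure_beam(mask: np.ndarray) -> dict:
    # Fit an ellipse to the largest connected region of a binary beam mask.
    contours, _ = cv2.findContours(mask.astype(np.uint8),
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    beam = max(contours, key=cv2.contourArea)            # keep the largest blob
    (cx, cy), (ax1, ax2), angle = cv2.fitEllipse(beam)   # centre, axis lengths, tilt
    return {"centre": (cx, cy), "axes_px": (ax1, ax2), "angle_deg": angle}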
APA, Harvard, Vancouver, ISO, and other styles
5

Wicht, Sebastian. "Atomar aufgelöste Strukturuntersuchungen für das Verständnis der magnetischen Eigenschaften von FePt-HAMR-Prototypmedien." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2016. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-216054.

Full text
Abstract:
Dank der hohen uniaxialen Kristallanisotropie der L10-geordneten Phase gelten nanopartikuläre FePt+C-Schichten als aussichtsreiche Kandidaten zukünftiger Datenspeichersysteme. Aus diesem Grund werden in der vorliegenden Arbeit in Kooperation mit HGST- A Western Digital Company Prototypen solcher Medien strukturell bis hin zu atomarer Auflösung charakterisiert. Anhand von lokalen Messungen der Gitterparameter der FePt-Partikel wird gezeigt, dass die Partikel dünne, zementitartige Verbindungen an ihrer Oberfläche aufweisen. Zusätzlich werden große Partikel mit kleinem Oberfläche-Volumen-Verhältnis von kontinuierlichen Kohlenstoffschichten umschlossen, was die Deposition weiteren Materials verhindert. Eine Folge davon ist die Entstehung einer zweiten Lage statistisch orientierter Partikel, die sich negativ auf das magnetische Verhalten der FePt-Schicht auswirkt. Weiterhin wird die besondere Bedeutung des eingesetzten Substrats sowie seiner Gitterfehlpassung zur L10-geordneten Einheitszelle nachgewiesen. So lässt sich das Auftreten fehlorientierter ebenso wie das L12-geordneter Kristallite im Fall großer Fehlpassung und einkristalliner Substrate unterdrücken, was andererseits jedoch zu einer stärkeren Verkippung der [001]-Achsen der individuellen FePt-Partikel führt. Abschließend wird mithilfe der Elektronenholographie nachgewiesen, dass die Magnetisierungsrichtungen der FePt-Partikel aufgrund von Anisotropieschwankungen von den [001]-Achsen abweichen können
Highly textured L10-ordered FePt+C-films are foreseen to become the next generation of magnetic data storage media. Therefore prototypes of such media (provided by HGST- A Western Digital Company) are structurally investigated down to the atomic level by HR-TEM and the observed results are correlated to the magnetic performance of the film. In a first study the occurrence of a strongly disturbed surface layer with a lattice spacing that corresponds to cementite is observed. Furthermore the individual particles are surrounded by a thin carbon layer that suppresses the deposition of further material and leads, therefore, to the formation of a second layer of particles. Without a contact to the seed layer these particles are randomly oriented and degrade the magnetic performance of the media. A further study reveals, that a selection of single-crystalline substrates with appropriate lattice mismatch to the L10-ordered unit cell can be applied to avoid the formation of in-plane oriented and L12-ordered crystals. Unfortunately, the required large mismatch results in a broadening of the texture of the [001]-axes of the individual grains. As electron holography studies reveal, the orientation of the magnetization of the individual grains can differ from the structural [001]-axis due to local fluctuations of the uniaxial anisotropy
APA, Harvard, Vancouver, ISO, and other styles
6

Schönström, Linus. "Programming a TEM for magnetic measurements : DMscript code for acquiring EMCD data in a single scan with a q-E STEM setup." Thesis, Uppsala universitet, Tillämpad materialvetenskap, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-306167.

Full text
Abstract:
Code written in the DigitalMicrograph® scripting language enables a new experimental design for acquiring the magnetic dichroism in EELS. Called the q-E STEM setup, it provides simultaneous acquisition of the dichroic pairs of spectra (eliminating major error sources) while preserving the real-space resolution of STEM. This gives the setup great potential for real-space maps of magnetic momenta which can be instrumental in furthering the understanding of e.g. interfacial magnetic effects. The report includes a thorough presentation of the created acquisition routine, a detailed outline of future work and a fast introduction to the DMscript language.
APA, Harvard, Vancouver, ISO, and other styles
7

Schmidt, Frank. "Die Bedeutung der Segregations- und Oxidationsneigung Seltener Erden für die Einstellung hartmagnetischer intermetallischer Phasen in SmCo-basierten Nanopartikeln." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2018. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-234251.

Full text
Abstract:
Owing to its very high magnetocrystalline anisotropy constant, the SmCo5 phase is particularly well suited for future high-density hard-disk media. Because of the strong tendency of rare earths to oxidize and their chemical similarity to other rare earths, producing hard-magnetic SmCo-based nanoparticles by inert-gas condensation is challenging. In addition, the surface energy largely determines the properties of nanoparticles, so an element with a low surface energy preferentially forms the surface. This work shows how oxygen-driven oxidation and the differing surface energies of the alloy-forming elements influence the structure, morphology and chemical distribution of the elements within the nanoparticles and thereby control the formation of a hard-magnetic Sm(Pr)Co phase. Using aberration-corrected high-resolution transmission electron microscopy combined with electron energy-loss spectroscopy, the morphology, elemental distribution and structure of Sm(Pr)Co nanoparticles prepared in different ways are examined and analyzed. The observed segregation of the rare earths to the nanoparticle surface is attributed partly to oxygen-induced segregation and partly to intrinsic segregation, i.e. segregation caused by the different surface energies of the alloy-forming elements. A geometric model is developed to distinguish between the two causes of segregation. Understanding the causal relationships behind this segregation enables the fabrication of hard-magnetic intermetallic SmCo-based nanoparticles. To this end, nanoparticle agglomerates are specifically formed and optically heated in a light furnace so that the primary particles within the agglomerates sinter together and the resulting spherical particle finally crystallizes. HRTEM images and electron diffraction confirm the successful production of SmCo5- and Sm2Co17-based nanoparticles. The coercive field of these particle ensembles is 1.8 T, with a maximum in the switching-field distribution at 3.6 T. The magnetic properties reflect the analyzed structural, morphological and chemical properties of the nanoparticles.
APA, Harvard, Vancouver, ISO, and other styles
8

Coué, Martin. "Caractérisation électrique et étude TEM des problèmes de fiabilité dans les mémoires à changement de phase enrichis en germanium." Thesis, Université Grenoble Alpes (ComUE), 2016. http://www.theses.fr/2016GREAT018/document.

Full text
Abstract:
Dans cette thèse, nous proposons une étude détaillée des mécanismes responsables de la perte de données dans les mémoires à changement de phase enrichies en germanium (Ge-rich PRAMs), à savoir la dérive de la résistance au cours du temps et la recristallisation de la phase amorphe. Nous commençons par une présentation du contexte dans lequel s'inscrit cette étude an donnant un aperçu rapide du marché des mémoires à semiconducteur et une comparaison des mémoires non volatiles émergentes. Les principes de fonctionnement de la technologie PRAM sont introduits, avec ses avantages, ses inconvénients, ainsi que la physique régissant le processus de cristallisation dans les matériaux à changement de phase, avant de décrire les problèmes de fiabilité qui nous intéressent.Une caractérisation électrique complète de dispositifs intégrant des alliages de GST enrichi en germanium est ensuite proposée, en commençant par la caractérisation des matériaux utilisés dans nos cellules, introduisant alors les avantages des alliages enrichis en Ge sur le GST standard. Les performances électriques des dispositifs intégrant ces matériaux sont analysées, avec une étude statistique des caractéristiques SET & RESET, de la fenêtre de programmation, de l'endurance et de la vitesse de cristallisation. Nous nous concentrons ensuite sur le thème principal de cette thèse en analysant la dérive en résistance de l'état SET de nos dispositifs Ge-rich, ainsi que les performances de rétention de l'état RESET.Dans la dernière partie, nous étudions les mécanismes physiques impliqués dans ces phénomènes en fournissant une étude détaillée de la structure des cellules, grâce à l'utilisation de la Microscopie Électronique en Transmission (MET). Les conditions et configurations expérimentales sont décrites, avant de présenter les résultats qui nous ont permis d'aller plus loin dans la compréhension de la dérive en résistance et de la recristallisation de la phase amorphe dans les dispositifs Ge-rich. Une discussion est finalement proposée, reliant les résultats des caractérisations électriques avec ceux des analyses TEM, conduisant à de nouvelles perspectives pour l'optimisation des dispositifs PRAMs
In this thesis we provide a detailed study of the mechanisms responsible for data loss in Ge-rich Ge2Sb2Te5 Phase-Change Memories, namely resistance drift over time and recrystallization of the amorphous phase. The context of this work is first presented with a rapid overview of the semiconductor memory market and a comparison of emerging non-volatile memories. The working principles of PRAM technology are introduced, together with its advantages, its drawbacks, and the physics governing the crystallization process in phase-change materials, before describing the reliability issues in which we are interested. A full electrical characterization of devices integrating germanium-enriched GST alloys is then proposed, starting with the characterization of the materials used in our PCM cells and introducing the benefits of Ge-rich GST alloys over standard GST. The electrical performances of devices integrating those materials are analyzed, with a statistical study of the SET & RESET characteristics, programming window, endurance and crystallization speed. We then focus on the main topic of this thesis by analyzing the resistance drift of the SET state of our Ge-rich devices, as well as the retention performance of the RESET state. In the last part, we investigate the physical mechanisms involved in these phenomena by providing a detailed study of the cells' structure, thanks to Transmission Electron Microscopy (TEM). The experimental conditions and setups are described before presenting the results, which allowed us to go deeper into the comprehension of the resistance drift and the recrystallization of the amorphous phase in Ge-rich devices. A discussion is finally proposed, linking the results of the electrical characterizations with the TEM analyses and leading to new perspectives for the optimization of PRAM devices.
APA, Harvard, Vancouver, ISO, and other styles
9

Bonnamy, Sylvie. "Caractérisation des produits pétroliers lors de la pyrolyse de leur fraction lourde : étude géochimique et structurale." Orléans, 1987. http://www.theses.fr/1987ORLE2027.

Full text
Abstract:
The texture and microtexture of carbonaceous precursors of varied elemental composition were studied during carbonization, from nucleation through coalescence to the end of growth of the oriented domains. In the second part, the relations between source rock and petroleum are addressed through the study of the kerogen-asphaltene filiation, from rock asphaltenes to oil asphaltenes. In the third part, the influence of biodegradation on the extent of the oriented domains of the asphaltenes is analyzed using a series of oils of increasing biodegradation. Microtextural degradation increases with the percentage of aromatic hydrocarbons released early during pyrolysis. The asphaltenes are affected by the factors that modify the total organic matter.
APA, Harvard, Vancouver, ISO, and other styles
10

Muhammad, Azhar Ranjha, and Adnan Ghalib Ahmad. "Data Analysis and Graph Presentation of Team Training Data." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-66998.

Full text
Abstract:
This report describes the presentation of team training data in the form of web-based graphs. The research examines how information stored in a database can be presented graphically on the web. Ice-Faces with an SQL database as the back-end data source is used to demonstrate the implementation of the graph system; after research and comparisons, this was found to be the most suitable graph-generating approach for the analysis of C3Fire records. Several graph models were evaluated to find the best visualization of the data, and the ones giving the clearest presentation of the results were selected. Information that was previously displayed only in tables stored in the database is now viewable in graphical format. The implementation was carried out by modifying and embedding code in the previous version of the system. The graphs are drawn from the values stored in the database and are updated dynamically as those values change. Four graph types were finally selected and implemented to show the data: pie, bar, line and clustered bar graphs, which present the data in the most readable form.
C3Fire
APA, Harvard, Vancouver, ISO, and other styles
11

Holtzapffel, Thierry. "Minéraux argileux lattes : les smectites du domaine atlantique." Angers, 1986. http://www.theses.fr/1986ANGE0006.

Full text
Abstract:
Smectites from Atlantic sediments ranging from the Upper Jurassic to the present are studied; flaky, mixed and lath-shaped particles are distinguished. The former, probably of detrital origin, have undergone no post-sedimentary modification; the latter result from early diagenetic readjustment of the former. The intensity of this readjustment, which takes place at practically constant chemical and mineralogical balances, was quantified and then compared with numerous sedimentary parameters. Three important factors emerge: the initial micropermeability of the sediment, the contact time between particles and interstitial fluids, and the composition of these fluids.
APA, Harvard, Vancouver, ISO, and other styles
12

Peralta, Veronika. "Data Quality Evaluation in Data Integration Systems." Phd thesis, Université de Versailles-Saint Quentin en Yvelines, 2006. http://tel.archives-ouvertes.fr/tel-00325139.

Full text
Abstract:
The need to access multiple data sources in a uniform way is growing every day, particularly in decision-support systems, which require a comprehensive analysis of the data. With the development of Data Integration Systems (DIS), information quality has become a first-class property increasingly demanded by users. This thesis deals with data quality in DIS. More precisely, we address the problems of evaluating the quality of the data delivered to users in response to their queries and of satisfying users' quality requirements. We also analyze how quality measures can be used to improve the design of the DIS and the quality of the data. Our approach consists of studying one quality factor at a time, analyzing its relationship with the DIS, proposing techniques for its evaluation and proposing actions for its improvement. Among the quality factors that have been proposed, this thesis analyzes two: data freshness and data accuracy. We analyze the different definitions and measures that have been proposed for data freshness and accuracy and identify the properties of the DIS that have a significant impact on their evaluation. We summarize the analysis of each factor by means of a taxonomy, which serves to compare existing work and to highlight open problems. We propose a framework that models the different elements involved in quality evaluation, such as data sources, user queries, the integration processes of the DIS, the properties of the DIS, quality measures and quality-evaluation algorithms. In particular, we model the integration processes of the DIS as workflow processes, in which the activities carry out the tasks that extract, integrate and deliver data to users. Our reasoning support for quality evaluation is a directed acyclic graph, called a quality graph, which has the same structure as the DIS and carries, as labels, the DIS properties that are relevant for quality evaluation. We develop evaluation algorithms that take as input the quality values of the source data and the properties of the DIS, and combine these values to qualify the data delivered by the DIS. They rely on the graph representation and combine the property values while traversing the graph. The evaluation algorithms can be specialized to take into account the properties that influence quality in a concrete application. The idea behind the framework is to define a flexible context that allows the evaluation algorithms to be specialized for specific application scenarios. The quality values obtained during evaluation are compared with those expected by the users. Improvement actions can be carried out if the quality requirements are not satisfied. We suggest elementary improvement actions that can be composed to improve quality in a concrete DIS. Our approach to improving data freshness consists of analyzing the DIS at different levels of abstraction so as to identify its critical points and to target improvement actions at those points.
Our approach to improving data accuracy consists of partitioning query results into portions (certain attributes, certain tuples) with homogeneous accuracy. This allows user applications to display only the most accurate data, to filter out data that does not satisfy the accuracy requirements, or to display the data in slices according to their accuracy. Compared with existing source-selection approaches, our proposal makes it possible to select the most accurate portions instead of filtering out entire sources. The main contributions of this thesis are: (1) a detailed analysis of the freshness and accuracy quality factors; (2) techniques and algorithms for evaluating and improving data freshness and accuracy; and (3) a quality-evaluation prototype usable in DIS design.
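To make the graph-based evaluation described above concrete, here is a minimal Python sketch that propagates a freshness value from the sources to a query node of a toy quality graph. The node names and the max-plus combination rule are assumptions for illustration, not the thesis's actual algorithms.

import networkx as nx

def evaluate_freshness(g: nx.DiGraph) -> dict:
    # g: DAG with node attributes 'source_age' (sources) and 'delay' (activities).
    freshness = {}
    for node in nx.topological_sort(g):
        preds = list(g.predecessors(node))
        if not preds:                                     # data source node
            freshness[node] = g.nodes[node].get("source_age", 0.0)
        else:                                             # integration activity node
            stalest = max(freshness[p] for p in preds)    # bounded by stalest input
            freshness[node] = stalest + g.nodes[node].get("delay", 0.0)
    return freshness

# Toy DIS: two sources feeding a merge activity that serves a user query.
g = nx.DiGraph()
g.add_node("S1", source_age=5.0)
g.add_node("S2", source_age=30.0)
g.add_node("merge", delay=2.0)
g.add_node("query", delay=0.5)
g.add_edges_from([("S1", "merge"), ("S2", "merge"), ("merge", "query")])
print(evaluate_freshness(g)["query"])   # 32.5: dominated by the stalest source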
APA, Harvard, Vancouver, ISO, and other styles
13

Tran, Viet-Trung. "Scalable data-management systems for Big Data." Phd thesis, École normale supérieure de Cachan - ENS Cachan, 2013. http://tel.archives-ouvertes.fr/tel-00920432.

Full text
Abstract:
Big Data can be characterized by 3 V's. Big Volume refers to the unprecedented growth in the amount of data. Big Velocity refers to the growth in the speed of moving data in and out of management systems. Big Variety refers to the growth in the number of different data formats. Managing Big Data requires fundamental changes in the architecture of data management systems. Data storage needs continued innovation in order to adapt to the growth of data; systems need to be scalable while maintaining high performance for data accesses. This thesis focuses on building scalable data management systems for Big Data. Our first and second contributions address the challenge of providing efficient support for Big Volume of data in data-intensive high performance computing (HPC) environments. In particular, we address the shortcoming of existing approaches in handling atomic, non-contiguous I/O operations in a scalable fashion. We propose and implement a versioning-based mechanism that can be leveraged to offer isolation for non-contiguous I/O without the need to perform expensive synchronizations. In the context of parallel array processing in HPC, we introduce Pyramid, a large-scale, array-oriented storage system. It revisits the physical organization of data in distributed storage systems for scalable performance. Pyramid favors multidimensional-aware data chunking, which closely matches the access patterns generated by applications. Pyramid also favors distributed metadata management and versioning concurrency control to eliminate synchronizations under concurrency. Our third contribution addresses Big Volume at the scale of geographically distributed environments. We consider BlobSeer, a distributed versioning-oriented data management service, and we propose BlobSeer-WAN, an extension of BlobSeer optimized for such geographically distributed environments. BlobSeer-WAN takes into account the latency hierarchy by favoring local metadata accesses. BlobSeer-WAN features asynchronous metadata replication and a vector-clock implementation for collision resolution. To cope with the Big Velocity characteristic of Big Data, our last contribution features DStore, an in-memory document-oriented store that scales vertically by leveraging the large memory capacity of multicore machines. DStore demonstrates fast and atomic complex transaction processing for data writing, while maintaining high-throughput read access. DStore follows a single-threaded execution model to execute update transactions sequentially, while relying on versioning concurrency control to enable a large number of simultaneous readers.
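As a toy illustration of the versioning-based concurrency control mentioned above for DStore and BlobSeer, the following Python sketch (an assumption for exposition, not the systems' code) lets a single writer publish immutable versions while readers keep working on the snapshot they started from, so reads never block on writes.

import threading

class VersionedStore:
    def __init__(self):
        self._versions = [{}]                 # list of immutable snapshots
        self._write_lock = threading.Lock()   # serialize the single writer

    def snapshot(self) -> dict:
        # Readers grab the latest published snapshot; no locking needed.
        return self._versions[-1]

    def commit(self, updates: dict) -> None:
        # Writer: build a new version (copy-on-write) and publish it atomically.
        with self._write_lock:
            new_version = dict(self._versions[-1])
            new_version.update(updates)
            self._versions.append(new_version)

store = VersionedStore()
store.commit({"doc:1": {"title": "hello"}})
view = store.snapshot()                       # a reader pins this version
store.commit({"doc:1": {"title": "hello, world"}})
print(view["doc:1"]["title"])                 # still "hello": old snapshot is immutable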
APA, Harvard, Vancouver, ISO, and other styles
14

Dierickx, Lawrence O. "Quantitative data analysis and functional testicular evaluation using PET-CT and FDG." Thesis, Toulouse 3, 2016. http://www.theses.fr/2016TOU30400.

Full text
Abstract:
Le but de cette thèse est d'évaluer l'utilisation de la TEP/CT avec le 18F-FDG pour l'évaluation de la fonction testiculaire et d'optimiser et de standardiser le protocole d'acquisition et l'analyse du volume testiculaire pour ce faire. Dans le chapitre I, nous donnons un aperçu de la littérature où nous établissons que l'absorption de 18F-FDG est corrélée avec la spermatogenèse en raison de la présence des transporteurs GLUT 3 sur les cellules de Sertoli et les spermatides et non sur les cellules de Leydig qui sont responsables de la stéroïdogenèse. Nous donnons ensuite un aperçu du problème de santé publique que pose la stérilité masculine en indiquant les différentes applications cliniques possibles de l'imagerie fonctionnelle des testicules. Dans le chapitre II, nous examinons la corrélation significative entre l'absorption de 18F-FDG en termes d'intensité et de volume d'absorption et la fonction testiculaire via les paramètres de l'analyse du sperme. Dans le chapitre III, nous nous concentrons sur la standardisation du protocole d'acquisition pour cette indication spécifique, après un bref aperçu technique de la TEP/TDM et de ses limites. La première étude ayant été réalisée par le biais d'un volume testiculaire délimité manuellement, nous avons ré-analysé la corrélation avec une méthode de segmentation adaptative solide et reproductible du volume dans un deuxième article. Nous nous sommes également concentrés sur l'optimisation du protocole d'acquisition en évaluant l'impact de l'activité urinaire intense sur l'absorption testiculaire. Nous avons d'abord examiné cet impact à l'aide d'études de fantômes dans lesquelles nous avons simulé la vessie et les testicules. Nous avons ensuite procédé à une étude clinique visant à évaluer et à comparer deux protocoles de diurétiques. Dans le chapitre IV, nous abordons le sujet important, et encore plus dans ce contexte andrologique, des problèmes liés à la radioprotection d'une TEP/CT avec le 18F-FDG. Enfin, dans le chapitre V, nous donnons un aperçu de certaines des questions qui restent à traiter et des perspectives futures de cette nouvelle orientation dans le domaine de la médecine nucléaire que nous pourrions appeler "andrologie nucléaire"
The aim of this thesis is to evaluate the use of PET/CT with 18F-FDG for assessing testicular function, and to optimise and standardise the acquisition protocol and the testicular volume analysis for that purpose. In chapter I we provide a literature overview where we establish that 18F-FDG uptake is correlated with spermatogenesis because of the presence of GLUT 3 transporters on the Sertoli cells and the spermatids, and not on the Leydig cells, which are responsible for steroidogenesis. We then provide an overview of the public health problem of male infertility, where we point out different possible clinical applications for testicular functional imaging. In chapter II we examine the significant correlation between 18F-FDG uptake, in terms of intensity and volume of uptake, and testicular function via the parameters of the sperm analysis. In chapter III, we focus on the standardisation of the acquisition protocol for this specific indication, after a brief technical overview of PET/CT and its limitations. Because the first study was done via a manually delineated testicular volume, we re-analysed the correlation with a solid and reproducible adaptive volume segmentation method in a second article. We further focussed on optimising the acquisition protocol by evaluating the impact of the intense urinary activity on the testicular uptake. First we examined this impact with phantom studies in which we simulated the bladder and the testes. We proceeded with a clinical study in which we aimed to evaluate and compare two diuretic protocols. In chapter IV we address the important subject, all the more so in this andrological context, of the radioprotection-related issues of a PET/CT with 18F-FDG. Finally, in chapter V we provide an overview of some of the issues still to be addressed and the future perspectives for this new direction in the field of nuclear medicine, which we could name 'nuclear andrology'.
APA, Harvard, Vancouver, ISO, and other styles
15

Lindelow, Ponce De Leon Malin Kristina. "Parenting in long term perspectives : modelling longitudinal data." Thesis, King's College London (University of London), 1995. https://kclpure.kcl.ac.uk/portal/en/theses/parenting-in-long-term-perspectives--modelling-longitudinal-data(250a0dba-948d-4e6a-993b-1bea4945ebf4).html.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Charfi, Manel. "Declarative approach for long-term sensor data storage." Thesis, Lyon, 2017. http://www.theses.fr/2017LYSEI081/document.

Full text
Abstract:
De nos jours, on a de plus en plus de capteurs qui ont tendance à apporter confort et facilité dans notre vie quotidienne. Ces capteurs sont faciles à déployer et à intégrer dans une variété d’applications (monitoring de bâtiments intelligents, aide à la personne,...). Ces milliers (voire millions)de capteurs sont de plus en plus envahissants et génèrent sans arrêt des masses énormes de données qu’on doit stocker et gérer pour le bon fonctionnement des applications qui en dépendent. A chaque fois qu'un capteur génère une donnée, deux dimensions sont d'un intérêt particulier : la dimension temporelle et la dimension spatiale. Ces deux dimensions permettent d'identifier l'instant de réception et la source émettrice de chaque donnée. Chaque dimension peut se voir associée à une hiérarchie de granularités qui peut varier selon le contexte d'application. Dans cette thèse, nous nous concentrons sur les applications nécessitant une conservation à long terme des données issues des flux de données capteurs. Notre approche vise à contrôler le stockage des données capteurs en ne gardant que les données jugées pertinentes selon la spécification des granularités spatio-temporelles représentatives des besoins applicatifs, afin d’améliorer l'efficacité de certaines requêtes. Notre idée clé consiste à emprunter l'approche déclarative développée pour la conception de bases de données à partir de contraintes et d'étendre les dépendances fonctionnelles avec des composantes spatiales et temporelles afin de revoir le processus classique de normalisation de schéma de base de données. Étant donné des flux de données capteurs, nous considérons à la fois les hiérarchies de granularités spatio-temporelles et les Dépendances Fonctionnelles SpatioTemporelles (DFSTs) comme objets de premier ordre pour concevoir des bases de données de capteurs compatibles avec n'importe quel SGBDR. Nous avons implémenté un prototype de cette architecture qui traite à la fois la conception de la base de données ainsi que le chargement des données. Nous avons mené des expériences avec des flux de donnés synthétiques et réels provenant de bâtiments intelligents. Nous avons comparé notre solution avec la solution de base et nous avons obtenu des résultats prometteurs en termes de performance de requêtes et d'utilisation de la mémoire. Nous avons également étudié le compromis entre la réduction des données et l'approximation des données
Nowadays, sensors are cheap, easy to deploy and immediate to integrate into applications. These thousands of sensors are increasingly invasive and are constantly generating enormous amounts of data that must be stored and managed for the proper functioning of the applications depending on them. Sensor data, in addition to being of major interest for real-time applications, e.g. building control, health supervision..., are also important for long-term reporting applications, e.g. reporting, statistics, research data... Whenever a sensor produces data, two dimensions are of particular interest: the temporal dimension, to stamp the produced value at a particular time, and the spatial dimension, to identify the location of the sensor. Both dimensions have different granularities that can be organized into hierarchies specific to the concerned application context. In this PhD thesis, we focus on applications that require long-term storage of sensor data issued from sensor data streams. Since a huge amount of sensor data can be generated, our main goal is to select only relevant data to be saved for further usage, in particular long-term query facilities. More precisely, our aim is to develop an approach that controls the storage of sensor data by keeping only the data considered relevant according to the spatial and temporal granularities representative of the application requirements. In such cases, approximating data in order to reduce the quantity of stored values enhances the efficiency of those queries. Our key idea is to borrow the declarative approach developed in the seventies for database design from constraints and to extend functional dependencies with spatial and temporal components in order to revisit the classical database schema normalization process. Given sensor data streams, we consider both spatio-temporal granularity hierarchies and Spatio-Temporal Functional Dependencies (STFDs) as first-class citizens for designing sensor databases on top of any RDBMS. We propose a specific axiomatisation of STFDs and the associated attribute closure algorithm, leading to a new normalization algorithm. We have implemented a prototype of this architecture to deal with both database design and data loading. We conducted experiments with synthetic and real-life data streams from intelligent buildings.
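As a rough illustration of how a spatio-temporal functional dependency can drive data reduction, the following Python sketch keeps a single representative reading per (room, hour) granule, mimicking the effect of a dependency room, hour -> temperature on the stored data. The granularities, names and "keep the first value" rule are hypothetical, not the thesis prototype.

from collections import OrderedDict
from datetime import datetime

readings = [
    ("room12", datetime(2017, 5, 3, 10, 15), 21.4),
    ("room12", datetime(2017, 5, 3, 10, 40), 21.6),   # same (room, hour) granule
    ("room12", datetime(2017, 5, 3, 11,  5), 22.0),
]

def reduce_to_granularity(rows):
    kept = OrderedDict()
    for room, ts, value in rows:
        # Map the timestamp to its hour granule (coarser temporal granularity).
        granule = (room, ts.replace(minute=0, second=0, microsecond=0))
        kept.setdefault(granule, value)      # first value per granule is kept
    return kept

for (room, hour), value in reduce_to_granularity(readings).items():
    print(room, hour.isoformat(), value)     # two rows survive instead of three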
APA, Harvard, Vancouver, ISO, and other styles
17

Madon, Michel. "Cellules à enclumes de diamant et microscopie électronique en transmission : étude expérimentale des transformations de phase du manteau terrestre." Paris 6, 1986. http://www.theses.fr/1986PA066123.

Full text
Abstract:
Phase changes associated with the 400 and 700 km seismic discontinuities and their consequences for mantle rheology. Study of the polymorphic transitions between the three structures (alpha, beta and gamma) that olivine can adopt, and of the decomposition of spinel into perovskite and magnesiowüstite. The study was carried out by transmission electron microscopy on samples coming from shocked meteorites or synthesized at very high pressure and temperature in a diamond-anvil cell.
APA, Harvard, Vancouver, ISO, and other styles
18

McCoy, Kenneth A. "A recirculating optical loop for short-term data storage." Diss., Georgia Institute of Technology, 1996. http://hdl.handle.net/1853/14871.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Hua, Dong. "3-level latent structure models for TCM data analysis /." View Abstract or Full-Text, 2002. http://library.ust.hk/cgi/db/thesis.pl?COMP%202002%20HUA.

Full text
Abstract:
Thesis (M. Phil.)--Hong Kong University of Science and Technology, 2002.
Includes bibliographical references (leaves 38-41). Also available in electronic version. Access restricted to campus users.
APA, Harvard, Vancouver, ISO, and other styles
20

Mestre, Adrover Miquel Angel. "Data center optical networks : short- and long-term solutions." Thesis, Evry, Institut national des télécommunications, 2016. http://www.theses.fr/2016TELE0022/document.

Full text
Abstract:
Les centres de données deviennent de plus en plus importants, allant de petites fermes de serveurs distribuées à des grandes fermes dédiées à des tâches spécifiques. La diffusion de services "dans le nuage" conduit à une augmentation incessante de la demande de trafic dans les centres de données. Dans cette thèse, nous étudions l'évolution des réseaux dans les centres de données et proposons des solutions à court et à long terme pour leur intra-connexion physique. Aujourd'hui, la croissance de la demande de trafic met en lumière la nécessité urgente d’interfaces à grande vitesse capables de faire face à la bande passante exigeant de nouvelles applications. Ainsi, à court terme, nous proposons de nouveaux transpondeurs optiques à haut débit, mais à faible coût, permettant la transmission de 200 Gb /s utilisant des schémas de modulation en intensité et à détection directe. Plusieurs types de modulations d’impulsions en amplitude avancées sont explorés, tout en augmentant la vitesse à des débits symboles allant jusqu’à 100 GBd. La génération électrique à haute vitesse est réalisé grâce à un nouveau convertisseur analogique-numérique intégré, capable de doubler les vitesses des entrées et de générer des signaux à plusieurs niveaux d’amplitude. Cependant, le trafic continuera sa croissance. Les centres de données actuels reposent sur plusieurs niveaux de commutateurs électroniques pour construire un réseau d'interconnexion capable de supporter une telle grande quantité de trafic. Dans une telle architecture, la croissance du trafic est directement liée à une augmentation du nombre des composants du réseau, y-compris les commutateurs avec plus de ports, les interfaces et les câbles. Le coût et la consommation d'énergie qui peut être attendus à l'avenir est intenable, ce qui appelle à une réévaluation du réseau. Par conséquent, nous présentons ensuite un nouveau concept fondé sur la commutation de "slots" optiques (Burst Optical Slot Switching, i.e. BOSS) dans lequel les serveurs sont connectés via des nœuds BOSS à travers des anneaux de fibres multiplexé en longueur d'onde et en temps, et organisés dans une topologie en tore. Au cours de cette thèse, nous étudions la mise en œuvre des nœuds BOSS; en particulier, la matrice de commutation et les transpondeurs optiques. L'élément principal au sein de la matrice de commutation est le bloqueur de slots, qui est capable d'effacer n’importe quel paquet (slot) sur n’importe quelle longueur d'onde en quelques nanosecondes seulement. D'une part, nous explorons l'utilisation d'amplificateurs optiques à semi-conducteurs comme portes optiques à utiliser dans le bloqueur des slots, et étudier leur cascade. D'autre part, nous développons un bloqueur de slots intégré monolithiquement capable de gérer jusqu'à seize longueurs d'onde avec la diversité de polarisation. Ensuite, nous présentons plusieurs architectures de transpondeur et nous étudions leur performance. La signalisation des transpondeurs doit répondre à deux exigences principales: le fonctionnement en mode paquet et la résistance au filtrage serré. D'abord, nous utilisons des transpondeurs élastiques qui utilisent des modulations Nyquist N-QAM, et qui adaptent le format de modulation en fonction du nombre de nœuds à traverser. Ensuite, nous proposons l'utilisation du multiplexage par répartition orthogonale de la fréquence en cohérence optique (CO-OFDM). 
Avec une structure de paquet inhérente et leur grande adaptabilité fréquentielle, nous démontrons que les transpondeurs CO-OFDM offrent une capacité plus élevée et une meilleure portée que leurs homologues Nyquist. Finalement, nous comparons notre solution BOSS avec la topologie Clos replié utilisée aujourd'hui. Nous montrons que notre architecture BOSS nécessite 400 fois moins de transpondeurs et de câbles que les réseaux de commutation électronique d'aujourd'hui, ce qui ouvre la voie à des centres de données hautement évolutifs et durables
Data centers are becoming increasingly important and ubiquitous, ranging from large server farms dedicated to various tasks such as data processing, computing, data storage or the combination thereof, to small distributed server farms. The spread of cloud services is driving a relentless increase of traffic demand in datacenters, which is doubling every 12 to 15 months. Along this thesis we study the evolution of data center networks and present short- and long-term solutions for their physical intra-connection. Today, rapidly-growing traffic in data centers spotlights the urgent need for high-speed, low-cost interfaces capable of coping with the bandwidth demands of new applications. Thereby, in the short term we propose novel high-datarate, low-cost optical transceivers enabling up to 200 Gb/s transmission using intensity-modulation and direct-detection schemes. Several advanced pulse amplitude modulation schemes are explored while increasing speeds towards record symbol rates, as high as 100 GBd. High-speed electrical signaling is enabled by an integrated selector-power digital-to-analog converter, capable of doubling input baud rates while outputting advanced multi-level pulse amplitude modulations. Notwithstanding, data centers' global traffic will continue increasing incessantly. Current datacenters rely on high-radix all-electronic Ethernet switches to build an interconnecting network capable of coping with such a vast amount of traffic. In such an architecture, traffic growth directly relates to an increase in networking components, including switches with higher port counts, interfaces and cables. The unsustainable cost and energy consumption that can be expected in the future call for a network reassessment. Therefore, we subsequently present a novel concept for intra-datacenter networks called burst optical slot switching (BOSS), in which servers are connected via BOSS nodes through wavelength- and time-division multiplexed fiber rings organized in a Torus topology. Along this thesis we investigate the implementation of BOSS nodes; in particular, the switching fabric and the optical transceivers. The main element within the switching fabric is the slot blocker, which is capable of erasing any packet of any wavelength on a nanosecond time-scale. On the one hand, we explore the use of semiconductor optical amplifiers as gating elements to be used within the slot blocker and study their cascadability. On the other hand, we develop a monolithically integrated slot blocker capable of handling up to sixteen wavelength channels with dual-polarization diversity. Then we present several transceiver architectures and study their performance. Transceivers' signaling needs to fulfill two main requirements: packet-mode operation, i.e. being capable of recovering bursts a few microseconds long; and resiliency to tight filtering, which occurs when cascading many nodes (e.g. up to 100). First we build packet-mode Nyquist-pulse-shaped N-QAM transceivers, which adapt the modulation format as a function of the number of nodes to traverse. Later we propose the use of coherent-optical orthogonal frequency division multiplexing (CO-OFDM). With an inherent packet structure and high spectral tailoring capabilities, we demonstrate that CO-OFDM-based transceivers offer higher capacity and enhanced reach than their Nyquist counterparts.
Finally, we compare our BOSS solution to today's Folded Clos topology, and show that our BOSS architecture requires 400 times fewer transponders and cables than today's electronic switching networks, which paves the way to highly scalable and sustainable datacenters.
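For readers unfamiliar with multi-level intensity modulation, the following Python sketch shows a Gray-coded mapping of bits onto the four amplitude levels of a PAM-4 signal, the kind of scheme discussed above. The mapping table and function names are illustrative assumptions, not the transceiver implementation from the thesis.

import numpy as np

GRAY_PAM4 = {(0, 0): -3, (0, 1): -1, (1, 1): +1, (1, 0): +3}   # assumed Gray mapping

def bits_to_pam4(bits: np.ndarray) -> np.ndarray:
    # bits: flat array of 0/1 with even length -> PAM-4 symbol amplitudes.
    pairs = bits.reshape(-1, 2)
    return np.array([GRAY_PAM4[tuple(p)] for p in pairs], dtype=float)

symbols = bits_to_pam4(np.array([0, 0, 0, 1, 1, 1, 1, 0]))
print(symbols)   # [-3. -1.  1.  3.] -> two bits per symbol doubles the datarate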
APA, Harvard, Vancouver, ISO, and other styles
21

Barreto-Munoz, Armando. "Multi-Sensor Vegetation Index and Land Surface Phenology Earth Science Data Records in Support of Global Change Studies: Data Quality Challenges and Data Explorer System." Diss., The University of Arizona, 2013. http://hdl.handle.net/10150/301661.

Full text
Abstract:
Synoptic global remote sensing provides a multitude of land surface state variables. The continuous collection, for more than 30 years, of global observations has contributed to the creation of a unique, long-term satellite imagery archive from different sensors. These records have become an invaluable source of data for many environmental and global change related studies. The problem, however, is that they are not readily available for use in research and application environments and require multiple preprocessing steps. Here, we looked at the daily global data records from the Advanced Very High Resolution Radiometer (AVHRR) and the Moderate Resolution Imaging Spectroradiometer (MODIS), two of the most widely available and used datasets, with the objective of assessing their quality and suitability to support studies dealing with global trends and changes at the land surface. Findings show that clouds are the major data quality inhibitors, and that the MODIS cloud masking algorithm performs better than the AVHRR one. Results show that areas of high ecological importance, like the Amazon, are most prone to lack of data due to cloud cover and aerosols, leading to extended periods of time, sometimes months, with no useful data. While the standard approach to these challenges has been compositing of daily images to generate a representative map over preset time periods, our results indicate that preset compositing is not the optimal solution and that a hybrid, location-dependent method that preserves the high frequency of these observations over areas where clouds are not as prevalent works better. Using this data quality information, the Vegetation Index and Phenology (VIP) Laboratory at The University of Arizona produced over 30 years of a seamless, sensor-independent record of vegetation indices and land surface phenology metrics. These data records consist of 0.05-degree resolution global images at daily, 7-day, 15-day and monthly temporal frequencies. Remote sensing based products of this sort are normally made available through the internet by large data centers, like the Land Processes Distributed Active Archive Center (LP DAAC); in this project, however, an online tool, the VIP Data Explorer, was developed to support the visualization, exploration, and distribution of these Earth Science Data Records (ESDRs), keeping them closer to the data generation center, which provides a more active data support and distribution model. This web application has made it possible for users to explore and evaluate the product suite before downloading and using it.
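As a simplified illustration of the per-pixel compositing discussed above, the following Python sketch (an assumption for exposition, not the VIP production code) builds a maximum-value composite from a stack of daily vegetation-index images while skipping cloudy observations.

import numpy as np

def composite(ndvi_stack: np.ndarray, cloud_mask: np.ndarray) -> np.ndarray:
    # ndvi_stack: (days, rows, cols) daily NDVI values.
    # cloud_mask: same shape, True where the pixel is cloudy that day.
    # Returns the per-pixel maximum cloud-free NDVI; NaN where no clear view exists.
    masked = np.where(cloud_mask, np.nan, ndvi_stack)
    return np.nanmax(masked, axis=0)

days, rows, cols = 15, 4, 4
rng = np.random.default_rng(0)
stack = rng.uniform(0.0, 0.9, size=(days, rows, cols))
clouds = rng.random((days, rows, cols)) < 0.6       # heavy but not total cloud cover
print(composite(stack, clouds))                     # one clear-sky value per pixel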
APA, Harvard, Vancouver, ISO, and other styles
22

MacGibbon, David George. "An investigation into the effects of perceptions of person-team fit during online recruitment; and the uses of clickstream data associated with this medium." Thesis, University of Canterbury. Psychology, 2012. http://hdl.handle.net/10092/7007.

Full text
Abstract:
Given the increasing predominance of work teams within organisations, this study aimed to investigate the role that perceptions of person-team fit has in the recruitment process, in addition to other forms of person-environment fit. An experimental design was followed which manipulated the amount of team information made available to participants. It was hypothesised that participants who received more information would exhibit higher perceptions of person-team fit. Results supported this prediction with levels of person-team fit being successfully manipulated. Results also showed significant correlations between person-team fit and organisational attraction which is important in the early stages of recruitment. This study was conducted remotely over the internet with clickstream data associated with this medium being collected. It was hypothesised that viewing order and times may be related to dependent variables. No support for this prediction was found, however it did identify a group of participants that appeared not to engage in the task, which has implications for future research carried out online.
APA, Harvard, Vancouver, ISO, and other styles
23

Nguyen, Benjamin. "Privacy-Centric Data Management." Habilitation à diriger des recherches, Université de Versailles-Saint Quentin en Yvelines, 2013. http://tel.archives-ouvertes.fr/tel-00936130.

Full text
Abstract:
This document will focus on my core computer science research since 2010, covering the topic of data management and privacy. More specifically, I will present the following topics: (1) A new paradigm, called Trusted Cells, for privacy-centric personal data management based on the Asymmetric Architecture composed of trusted or open (low power) distributed hardware devices acting as personal data servers and a highly powerful, highly available supporting server, such as a cloud (Chapter 2). (2) Adapting aggregate data computation techniques to the Trusted Cells environment, with the example of Privacy-Preserving Data Publishing (Chapter 3). (3) Minimizing the data that leaves a Trusted Cell, i.e. enforcing the general privacy principle of Limited Data Collection (Chapter 4). This document contains only results that have already been published. As such, rather than focus on the details and technicalities of each result, I have tried to provide an easy way to have a global understanding of the context behind the work, explain the problematic of the work, and give a summary of the main scientific results and impact.
APA, Harvard, Vancouver, ISO, and other styles
24

Olson, Julius, and Emma Södergren. "Long Term Memory in Conversational Robots." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-260316.

Full text
Abstract:
This study discusses an implementation of a long-term memory in the robot Furhat. The idea was to find a way to prevent identical and very similar questions from being asked several times, and to store the information about which questions have already been asked in a document database. The project uses tf-idf, as well as a small-scale test with Word2Vec, to find a vector representation of all questions from Furhat's database, and then clusters these questions with the k-means method. The tests resulted in high scores on all the evaluation metrics used, which is promising for implementation in the actual Furhat robot, as well as for further research on similar implementations of long-term memory functions in chatbots.
This report addresses the implementation of a long-term memory in the robot Furhat. The idea behind this memory was to prevent the robot from being repetitive and asking overly similar or identical questions to a conversation partner. The project includes the use of tf-idf, as well as initial experiments with word2vec, to create vector representations of the dialogue system's questions, and the clustering of these representations with the k-means algorithm. The tests carried out yielded good results, which is promising for the implementation of a similar mechanism in Furhat's dialogue system and for future research on long-term memory functionality in chatbots in general.
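As a rough illustration of the pipeline described in the abstract above (tf-idf vectorisation of stored questions followed by k-means clustering to flag near-duplicates), a minimal sketch is given below. It assumes scikit-learn, an invented list of example questions and an arbitrary cluster count; it is not the Furhat implementation itself.

```python
# Hedged sketch: tf-idf + k-means to group near-identical questions, so a
# dialogue system can avoid re-asking a question from an already-used cluster.
# The question list and cluster count are illustrative assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

questions = [
    "What is your favourite food?",
    "Which food do you like the most?",
    "Where did you grow up?",
    "In which city did you grow up?",
    "Do you have any pets?",
]

vectorizer = TfidfVectorizer(lowercase=True, stop_words="english")
X = vectorizer.fit_transform(questions)          # sparse tf-idf matrix

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
for q, label in zip(questions, kmeans.labels_):
    print(label, q)

# A long-term memory could then store the cluster label of every question
# already asked, and skip new questions that fall into a stored cluster.
new_q = vectorizer.transform(["What food do you like best?"])
print("cluster of new question:", kmeans.predict(new_q)[0])
```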
APA, Harvard, Vancouver, ISO, and other styles
25

Wang, Tingting. "Multi-agent team competitions and the implementation of a team-strategy." HKBU Institutional Repository, 2006. http://repository.hkbu.edu.hk/etd_ra/772.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Galland, Alban. "Distributed data management with access control : social Networks and Data of the Web." Phd thesis, Université Paris Sud - Paris XI, 2011. http://tel.archives-ouvertes.fr/tel-00640725.

Full text
Abstract:
The amount of information on the Web is spreading very rapidly. Users as well as companies bring data to the network and are willing to share with others. They quickly reach a situation where their information is hosted on many machines they own and on a large number of autonomous systems where they have accounts. Management of all this information is rapidly becoming beyond human expertise. We introduce WebdamExchange, a novel distributed knowledge-base model that includes logical statements for specifying information, access control, secrets, distribution, and knowledge about other peers. These statements can be communicated, replicated, queried, and updated, while keeping track of time and provenance. The resulting knowledge guides distributed data management. The WebdamExchange model is based on WebdamLog, a new rule-based language for distributed data management that combines, in a formal setting, deductive rules as in Datalog with negation (to specify intensional data) and active rules as in Datalog (for updates and communications). The model provides a novel setting with a strong emphasis on dynamicity and interactions (in a Web 2.0 style). Because the model is powerful, it provides a clean basis for the specification of complex distributed applications. Because it is simple, it provides a formal framework for studying many facets of the problem such as distribution, concurrency, and expressivity in the context of distributed autonomous peers. We also discuss an implementation of a proof-of-concept system that handles all the components of the knowledge base, and experiments with a lighter system designed for smartphones. We believe that these contributions are a good foundation to overcome the problems of Web data management, in particular with respect to access control.
APA, Harvard, Vancouver, ISO, and other styles
27

López, Segovia Lucas. "Survival data analysis with heavy-censoring and long-term survivors." Doctoral thesis, Universitat Politècnica de Catalunya, 2014. http://hdl.handle.net/10803/276170.

Full text
Abstract:
The research developed in this thesis has been motivated by two datasets, which are introduced in Chapter 2: one concerning the mortality of calves from birth to weaning, while the other refers to the survival of patients diagnosed with melanoma. In both cases the percentage of censoring is high, it is very likely that immune individuals are present, and a proper analysis accounting for the possibility of a non-negligible proportion of cured individuals has to be performed. Cure models are introduced in Chapter 3, together with the software available to perform the analysis, such as SAS, R and STATA, among others. We investigate the effect that heavy censoring could have on the estimation of the regression coefficients in the Cox model via a simulation study which considers several scenarios given by different sample sizes and censoring levels; the results are presented in Chapter 4. An application of a mixture cure model, which includes a Cox model for the survival part and a logistic model for the cure part, to patients with melanoma is described in Chapter 5. In addition, discussions about the test for sufficient follow-up and about censoring levels are also presented for these data. The data analysis is carried out using the SAS macro PSPMCM. The results show that patients with negative Sentinel Lymph Node (SLN) biopsy status, Clark's level of invasion I-III, histopathological subtype Superficial Spreading Melanoma (SSM), younger than 46 years, and female, are more likely to be cured, whereas patients with melanoma in the head and neck, Breslow's micrometric depth ≥ 4 mm, and ulceration present, are patients with increased risk of relapse. In particular, patients with Breslow's micrometric depth ≥ 4 mm are at higher risk of death. Furthermore, since mixture cure models do not have the proportional hazards property for the entire population, they can be extended to non-mixture cure models by means of the nonlinear transformation models defined in Tsodikov (2003). An application of the extended hazard models to the mortality of calves is presented in Chapter 6. The methodology allows estimates to be obtained for the cure rate as well as for genetic and environmental effects for each herd. A relevant feature of the non-mixture cure models is that they model, separately, the factors which could affect survival from those affecting the cure part, making the interpretation of these models relatively easy. Results are shown in Section 6.3.1, and were obtained using the library NLTM of the statistical package R. The short-term (mortality) and long-term (survivors) effects are determined for each factor, as well as their statistical significance in each herd. For example, in herd 1 we find that calving month and difficulty at birth form the set of statistically significant factors for the non-susceptible (long-term survivors) proportion. Calves born in the period March-August have a lower probability of surviving than those born in September-February, and the probability of surviving is much lower for those that have difficulties at calving in herd 1. For herd 7 the effect of difficulty at calving is different from herd 1: here only the strongly assisted category is significant. Calves born from strongly assisted calvings have a lower probability of surviving than calves from calvings without assistance. Regarding short-term (mortality) effects, we only find statistically significant predictors in herd 7, where the risk of death of calves born from older mothers, hence with a longer reproductive life, is twice the risk of death of calves born from younger mothers. The obtained results have been compared with those coming from standard survival models. A discussion is also included about the likely erroneous conclusions that may result from standard models that do not take the cure into account.
The research developed in this thesis was motivated by two datasets, introduced in Chapter 2: one related to the mortality of calves from birth to weaning, the other to the survival of patients diagnosed with melanoma. In both, the percentage of censoring is high, the presence of immune individuals is likely, and a model that takes this non-negligible proportion of immune individuals into account is the most appropriate for the analysis. Mixture cure models are introduced in Chapter 3, together with the software available to perform the analysis, such as SAS, R and STATA, among others. We investigate the effect that heavy censoring could have on the estimation of the regression coefficients in the Cox model, via simulation studies for several scenarios given by different sample sizes and censoring levels; the results are presented in Chapter 4. The application of a mixture cure model, which includes a Cox model for the survival part and a logistic model for the cure part of the patients with melanoma, is described in Chapter 5. Discussions about the test for sufficient follow-up and censoring levels are presented. The analysis is carried out using the SAS macro PSPMCM. The results show that patients with negative Sentinel Lymph Node (SLN) biopsy, Clark's level of invasion I-III, histopathological subtype of malignant melanoma Superficial Spreading Melanoma (SSM), younger than 46 years, and female, are more likely to be cured, whereas patients with melanoma in the head or neck, Breslow micrometric depth greater than or equal to 4 mm, and ulceration present, are at higher risk of relapse. In particular, patients with Breslow micrometric depth greater than or equal to 4 mm are at risk of death. Moreover, since mixture cure models do not have the proportional hazards property for the whole population, they can be extended to non-mixture cure models via the non-linear transformation models defined in Tsodikov (2003). An application of the extended hazard models to the calf mortality data is presented in Chapter 6. The methodology allows estimates of the cure proportion to be obtained, as well as the effects of genetic and environmental factors for each herd. A relevant feature of non-mixture cure models is that they model separately the factors that could affect survival from those that affect the cure model, and their interpretation is relatively easy. The results are shown in Section 6.3.1 and were obtained using the NLTM library of the statistical package R. The short-term (mortality) and long-term (survivors) effects are determined for each factor, as well as their statistical significance in each herd. For example, in herd 1 we find that calving month and difficulty at birth are statistically significant for the non-susceptible proportion (long-term survivors). Calves born in the period March-August have a lower probability of surviving than those born in September-February, and the probability of surviving is much lower for those with difficulties at calving. For herd 7 the effect of difficulty at calving is different from herd 1: only the strongly assisted category is significant. Calves from strongly assisted calvings have a lower probability of surviving than those from unassisted calvings. Regarding the short-term (mortality) effects, we only find statistically significant predictors in herd 7, where the risk of death of calves born from mothers with a long reproductive life is twice the risk of death of calves born from younger mothers. A discussion is included on the erroneous conclusions that may be drawn from standard models if the cure is not taken into account.
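For readers unfamiliar with the mixture cure model referred to in this abstract (a logistic model for the cure part and a Cox model for the survival part), a standard textbook formulation is sketched below; the notation is generic and is not taken from the thesis itself.

```latex
S(t \mid \mathbf{x}, \mathbf{z}) \;=\; \pi(\mathbf{z}) + \bigl(1 - \pi(\mathbf{z})\bigr)\, S_u(t \mid \mathbf{x}),
\qquad
\pi(\mathbf{z}) = \frac{1}{1 + \exp(-\boldsymbol{\gamma}^{\top}\mathbf{z})},
\qquad
S_u(t \mid \mathbf{x}) = S_0(t)^{\exp(\boldsymbol{\beta}^{\top}\mathbf{x})}
```

Here π(z) is the probability of being cured (a long-term survivor), S_u is the survival function of the susceptible individuals with baseline S_0(t), and x and z are covariate vectors for the survival and cure parts respectively.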
APA, Harvard, Vancouver, ISO, and other styles
28

Ellis, Craig, of Western Sydney Macarthur University, and Faculty of Business and Technology. "An investigation of long-term dependence in time-series data." THESIS_FBT_XXX_Ellis_C.xml, 1998. http://handle.uws.edu.au:8081/1959.7/242.

Full text
Abstract:
Traditional models of financial asset yields are based on a number of simplifying assumptions. Among these are the primary assumptions that changes in asset yields are independent, and that the distribution of these yields is approximately normal. The development of financial asset pricing models has also incorporated these assumptions. A general feature of the pricing models is that the relationship between the model variables is fundamentally linear. Recent empirical research has however identified the possibility for these relations to be non-linear. The empirical research focused primarily on methodological issues relating to the application of the classical rescaled adjusted range. Some of the major issues investigated were: the use of overlapping versus contiguous subseries lengths in the calculation of the statistic's Hurst exponent; the asymptotic distribution of the Hurst exponent for Gaussian time-series and long-term dependent fBm's; matters pertaining to the estimation of the expected rescaled adjusted range. Empirical research in this thesis also considered alternate applications of rescaled range analysis, other than modelling non-linear long-term dependence. Issues relating to the use of the technique for estimating long-term dependent ARFIMA processes, and some implications of long-term dependence for financial time-series have both been investigated. Overall, the general shape of the asymptotic distribution of the Hurst exponent has been shown to be invariant to the level of dependence in the underlying series. While the rescaled adjusted range is a biased indicator of the level of long-term dependence in simulated time-series, it was found that the bias could be efficiently modelled. For real time-series containing structured short-term dependence, the bias was shown to be inconsistent with the simulated results.
Doctor of Philosophy (PhD)
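As a hedged illustration of the classical rescaled adjusted range discussed in the abstract above, the sketch below estimates a Hurst exponent from the slope of log(R/S) against log(n) over contiguous subseries; the window sizes and the white-noise test series are illustrative assumptions, and the bias modelling discussed in the thesis is omitted.

```python
# Hedged sketch: classical rescaled adjusted range (R/S) and a naive Hurst
# exponent estimate.  Not the thesis's exact procedure; no bias correction.
import numpy as np

def rescaled_range(x):
    """R/S statistic of one subseries."""
    y = x - x.mean()
    z = np.cumsum(y)                      # cumulative mean-adjusted deviations
    r = z.max() - z.min()                 # adjusted range
    s = x.std(ddof=0)                     # standard deviation
    return r / s if s > 0 else np.nan

def hurst_rs(series, window_sizes):
    logs_n, logs_rs = [], []
    for n in window_sizes:
        # contiguous, non-overlapping subseries of length n
        rs_vals = [rescaled_range(series[i:i + n])
                   for i in range(0, len(series) - n + 1, n)]
        logs_n.append(np.log(n))
        logs_rs.append(np.log(np.nanmean(rs_vals)))
    slope, _ = np.polyfit(logs_n, logs_rs, 1)   # the Hurst exponent is the slope
    return slope

rng = np.random.default_rng(1)
white_noise = rng.standard_normal(4096)          # independent increments
print("H (white noise) ~", round(hurst_rs(white_noise, [16, 32, 64, 128, 256, 512]), 2))
# For an independent Gaussian series the estimate should lie near 0.5.
```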
APA, Harvard, Vancouver, ISO, and other styles
29

Bams, Wilhelmus Fransiscus Maria. "The term structure of interest rates a panel data analysis /." [Maastricht : Maastricht : Universiteit Maastricht] ; University Library, Maastricht University [Host], 1999. http://arno.unimaas.nl/show.cgi?fid=6719.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Ellis, Craig. "An investigation of long-term dependence in time-series data /." View thesis, 1998. http://library.uws.edu.au/adt-NUWS/public/adt-NUWS20030723.150913/index.html.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Gražulis, Saulius, Andryus Merkys, Antanas Vaitkus, Cédric Duée, Nicolas Maubec, Valérie Laperche, Laure Capar, et al. "Efficient long-term open-access data archiving in mining industries." TU Bergakademie Freiberg, 2017. https://tubaf.qucosa.de/id/qucosa%3A23193.

Full text
Abstract:
Efficient data collection, analysis and preservation are needed to accomplish adequate business decision making. Long-lasting and sustainable business operations, such as mining, add extra requirements to this process: data must be reliably preserved over periods that are longer than that of a typical software life-cycle. These concerns are of special importance for the combined on-line-on-mine-real-time expert system SOLSA (http://www.solsa-mining.eu/) that will produce data not only for immediate industrial utilization, but also for possible scientific reuse. We thus applied the experience of scientific data publishing to provide efficient, reliable, long-term archival data storage. Crystallography, a field covering one of the methods used in the SOLSA expert system, has long traditions of archiving and disseminating crystallographic data. To that end, the Crystallographic Interchange Framework (CIF, [1]) was developed and is maintained by the International Union of Crystallography (IUCr). This framework provides rich means for describing crystal structures and crystallographic experiments in an unambiguous, human- and machine-readable way, in a standard that is independent of the underlying data storage technology. The Crystallography Open Database (COD, [2]) has been successfully using the CIF framework to maintain its open-access crystallographic data collection for over a decade [3,4]. Since the CIF framework is extensible, it is possible to use it for other branches of knowledge. The SOLSA system will generate data using different methods of material identification: XRF, XRD, Raman, IR and DRIFT spectroscopy. For XRD, the CIF is usable out-of-the-box, since we can rely on extensive data definition dictionaries (ontologies) developed by the IUCr and the crystallographic community. For spectroscopic techniques such dictionaries, to our best knowledge, do not exist; thus, the SOLSA team is developing CIF dictionaries for spectroscopic techniques to be used in the SOLSA expert system. All dictionaries will be published under a liberal license, and communities are encouraged to join the development, and to reuse and extend the dictionaries where necessary. These dictionaries will enable access to open data generated by SOLSA by all interested parties. The use of the common CIF framework will ensure smooth data exchange among SOLSA partners and seamless data publication from the SOLSA project.
APA, Harvard, Vancouver, ISO, and other styles
32

Tuyishimire, Emmanuel. "Cooperative data muling using a team of unmanned aerial vehicles." University of the Western Cape, 2019. http://hdl.handle.net/11394/7067.

Full text
Abstract:
Philosophiae Doctor - PhD
Unmanned Aerial Vehicles (UAVs) have recently offered significant technological achievements. The advancement in related applications predicts an extended need for automated data muling by UAVs, to explore high-risk places, ensure efficiency and reduce the cost of various products and services. Due to advances in technology, UAVs are not as expensive as they once were. On the other hand, they are limited in their flight time, especially if they have to use fuel. As a result, it has recently been proposed that they could be assisted by ground static sensors which provide information about their surroundings. The UAVs then need only to take actions depending on the information received from the ground sensors. In addition, UAVs need to cooperate among themselves and work together with organised ground sensors to achieve optimal coverage. The system to handle the cooperation of UAVs, together with the ground sensors, is still an interesting research topic which would benefit both rural and urban areas. In this thesis, an efficient ground sensor network for optimal UAV coverage is first proposed. This is done using a clustering scheme wherein each cluster member transmits its sensor readings to its cluster head. A more efficient routing scheme for delivering readings to cluster head(s) for collection by UAVs is also proposed. Furthermore, airborne sensor deployment models are provided for efficient data collection from a unique sensor/target. The model proposed for this consists of a scheduling technique which manages the visits of UAVs to the target. Lastly, issues relating to the interplay between both types of sensor networks (airborne and ground/underground) are addressed by proposing optimal UAV task allocation models, which cater for both the ground networking and the aerial deployment. Existing network and traffic engineering techniques were adopted in order to handle the internetworking of the ground sensors. UAV deployment is addressed by adopting Operational Research techniques including dynamic assignment and scheduling models. The proposed models were validated by simulations, experiments and, in some cases, formal methods used to formalise and prove the correctness of key properties.
APA, Harvard, Vancouver, ISO, and other styles
33

Hansson, Johan. "Detection of Long Term Vibration Deviations in GasTurbine Monitoring Data." Thesis, Linköpings universitet, Fordonssystem, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-170266.

Full text
Abstract:
Condition-based monitoring is today essential for any machine manufacturer to be able to detect and predict faults in their machine fleet. This reduces the maintenance cost and also reduces machine downtime. In this master's thesis two approaches are evaluated to detect long-term vibration deviations, also called vibration anomalies, in Siemens gas turbines of type SGT-800. The first is a simple rule-based approach where a series of CUSUM tests are applied to several signals in order to check if a vibration anomaly has occurred. The second approach uses three common machine learning anomaly detection algorithms to detect these vibration anomalies. The machine learning algorithms evaluated are k-means clustering, Isolation Forest and One-class SVM. This master's thesis concludes that these vibration anomalies can be detected both with these ML models and with the rule-based model, with different levels of success. A set of features that was the most important for detection of vibration anomalies was also obtained. This thesis also presents which of these models are best suited for anomaly detection and would be the most appropriate for Siemens to implement.
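A minimal sketch of the kind of one-sided CUSUM test mentioned in the abstract above is shown below; the reference level, allowance and decision threshold are illustrative assumptions rather than values used for the SGT-800 signals.

```python
# Hedged sketch: one-sided (upper) CUSUM test on a vibration-like signal.
# mu0 is the in-control mean, k the allowance, h the decision threshold;
# all three values, and the synthetic signal, are illustrative assumptions.
import numpy as np

def cusum_upper(signal, mu0, k, h):
    s, alarms = 0.0, []
    for t, x in enumerate(signal):
        s = max(0.0, s + (x - mu0 - k))   # accumulate positive deviations
        if s > h:
            alarms.append(t)
            s = 0.0                       # restart after an alarm
    return alarms

rng = np.random.default_rng(0)
signal = rng.normal(1.0, 0.1, 1000)       # nominal vibration level
signal[600:] += 0.3                       # slow long-term deviation begins
print("first alarms:", cusum_upper(signal, mu0=1.0, k=0.05, h=1.0)[:3])
```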
APA, Harvard, Vancouver, ISO, and other styles
34

Gražulis, Saulius, Andryus Merkys, Antanas Vaitkus, Cédric Duée, Nicolas Maubec, Valérie Laperche, Laure Capar, et al. "Efficient long-term open-access data archiving in mining industries." Technische Universitaet Bergakademie Freiberg Universitaetsbibliothek "Georgius Agricola", 2018. http://nbn-resolving.de/urn:nbn:de:bsz:105-qucosa-231338.

Full text
Abstract:
Efficient data collection, analysis and preservation are needed to accomplish adequate business decision making. Long-lasting and sustainable business operations, such as mining, add extra requirements to this process: data must be reliably preserved over periods that are longer than that of a typical software life-cycle. These concerns are of special importance for the combined on-line-on-mine-real-time expert system SOLSA (http://www.solsa-mining.eu/) that will produce data not only for immediate industrial utilization, but also for possible scientific reuse. We thus applied the experience of scientific data publishing to provide efficient, reliable, long-term archival data storage. Crystallography, a field covering one of the methods used in the SOLSA expert system, has long traditions of archiving and disseminating crystallographic data. To that end, the Crystallographic Interchange Framework (CIF, [1]) was developed and is maintained by the International Union of Crystallography (IUCr). This framework provides rich means for describing crystal structures and crystallographic experiments in an unambiguous, human- and machine-readable way, in a standard that is independent of the underlying data storage technology. The Crystallography Open Database (COD, [2]) has been successfully using the CIF framework to maintain its open-access crystallographic data collection for over a decade [3,4]. Since the CIF framework is extensible, it is possible to use it for other branches of knowledge. The SOLSA system will generate data using different methods of material identification: XRF, XRD, Raman, IR and DRIFT spectroscopy. For XRD, the CIF is usable out-of-the-box, since we can rely on extensive data definition dictionaries (ontologies) developed by the IUCr and the crystallographic community. For spectroscopic techniques such dictionaries, to our best knowledge, do not exist; thus, the SOLSA team is developing CIF dictionaries for spectroscopic techniques to be used in the SOLSA expert system. All dictionaries will be published under a liberal license, and communities are encouraged to join the development, and to reuse and extend the dictionaries where necessary. These dictionaries will enable access to open data generated by SOLSA by all interested parties. The use of the common CIF framework will ensure smooth data exchange among SOLSA partners and seamless data publication from the SOLSA project.
APA, Harvard, Vancouver, ISO, and other styles
35

Buffenbarger, Lauren. "Ethics in Data Science: Implementing a Harm Prevention Framework." University of Cincinnati / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1623166419961692.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Gomes, Ricardo Rafael Baptista. "Long-term biosignals visualization and processing." Master's thesis, Faculdade de Ciências e Tecnologia, 2011. http://hdl.handle.net/10362/7979.

Full text
Abstract:
Thesis submitted in the fulfillment of the requirements for the Degree of Master in Biomedical Engineering
Long-term biosignal acquisitions are an important source of information about a patient's state and its evolution. However, long-term biosignal monitoring involves managing extremely large datasets, which makes signal visualization and processing a complex task. To overcome these problems, a new data structure to manage long-term biosignals was developed. Based on this new data structure, dedicated tools for long-term biosignal visualization and processing were implemented. A multilevel visualization tool for any type of biosignal, based on subsampling, is presented, focused on four representative signal parameters (mean, maximum, minimum and standard deviation error). The visualization tool enables an overview of the entire signal and a more detailed visualization of specific parts which we want to highlight, allowing a user-friendly interaction that leads to easier signal exploration. The "map" and "reduce" concept is also exposed for long-term biosignal processing. A processing tool (ECG peak detection) was adapted for long-term biosignals. In order to test the developed algorithm, long-term biosignal acquisitions (approximately 8 hours each) were carried out. The visualization tool has proven to be faster than the standard methods, allowing fast navigation over the different visualization levels of biosignals. Regarding the developed processing algorithm, it detected the peaks of long-term ECG signals in less time than the non-parallel processing algorithm. The non-specific characteristics of the new data structure and visualization tool, and the speed improvement in signal processing introduced by these algorithms, make them powerful tools for long-term biosignal visualization and processing.
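A hedged sketch of the multilevel subsampling idea described in the abstract above, summarising each window of a long signal by its mean, maximum, minimum and standard deviation so that coarser levels can be drawn quickly, is given below; the sampling rate, window sizes and synthetic signal are illustrative assumptions.

```python
# Hedged sketch: per-window summary statistics for multilevel visualization of
# a long biosignal.  Each level keeps only mean/max/min/std per window.
import numpy as np

def summarize(signal, window):
    n = len(signal) // window
    w = signal[:n * window].reshape(n, window)
    return {
        "mean": w.mean(axis=1),
        "max": w.max(axis=1),
        "min": w.min(axis=1),
        "std": w.std(axis=1),
    }

fs = 100                                     # assumed sampling rate, Hz
t = np.arange(3600 * fs) / fs                # one hour of samples for illustration
signal = np.sin(2 * np.pi * 1.2 * t) + 0.1 * np.random.default_rng(0).standard_normal(t.size)

overview = summarize(signal, window=60 * fs)     # one summary point per minute
detail = summarize(signal, window=fs)            # one summary point per second
print(len(overview["mean"]), "overview points;", len(detail["mean"]), "detail points")
```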
APA, Harvard, Vancouver, ISO, and other styles
37

Bonnevie, Rodrigue. "Long-Term Exploration in Unknown Dynamic Environments." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-289219.

Full text
Abstract:
In order for autonomous robots to perform tasks and safely navigate environments they need a reliable and detailed map. These maps are generally created by the robot itself, since maps with the required level of detail rarely exist beforehand. In order to create that map the robot has to explore an unknown environment. Such activity is referred to as autonomous exploration within the field of robotics. Most research done on autonomous exploration assumes a static environment. Since most real-world environments change over time, an exploration algorithm that is able to re-explore areas where changes may occur is of interest for autonomous long-term missions. This thesis presents a method to predict where changes may occur in the environment using Markov chains and an occupancy grid map. An exploration algorithm is also developed with the aim of keeping an updated map of a changing environment. The exploration algorithm is based on a static exploration algorithm that uses RRT* to sample poses and evaluates these poses based on the length of the path to get there and the information gain at, and on the path to, the sampled pose. An evaluation of both the mapping and the exploration is made. The mapping is evaluated on its ability to suppress noisy measurements whilst being able to accurately model the dynamics of the map. The exploration algorithm is evaluated in three different environments of increasing complexity. Its ability to seek out areas susceptible to change whilst providing data for the mapping is evaluated in each environment. The results show both a mapping and an exploration algorithm that work well but are noise-sensitive.
In order for autonomous robots to perform actions and reliably navigate their surroundings, they need a reliable and detailed map. These maps are usually generated by the robot itself, since maps that fulfil these requirements rarely exist. To create these maps, the robot needs to be able to explore unknown environments and map them. This is called autonomous exploration in mobile robotics. Most of the research done in this area assumes that the environment is static and does not change, but since this is rarely the case in reality, exploration and re-exploration of dynamic environments is of interest, especially for robots that are to be active in the same environment for a long time. This report presents a method for predicting changes in an environment using Markov chains and occupancy grid maps, together with an exploration algorithm whose goal is to keep an updated version of a dynamic environment. The exploration algorithm is based on one adapted for a static environment that uses RRT* to select positions and orientations for the robot. These are evaluated based on the distance to get there and the new information that can be observed there and on the way there. An evaluation of both the mapping and the exploration is made. The mapping is evaluated on its ability to handle noise in the measurements while retaining a good representation of the dynamic aspects of the environment. The exploration algorithm is tested in three different environments of varying complexity. Its ability to detect and explore areas with an increased probability of change, while providing the mapping with data to model the environment and its dynamics, is what is evaluated in the experiments. The results show that both the mapping and the exploration algorithm work well but both are sensitive to measurement noise.
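To make the Markov-chain occupancy idea in the abstract above concrete, the sketch below maintains, for a single grid cell, a two-state (free/occupied) transition matrix estimated from an observed history and uses it to predict the probability of occupancy a few steps ahead; the observation history and the Laplace prior are illustrative assumptions, not the thesis's mapping pipeline.

```python
# Hedged sketch: a two-state Markov chain (free=0, occupied=1) for one grid cell.
# Transition probabilities are estimated from counted state changes and used
# to propagate the occupancy belief forward in time.
import numpy as np

observations = [0, 0, 1, 1, 1, 0, 0, 1, 0, 0, 0, 1]   # illustrative cell history

counts = np.ones((2, 2))                     # Laplace prior on transitions
for a, b in zip(observations[:-1], observations[1:]):
    counts[a, b] += 1
T = counts / counts.sum(axis=1, keepdims=True)   # row-stochastic transition matrix

belief = np.array([0.0, 1.0])                # cell last observed occupied
for step in range(1, 6):
    belief = belief @ T                      # propagate one time step
    print(f"P(occupied) after {step} steps: {belief[1]:.2f}")
```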
APA, Harvard, Vancouver, ISO, and other styles
38

Martins, Vidal. "Data Replication in P2P Systems." Phd thesis, Université de Nantes, 2007. http://tel.archives-ouvertes.fr/tel-00481828.

Full text
Abstract:
This thesis deals with data replication in peer-to-peer (P2P) systems. It is motivated by the growing importance of distributed collaborative applications and their specific requirements in terms of data replication, data consistency, scalability, and high availability. Using a P2P Wiki as an example, we show that the replication requirements for collaborative applications are: a high level of autonomy, multi-master replication, semantic-based conflict detection and resolution, eventual consistency among replicas, weak network assumptions, and independence from data types. Although optimistic replication addresses most of these requirements, existing solutions are of little use for P2P networks since they are centralized or do not take network limitations into account. On the other hand, existing P2P replication solutions do not meet all these requirements simultaneously. In particular, none of them provides eventual consistency among replicas under weak network assumptions. This thesis aims to provide a highly available and scalable reconciliation solution for P2P collaborative applications by developing a reconciliation protocol that ensures eventual consistency among replicas and takes data access costs into account. This objective is accomplished in five steps. First, we present existing solutions for optimistic replication and P2P replication strategies and analyse their advantages and drawbacks. This analysis allows us to identify the functionalities and properties that our solution must provide. In a second step, we design a replication service for the APPA system (Atlas Peer-to-Peer Architecture). Third, we develop an algorithm for distributed semantic reconciliation called DSR, which can be executed in different distributed environments (e.g. cluster, grid, or P2P). In a fourth step, we evolve DSR into a reconciliation protocol for P2P networks called P2P-reconciler. Finally, the fifth step produces a new version of P2P-reconciler, called P2P-reconciler-TA, which exploits topology-aware P2P networks in order to improve reconciliation performance. We validated our solutions and evaluated their performance through experimentation and simulation. The results showed that our replication solution provides high availability and excellent scalability, with acceptable performance and limited overhead.
APA, Harvard, Vancouver, ISO, and other styles
39

Lafuente, Martinez Cristina. "Essays on long-term unemployment in Spain." Thesis, University of Edinburgh, 2018. http://hdl.handle.net/1842/31085.

Full text
Abstract:
This thesis comprises three essays relating to long-term unemployment in Spain. The first chapter is a methodological analysis of the main dataset that is used throughout the thesis. The second and third chapters provide two applications of the dataset for the study of long-term unemployment. The methodology in these chapters can be easily adapted to study unemployment in other countries.

Chapter 1. On the use of administrative data for the study of unemployment. Social security administrative data are increasingly becoming available in many countries. These are very attractive data as they have a long panel structure (large N, large T) and allow many different variables to be measured with higher precision. Because of their nature they can capture aspects that are usually hidden due to the design or timing of survey data. However, administrative data are not ready to be used for labour market research, especially studies involving unemployment. The main reason is that administrative data only capture the registered unemployed, and in some cases only those receiving unemployment benefits. The gap between total unemployment and registered unemployment is not constant across either worker characteristics or time. In this paper I augment Spanish Social Security administrative data by adding missing unemployment spells using information from the institutional framework. I compare the resulting unemployment rate to that of the Labour Force Survey, showing that both are comparable and thus the administrative dataset is useful for labour market research. I also explore how the administrative data can be used to study some important aspects of the labour market that the Labour Force Survey cannot capture. Administrative data can also be used to overcome some of the problems of the Labour Force Survey, such as changes in the structure of the survey. This paper aims to provide a comprehensive guide on how to adapt administrative datasets to make them useful for studying unemployment.

Chapter 2. Unemployment Duration Variance Decomposition à la ABS: Evidence from Spain. Existing studies of unemployment duration typically use self-reported information from labour force surveys. We revisit this question using precise information on spells from administrative data. We follow the recent method proposed by Alvarez, Borovickova and Shimer (2015) for estimating the different components of the duration of unemployment using administrative data, which they applied to Austria. In this paper we apply the same method (the ABS method hereafter) to Spain using Spanish Social Security data. Administrative data have many advantages compared to Labour Force Survey data, but we note that they have some incompleteness that needs to be addressed in order to use the data for unemployment analysis (e.g., unemployed workers who run out of unemployment insurance have no labour market status in the data). The degree and nature of such incompleteness are country-specific and particularly important in Spain. Following Chapter 1, we deal with these data issues in a systematic way by using information from the Spanish LFS data as well as institutional information. We hope that our approach will provide a useful way to apply the ABS method in other countries. Our findings are: (i) the unemployment decomposition is quite similar in Austria and Spain, especially when minimizing the effect of fixed-term contracts in Spain; (ii) the constant component is the most important one, while (total) heterogeneity and duration dependence are roughly comparable; (iii) we do not find big differences in the contribution of the different components along the business cycle.

Chapter 3. Search Capital and Unemployment Duration. I propose a novel mechanism called search capital to explain long-term unemployment patterns across different ages: workers who have been successful in finding jobs in the recent past become more efficient at finding jobs in the present. Search ability increases with search experience and depreciates with tenure if workers do not search often enough. This leaves young workers (who have not gained enough search experience) and older workers in a disadvantaged position, making them more likely to suffer long-term unemployment. I focus on the case of Spain, as its dual labour market structure favours the identification of search capital. I provide empirical evidence that search capital affects unemployment duration and wages at the individual level. Then I propose a search model with search capital and calibrate it using Spanish administrative data. The addition of search capital helps the model match the dynamics of unemployment and job finding rates in the data, especially for younger workers.
APA, Harvard, Vancouver, ISO, and other styles
40

Agnew, G. E., and P. B. Baker. "Pesticide Use in Arizona Cotton: Long-Term Trends and 1999 Data." College of Agriculture, University of Arizona (Tucson, AZ), 2000. http://hdl.handle.net/10150/197492.

Full text
Abstract:
Arizona pesticide use, as reported on the Department of Agriculture's form 1080, can be summarized to provide a rich picture of pest management in Arizona cotton. Limitations in the pesticide use reporting system complicate the process but do not undermine results. Overall pesticide use decreased over the period 1991 to 1998 despite a peak during the whitefly infestation of 1995. Decreases in insecticide use are responsible for most of the reduction in pesticide use. Recently released 1999 data indicates that reductions continued. Comparison of the composition of pesticide applications between 1995 and 1998 reflect the changes in pest control efforts. A new "target pest" category on the 1080 provides an even richer picture of pest management practices in Arizona cotton.
APA, Harvard, Vancouver, ISO, and other styles
41

Wong, Kin-yau, and 黃堅祐. "Analysis of interval-censored failure time data with long-term survivors." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2012. http://hub.hku.hk/bib/B48199473.

Full text
Abstract:
Failure time data analysis, or survival analysis, is involved in various research fields, such as medicine and public health. One basic assumption in standard survival analysis is that every individual in the study population will eventually experience the event of interest. However, this assumption is usually violated in practice, for example when the variable of interest is the time to relapse of a curable disease, resulting in the existence of long-term survivors. Also, the presence of unobservable risk factors in the group of susceptible individuals may introduce heterogeneity to the population, which is not properly addressed in standard survival models. Moreover, the individuals in the population may be grouped in clusters, where there are associations among observations from a cluster. There are methodologies in the literature to address each of these problems, but there is yet no natural and satisfactory way to accommodate the coexistence of a non-susceptible group and the heterogeneity in the susceptible group under a univariate setting. Also, various kinds of associations among survival data with a cure are not properly accommodated. To address the above-mentioned problems, a class of models is introduced to model univariate and multivariate data with long-term survivors. A semiparametric cure model for univariate failure time data with long-term survivors is introduced. It accommodates a proportion of non-susceptible individuals and the heterogeneity in the susceptible group using a compound-Poisson distributed random effect term, which is commonly called a frailty. It is a frailty-Cox model which does not place any parametric assumption on the baseline hazard function. An estimation method using multiple imputation is proposed for right-censored data, and the method is naturally extended to accommodate interval-censored data. The univariate cure model is extended to a multivariate setting by introducing correlations among the compound-Poisson frailties for individuals from the same cluster. This multivariate cure model is similar to a shared frailty model where the degree of association among each pair of observations in a cluster is the same. The model is further extended to accommodate repeated measurements from a single individual, leading to serially correlated observations. Similar estimation methods using multiple imputation are developed for the multivariate models. The univariate model is applied to a breast cancer dataset and the multivariate models are applied to the hypobaric decompression sickness data from the National Aeronautics and Space Administration, although the methodologies are applicable to a wide range of data sets.
Statistics and Actuarial Science
Master
Master of Philosophy
APA, Harvard, Vancouver, ISO, and other styles
42

Gendriz, Ignacio Sánchez. "A methodology for analyzing data from long-term passive acoustic monitoring." Universidade de São Paulo, 2017. http://www.teses.usp.br/teses/disponiveis/3/3152/tde-26062017-145831/.

Full text
Abstract:
Despite the extent of the Brazilian coast, little is known about underwater acoustic environments in Brazil. Acoustic environments (or soundscapes) are composed of biological, geological and man-made sound sources. Soundscapes are strongly linked to ecosystem dynamics, and follow temporal patterns that can vary at daily and seasonal scales. Thus, for soundscape characterization, it is necessary to undertake sound recordings over long periods, which demands innovative analysis methods. Accordingly, the present research focuses on two principal objectives: (1) to develop methods for analyzing long-term acoustic recordings and (2) to characterize marine soundscapes at selected points in São Paulo State. Four deployment sites were selected for the underwater acoustic monitoring: a point located at the channel entrance of the Santos Harbor, and three marine Protected Areas (PAs) in São Paulo state. As a result, the largest underwater acoustic database from Brazilian seas was acquired. The present work used Power Spectral Density (PSD), Sound Pressure Level (SPL) and spectrograms to develop an innovative methodology for analyzing long-term acoustic data. In addition, a new visualization tool and a method for automatic detection of dawn and dusk choruses are presented. The achieved results validated the proposed methodology as an effective tool for analyzing long-term acoustic data. The area close to the first site, in the vicinity of the Santos Harbor, was dominated by ship noise, whose levels can affect some species of fish and marine mammals. The soundscapes of the three remaining measurement sites were dominated by fish and crustacean choruses, with daily and seasonal patterns (related to sunrise and sunset). For the monitored regions, the present work represents the first contribution to cataloguing fish choruses, and establishes a baseline for the study of their underwater acoustic environment. Although the proposed methodology used long-term undersea acoustic datasets as a case study, it can also be extended to monitoring other aquatic or terrestrial ecosystems. Finally, the research indicates to Brazilian environmental agencies and to the related scientific community that passive acoustic monitoring is a noninvasive and cost-effective tool that can be used for the management of PAs and points of economic relevance.
Despite the wide area of the Brazilian seas, little is known about underwater soundscapes in Brazil. These soundscapes are composed of sounds of biological, geological and man-made origin. Soundscapes are strongly linked to ecosystem dynamics, showing daily and seasonal temporal patterns. Characterizing soundscapes requires sound recordings over long periods of time, which demands innovative analysis methods. In this sense, the present research has two main objectives: (1) to develop methods for the analysis of long-duration acoustic recordings, and (2) to characterize the soundscape of the coast of the state of São Paulo. Four collection points were selected for passive acoustic monitoring: one located in the entrance channel of the Port of Santos and the other three in marine protected areas (MPAs) of the state of São Paulo. As a result, the most extensive database of underwater sounds from Brazilian seas was obtained. From the point of view of data analysis, the present work is based on the calculation of Power Spectral Density, Sound Pressure Levels and Spectrograms, deriving novel analysis methods from traditional techniques. In this context, the thesis presents a tool for the visualization of acoustic data and a method for the automatic detection of morning and evening biological choruses. The results obtained allowed the effectiveness of the proposed methods in the description and analysis of long-duration acoustic data to be validated. The acoustic environment near the Port of Santos was dominated by vessel noise, reaching sound levels capable of affecting some species of fish and marine mammals. The soundscapes of the three remaining points were dominated by fish and crustacean choruses, with daily and seasonal patterns (related to sunrise and sunset). The present work constitutes the first research cataloguing fish choruses and establishing a reference for the study of the acoustic environment of the monitored regions. Although the presented methods used underwater sound data as a case study, their application can be extended to the monitoring of other aquatic or terrestrial environments. Finally, the research shows Brazilian environmental agencies that passive acoustic monitoring is an effective tool for the management and monitoring of protected areas and points of economic relevance.
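As a rough illustration of the descriptors mentioned in this abstract (Power Spectral Density and broadband Sound Pressure Level), the sketch below processes a synthetic hydrophone snippet with SciPy; the sampling rate, the assumed calibration to micropascals and the reference pressure are illustrative assumptions, not the thesis's processing chain.

```python
# Hedged sketch: Welch PSD and broadband SPL (dB re 1 µPa) for one recording
# snippet.  The signal, calibration and sampling rate are illustrative.
import numpy as np
from scipy.signal import welch

fs = 48000                                      # assumed sampling rate, Hz
rng = np.random.default_rng(0)
pressure = 1e5 * rng.standard_normal(10 * fs)   # 10 s of noise, assumed already in µPa

f, psd = welch(pressure, fs=fs, nperseg=fs)     # PSD in µPa²/Hz, ~1 Hz resolution
p_ref = 1.0                                     # 1 µPa reference pressure
spl = 10 * np.log10(np.mean(pressure ** 2) / p_ref ** 2)
print(f"broadband SPL: {spl:.1f} dB re 1 µPa")
print(f"PSD at 1 kHz: {10 * np.log10(psd[np.argmin(np.abs(f - 1000))]):.1f} dB re 1 µPa²/Hz")
```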
APA, Harvard, Vancouver, ISO, and other styles
43

Norouzi, Mehdi. "Tracking Long-Term Changes in Bridges using Multivariate Correlational Data Analysis." University of Cincinnati / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1416570591.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Kratchman, Jessica. "Predicting Chronic Non-Cancer Toxicity Levels from Short-Term Toxicity Data." Thesis, The George Washington University, 2017. http://pqdtopen.proquest.com/#viewpdf?dispub=10263969.

Full text
Abstract:

This dissertation includes three separate but related studies performed in partial fulfillment of the requirements for the degree of Doctor of Public Health in Environmental and Occupational Health. The main goal of this dissertation was to develop and assess quantitative relationships for predicting doses associated with chronic non-cancer toxicity levels in situations where there is an absence of chronic toxicity data, and to consider the applications of these findings to chemical substitution decisions. Data from National Toxicology Program (NTP) Technical Reports (TRs) (and where applicable Toxicity Reports), which detail the results of both short-term and chronic rodent toxicity tests, have been extracted and modeled using the Environmental Protection Agency’s (EPA’s) Benchmark Dose Software (BMDS). Best-fit minimum benchmark doses (BMDs) and benchmark dose lower limits (BMDLs) were determined. Endpoints of interest included non-neoplastic lesions, final mean body weights and mean organ weights. All endpoints were identified by NTP pathologists in the abstract of the TRs as either statistically or biologically significant. A total of 41 chemicals tested between 2000 and 2012 were included, with over 1700 endpoints for short-term (13 week) and chronic (2 year) exposures.

Non-cancer endpoints were the focus of this research. Chronic rodent bioassays have been used in many methodologies for predicting the carcinogenic potential of chemicals in humans (1). However, there appears to be less emphasis on non-cancer endpoints. Further, it has been shown in the literature that there is little concordance in cancerous endpoints between humans and rodents (2). The first study, Quantitative Relationship of Non-Cancer Benchmark Doses in Short-Term and Chronic Rodent Bioassays (Chapter 2), investigated quantitative relationships between non-cancer chronic and short-term toxicity levels using best-fit modeling results and orthogonal regression techniques. The findings indicate that short-term toxicity studies reasonably provide a quantitative estimate of minimum (and median) chronic non-cancer BMDs and BMDLs.
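A hedged sketch of an orthogonal regression of chronic on short-term benchmark doses, the kind of quantitative relationship described above, is shown below using a total-least-squares fit on log-transformed doses; the dose pairs are invented for illustration and are not the NTP data.

```python
# Hedged sketch: orthogonal (total least squares) regression of log chronic
# BMDs on log short-term BMDs.  The dose pairs are invented illustration data.
import numpy as np

short_term = np.array([3.0, 10.0, 30.0, 100.0, 300.0])   # mg/kg-day (invented)
chronic = np.array([1.0, 2.5, 9.0, 33.0, 90.0])          # mg/kg-day (invented)

x, y = np.log10(short_term), np.log10(chronic)
X = np.column_stack([x - x.mean(), y - y.mean()])
# The TLS line direction is the right singular vector associated with the
# largest singular value of the centered data; the slope follows from it.
_, _, vt = np.linalg.svd(X, full_matrices=False)
dx, dy = vt[0]
slope = dy / dx
intercept = y.mean() - slope * x.mean()
print(f"log10(chronic BMD) ~ {slope:.2f} * log10(short-term BMD) + {intercept:.2f}")
```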

The next study, Assessing Implicit Assumptions in Toxicity Testing Guidelines (Chapter 3), assessed the most sensitive species and species-sex combinations associated with the best-fit minimum BMDL10 for the 41 chemicals. The findings indicate that species and species-sex sensitivity for this group of chemicals is not uniform and that rats are significantly more sensitive than mice for non-cancerous outcomes. There are also indications that male rats may be more sensitive than the other species-sex groups in certain instances.

The third and final study, Comparing Human Health Toxicity of Alternative Chemicals (Chapter 4), considered two pairs of target and alternative chemicals. A target is the chemical of concern and the alternative is the suggested substitution. The alternative chemical lacked chronic toxicity data, whereas the target had well studied non-cancer health effects. Using the quantitative relationships established in Chapter 2, Quantitative Relationship of Non-Cancer Benchmark Doses in Short-Term and Chronic Rodent Bioassays, chronic health effect levels were predicted for the alternative chemicals and compared to known points of departure (PODs) for the targets. The findings indicate some alternatives can lead to chemical exposures potentially more toxic than the target chemical.

APA, Harvard, Vancouver, ISO, and other styles
45

Chilvers, Alison H. "Managing long-term access to digital data objects : a metadata approach." Thesis, Loughborough University, 2000. https://dspace.lboro.ac.uk/2134/7239.

Full text
Abstract:
As society becomes increasingly reliant on information technology for data exchange and long-term data storage, the need for a system of data management to document and provide access to the 'societal memory' is becoming imperative. An examination of both the literature and current 'best practice' underlines the absence to date of a proven universal conceptual basis for digital data preservation. The examination of differences in nature and sources of origin between traditional 'print-based' and digital objects leads to a re-appraisal of current practices of data selection and preservation. The need to embrace past, present and future metadata developments in a rapidly changing environment is considered. Various hypotheses were formulated and supported regarding: the similarities and differences required in selection criteria for different types of Digital Data Objects (DDOs), the ability to define universal threshold standards for a framework of metadata for digital data preservation, and the role of selection criteria in such a framework. The research uses Soft Systems Methodology to investigate the potential of the metadata concept as the key to universal data management. Semi-structured interviews were conducted to explore the attitudes of information professionals in the United Kingdom towards the challenges facing information-dependent organisations attempting to preserve digital data over the long term. In particular, the interviews explored the nature of DDOs being encountered by stakeholders; the reasons, policies, and procedures for preserving them; and a range of specific issues such as the role of metadata, access to DDOs, and rights management of DDOs. The societal need for selection to ensure efficient long-term access is considered. Drawing on SSM modelling, this research develops a flexible, long-term management framework for digital data at a level higher than metadata, with selection as an essential component. The framework's conceptual feasibility has been examined from both financial and societal benefit perspectives, together with the recognition of constraints. The super-metadata framework provides a possible systematic approach to managing a wide range of digital data in a variety of formats, created/owned by a spectrum of information-dependent organisations.
APA, Harvard, Vancouver, ISO, and other styles
46

Singh, Akash. "Anomaly Detection for Temporal Data using Long Short-Term Memory (LSTM)." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-215723.

Full text
Abstract:
We explore the use of Long short-term memory (LSTM) for anomaly detection in temporal data. Due to the challenges in obtaining labeled anomaly datasets, an unsupervised approach is employed. We train recurrent neural networks (RNNs) with LSTM units to learn the normal time series patterns and predict future values. The resulting prediction errors are modeled to give anomaly scores. We investigate different ways of maintaining LSTM state, and the effect of using a fixed number of time steps on LSTM prediction and detection performance. LSTMs are also compared to feed-forward neural networks with fixed size time windows over inputs. Our experiments, with three real-world datasets, show that while LSTM RNNs are suitable for general purpose time series modeling and anomaly detection, maintaining LSTM state is crucial for getting desired results. Moreover, LSTMs may not be required at all for simple time series.
We explore Long short-term memory (LSTM) for anomaly detection in time series data. Due to the difficulty of finding labelled data, an unsupervised approach has been used. We train recurrent neural networks (RNNs) with LSTM units to teach the model the normal time series pattern and to predict future values. We investigate different ways of maintaining the LSTM state and the effects of using a fixed number of time steps on LSTM prediction and anomaly detection performance. LSTMs are also compared with ordinary neural networks with fixed time windows over the input. Our experiments with three real-world datasets show that although LSTM RNNs are applicable to general time series modelling and anomaly detection, maintaining the LSTM state is crucial for obtaining the desired results. Moreover, it is not necessary to use LSTMs for simple time series.
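A hedged sketch of the general approach in the abstract above (predict future values with an LSTM, model the prediction errors, and score anomalies from standardized errors) is shown below; the library choice (Keras), the network size, the synthetic series and the injected anomaly are all illustrative assumptions, not the thesis configuration.

```python
# Hedged sketch: unsupervised LSTM anomaly detection on a univariate series.
# Train on "normal" data, fit a Gaussian to validation prediction errors,
# then use the squared standardized error as an anomaly score.
import numpy as np
from tensorflow import keras

def make_windows(series, n_steps):
    X, y = [], []
    for i in range(len(series) - n_steps):
        X.append(series[i:i + n_steps])
        y.append(series[i + n_steps])
    return np.array(X)[..., None], np.array(y)

rng = np.random.default_rng(0)
series = np.sin(np.linspace(0, 60, 3000)) + 0.05 * rng.standard_normal(3000)
series[2500:2510] += 1.5                       # injected anomaly for illustration
n_steps = 30
X, y = make_windows(series, n_steps)

model = keras.Sequential([
    keras.layers.Input(shape=(n_steps, 1)),
    keras.layers.LSTM(32),
    keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X[:2000], y[:2000], epochs=5, batch_size=64, verbose=0)

err_val = y[2000:2400] - model.predict(X[2000:2400], verbose=0).ravel()
mu, sigma = err_val.mean(), err_val.std()
err_all = y - model.predict(X, verbose=0).ravel()
scores = ((err_all - mu) / sigma) ** 2
print("top anomaly indices:", np.argsort(scores)[-5:] + n_steps)
```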
APA, Harvard, Vancouver, ISO, and other styles
47

Habarulema, John Bosco. "A contribution to TEC modelling over Southern Africa using GPS data." Thesis, Rhodes University, 2010. http://hdl.handle.net/10962/d1005241.

Full text
Abstract:
Modelling ionospheric total electron content (TEC) is an important area of interest for radio wave propagation, geodesy, surveying, the understanding of space weather dynamics and error correction in relation to Global Navigation Satellite System (GNSS) applications. With the utilisation of improved ionosonde technology coupled with the use of GNSS, the response of technological systems to changes in the ionosphere during both quiet and disturbed conditions can be historically inferred. TEC values are usually derived from GNSS measurements using mathematically intensive algorithms. However, the techniques used to estimate these TEC values depend heavily on the availability of near-real-time GNSS data, and are therefore sometimes unable to generate complete datasets. This thesis investigated possibilities for the modelling of TEC values derived from the South African Global Positioning System (GPS) receiver network using linear regression methods and artificial neural networks (NNs). GPS TEC values were derived using the Adjusted Spherical Harmonic Analysis (ASHA) algorithm. Considering TEC and the factors that influence its variability as "dependent and independent variables" respectively, the capabilities of linear regression methods and NNs for TEC modelling were first investigated using a small dataset from two GPS receiver stations. NN and regression models were separately developed and used to reproduce TEC fluctuations at different stations not included in the models' development. For this purpose, TEC was modelled as a function of diurnal variation, seasonal variation, and solar and magnetic activities. Comparative analysis showed that NN models provide predictions of GPS TEC that were an improvement on those predicted by the regression models developed. A separate study to empirically investigate the effects of solar wind on GPS TEC was carried out. Quantitative results indicated that solar wind does not have a significant influence on TEC variability. The final TEC simulation model developed makes use of the NN technique to find the relationship between historical TEC data variations and factors that are known to influence TEC variability (such as solar and magnetic activities, diurnal and seasonal variations and the geographical locations of the respective GPS stations) for the purposes of regional TEC modelling and mapping. The NN technique in conjunction with interpolation and extrapolation methods makes it possible to construct ionospheric TEC maps and to analyse the spatial and temporal TEC behaviour over Southern Africa. For independent validation, modelled TEC values were compared to ionosonde TEC and to International Reference Ionosphere (IRI) generated TEC values during both quiet and disturbed conditions. This thesis provides a comprehensive guide on the development of TEC models for predicting ionospheric variability over the South African region, and forms a significant contribution to ionospheric modelling efforts in Africa.
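A hedged sketch of the kind of input encoding described in the abstract above (diurnal and seasonal variation expressed as sine/cosine pairs, plus solar and magnetic activity indices and station coordinates, fed to a small neural network) is given below; the network size, feature set and synthetic data are illustrative assumptions, not the model developed in the thesis.

```python
# Hedged sketch: a small neural network mapping (hour, day-of-year, solar index,
# magnetic index, latitude, longitude) to TEC.  All data are synthetic placeholders.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n = 2000
hour = rng.uniform(0, 24, n)
doy = rng.uniform(1, 366, n)
f107 = rng.uniform(70, 200, n)            # assumed solar activity proxy
dst = rng.uniform(-100, 20, n)            # assumed magnetic activity proxy
lat = rng.uniform(-35, -22, n)            # southern Africa latitudes (illustrative)
lon = rng.uniform(16, 33, n)

# Cyclic encoding of diurnal and seasonal variation.
X = np.column_stack([
    np.sin(2 * np.pi * hour / 24), np.cos(2 * np.pi * hour / 24),
    np.sin(2 * np.pi * doy / 365.25), np.cos(2 * np.pi * doy / 365.25),
    f107, dst, lat, lon,
])
# Synthetic "TEC" target just so the example runs end to end.
tec = 20 + 10 * np.sin(2 * np.pi * hour / 24) + 0.1 * f107 + rng.normal(0, 1, n)

model = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=2000, random_state=0)
model.fit(X, tec)
print("training R^2:", round(model.score(X, tec), 3))
```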
APA, Harvard, Vancouver, ISO, and other styles
48

Pillu, Hugo. "Knowledge flows through patent citation data." Phd thesis, Ecole Centrale Paris, 2009. http://tel.archives-ouvertes.fr/tel-00458678.

Full text
Abstract:
In this thesis, we analyse the different aspects of knowledge spillovers and how patent citations can be used as an indicator of these flows. The first part of the thesis reviews the traditional literature on knowledge spillovers, both qualitatively and quantitatively (the quantitative review takes the form of a meta-analysis). We emphasise the consequences of using different channels to measure these spillovers; specifically, we examine the underlying assumptions and their implications for empirical estimates. This point is important because these channels are the main source of heterogeneity in the empirical results. In the second part, we explore patent and patent citation data that have so far received little attention (the data are extracted from the Patstat database for the patent offices of the G5 countries, the EPO and WIPO). This analysis is again carried out in both qualitative and quantitative terms. The third part first examines empirically the characteristics of knowledge flows between and within inventors of the G5 countries for 13 industrial sectors. It then proposes and validates a knowledge stock indicator that accounts for international knowledge spillovers. This indicator is particularly useful because traditional indicators, such as those based on R&D expenditure, are not always available. Finally, the indicator is applied to a case study of the determinants of innovation in energy-efficient technologies.
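A rough sketch of the perpetual-inventory idea behind such a knowledge stock indicator is shown below; the depreciation rate, the citation weighting and the variable names are illustrative assumptions, not the construction used in the thesis.

    def knowledge_stock(domestic_patents, foreign_patents, citation_share, delta=0.15):
        # domestic_patents, foreign_patents: yearly patent counts;
        # citation_share: yearly fraction of domestic citations pointing to the foreign
        # patent pool, used here to weight the international spillover component
        stock, stocks = 0.0, []
        for d, f, w in zip(domestic_patents, foreign_patents, citation_share):
            stock = (1 - delta) * stock + d + w * f  # depreciate, then add new knowledge
            stocks.append(stock)
        return stocks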
APA, Harvard, Vancouver, ISO, and other styles
49

Peng, Cheng-shuang 1963. "Dynamic operation of a reservoir system with discontinuous and short-term data." Diss., The University of Arizona, 1998. http://hdl.handle.net/10150/282798.

Full text
Abstract:
The objective of this study is to develop a practical mathematical model to determine optimal operating rules for the reservoir system of the West Branch Penobscot River in the State of Maine, USA. The system is composed of five major lakes and its operation has three objectives. Hydrological data are not available in winter for the four upstream lakes because they freeze over, and the length of the flow record is less than 25 years. Dynamic programming (DP) has been used extensively for solving reservoir operation problems. One major drawback of DP for multiple-reservoir operation is the "curse of dimensionality". Many variations of the original DP have been proposed to ease this problem, for example incremental DP, discrete differential DP, differential DP, gradient DP, and spline DP. Instead of a DP approach, this study proposes a nonlinear programming (NLP) approach to solve the multi-reservoir system. NLP has been developed extensively in the field of operations research but is not yet widely used in reservoir operations. A distinct advantage of an NLP model is that it avoids the dimensionality problem because it solves the problem directly, without discretizing the decision variables. To use the NLP approach, a real-time operation model is first specified. A multivariate first-order autoregressive model is then used to generate a large number of future inflow sequences, and the MINOS software package is used to optimize the problem for each inflow sequence. MINOS can be embedded seamlessly in the simulation process and can solve the problems without requiring starting values for the variables. The number of runs in a simulation is determined by a statistical model, which shows that 500 runs are sufficient. Finally, the expected values and standard deviations of the decision variables are tabulated and their distributions are plotted. The proposed real-time operation model runs once every month. An information-updating scheme is embedded in the simulation and optimization models: for each month, the synthetic streamflows are updated to reflect the most recent hydrological conditions, and the objective function and constraints can be modified if the situation of the system changes.
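The simulation-optimisation loop can be pictured with the sketch below, which substitutes SciPy's general-purpose NLP solver for MINOS and a univariate AR(1) generator for the multivariate streamflow model; every numerical value in it is an illustrative assumption rather than a property of the Penobscot system.

    import numpy as np
    from scipy.optimize import minimize

    def generate_inflows(n_months, mean, phi, sigma, rng):
        # univariate first-order autoregressive stand-in for the multivariate model
        q = np.empty(n_months)
        q[0] = mean
        for t in range(1, n_months):
            q[t] = mean + phi * (q[t - 1] - mean) + rng.normal(0, sigma)
        return np.maximum(q, 0.0)

    def optimal_releases(inflows, s0, s_max, demand):
        n = len(inflows)

        def objective(r):
            # penalise deviations of monthly releases from the target demand
            return np.sum((r - demand) ** 2)

        def storage(r):
            return s0 + np.cumsum(inflows - r)

        cons = [
            {"type": "ineq", "fun": lambda r: storage(r)},          # storage stays non-negative
            {"type": "ineq", "fun": lambda r: s_max - storage(r)},  # storage stays below capacity
        ]
        res = minimize(objective, x0=np.full(n, demand),
                       bounds=[(0, None)] * n, constraints=cons)
        return res.x

    # simulate many inflow sequences, optimise each, then summarise the decisions
    rng = np.random.default_rng(0)
    runs = np.array([optimal_releases(generate_inflows(12, 100.0, 0.5, 20.0, rng),
                                      s0=500.0, s_max=1000.0, demand=90.0)
                     for _ in range(500)])
    print(runs.mean(axis=0), runs.std(axis=0))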
APA, Harvard, Vancouver, ISO, and other styles
50

Ye, Qing, and 叶青. "Short-term traffic speed forecasting based on data recorded at irregular intervals." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2011. http://hub.hku.hk/bib/B47250732.

Full text
Abstract:
Efficient and comprehensive forecasting of traffic information is of great importance to traffic management. Three types of forecasting methods based on irregularly spaced data (for situations where traffic detectors cannot be installed to generate regularly spaced data on all roads) are studied in this thesis, namely the single-segment forecasting method, the multi-segment forecasting method and the model-based forecasting method. The proposed models were tested using Global Positioning System (GPS) data from 400 Hong Kong taxis collected within a 2-kilometer section on Princess Margaret Road and Hong Chong Road, approaching the Cross Harbour Tunnel. The speed limit for the road is 70 km/h. It has flyovers and ramps, with a small number of merges and diverges, and there is no signalized intersection along this road section. A total of 14 weeks of data were collected, of which the first 12 weeks were used to calibrate the models and the last two weeks were used for validation. The single-segment forecasting method for irregularly spaced data uses a neural network to aggregate the predicted speeds from the naive method, the simple exponential smoothing method and Holt’s method, with explicit consideration of acceleration information. The proposed method shows a great improvement in accuracy compared with using each individual forecasting method separately. The acceleration information, which is viewed as an indicator of the phase-transition effect, is considered to be the main contribution to this improvement. The multi-segment forecasting method aggregates information not only from the current forecasting segment but also from adjacent segments, and adopts the same sub-methods as the single-segment method. The forecasting results from adjacent segments help to describe the phase-transition effect, so the forecasts from the multi-segment method are more accurate than those obtained from the single-segment method. For a one-second forecasting length, the correlation coefficient between the forecasts from the multi-segment forecasting method and the observations is 0.9435, which implies good consistency between forecasts and observations. While the first two methods are based on pure data-fitting techniques, the third is based on traffic models and is called the model-based forecasting method. Although the accuracy of the model-based method at the one-second forecasting length lies between those of the single-segment and multi-segment forecasting methods, it outperforms the other two for longer forecasting steps, which offers higher potential for practical applications.
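A minimal sketch of combining the three base forecasts with an acceleration feature through a small neural network is given below; the smoothing constants, network size and feature choices are assumptions made for illustration, not the calibrated thesis model.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    def ses(speeds, alpha=0.3):
        # simple exponential smoothing forecast of the next speed
        s = speeds[0]
        for v in speeds[1:]:
            s = alpha * v + (1 - alpha) * s
        return s

    def holt(speeds, alpha=0.3, beta=0.1):
        # Holt's linear-trend forecast of the next speed
        level, trend = speeds[0], 0.0
        for v in speeds[1:]:
            prev = level
            level = alpha * v + (1 - alpha) * (level + trend)
            trend = beta * (level - prev) + (1 - beta) * trend
        return level + trend

    def features(speeds, times):
        # naive, SES and Holt forecasts plus the latest acceleration
        # (the phase-transition indicator); times may be irregularly spaced
        accel = (speeds[-1] - speeds[-2]) / (times[-1] - times[-2])
        return [speeds[-1], ses(speeds), holt(speeds), accel]

    # training pairs would be built from historical GPS records of each segment
    combiner = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000)
    # combiner.fit(X_train, y_train)
    # forecast = combiner.predict([features(recent_speeds, recent_times)])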
APA, Harvard, Vancouver, ISO, and other styles