
Theses on the topic « Accurate information »

Create an accurate reference in APA, MLA, Chicago, Harvard, and several other styles.

Consult the top 50 theses for your research on the topic « Accurate information ».

Next to each source in the reference list you will find an « Add to bibliography » button. Click it, and we will automatically generate the bibliographic reference for the chosen source in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the scholarly publication in PDF format and read its abstract online whenever this information is included in the metadata.

Browse theses on a wide variety of disciplines and organise your bibliography correctly.

1

Icard, Phil F., et Jill V. Hamm. « Children's informant accuracy : a social information processing approach to understanding factors affecting accurate social network recall ». Chapel Hill, N.C. : University of North Carolina at Chapel Hill, 2007. http://dc.lib.unc.edu/u?/etd,1190.

Full text
Abstract:
Thesis (Ph. D.)--University of North Carolina at Chapel Hill, 2007.
Title from electronic title page (viewed Mar. 27, 2008). "... in partial fulfillment of the requirements for the degree of Doctor of Philosophy in the Department of Education School Psychology." Discipline: Education; Department/School: Education.
2

Charisse, Marc. « "A solid dose of accurate information" : America's unfilled drug advertising prescription ». Thesis, Connect to this title online ; UW restricted, 1992. http://hdl.handle.net/1773/6147.

Full text
3

Bhat, Goutam. « Accurate Tracking by Overlap Maximization ». Thesis, Linköpings universitet, Datorseende, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-154653.

Full text
Abstract:
Visual object tracking is one of the fundamental problems in computer vision, with a wide range of practical applications in, e.g., robotics and surveillance. Given a video sequence and the target bounding box in the first frame, a tracker is required to find the target in all subsequent frames. It is a challenging problem due to the limited training data available. An object tracker is generally evaluated using two criteria, namely robustness and accuracy. Robustness refers to the ability of a tracker to track for long durations without losing the target. Accuracy, on the other hand, denotes how accurately a tracker can estimate the target bounding box. Recent years have seen significant improvement in tracking robustness. However, the problem of accurate tracking has received less attention. Most current state-of-the-art trackers resort to a naive multi-scale search strategy which has fundamental limitations. Thus, in this thesis, we aim to develop a general target estimation component which can be used to determine an accurate bounding box for tracking. We investigate how bounding box estimators used in object detection can be modified to be used for object tracking. The key difference between detection and tracking is that in object detection the classes to which the objects belong are known; in tracking, no prior information is available about the tracked object other than a single image provided in the first frame. We therefore investigate different architectures that utilize the first-frame information to provide target-specific bounding box predictions. We also investigate how the bounding box predictors can be integrated into a state-of-the-art tracking method to obtain robust as well as accurate tracking.
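As a point of reference for the overlap criterion the title refers to, the snippet below is a minimal, hypothetical sketch (not code from the thesis) of the intersection-over-union (IoU) between two axis-aligned boxes; the overlap-maximization idea is to train the target estimation component to predict and maximise this quantity for candidate boxes.

```python
# Minimal IoU sketch; box coordinates are made-up illustration values.
def iou(box_a, box_b):
    """Boxes are (x1, y1, x2, y2) with x1 < x2 and y1 < y2."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25 / (100 + 100 - 25) ~= 0.143
```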
4

Vijayan, Balaji. « Accurate and efficient detection, prediction and exploitation of program phases ». Diss., Online access via UMI, 2005.

Find full text
5

Sharma, Sagar. « Accurate Traffic Generation in the CheesePi Network ». Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-204594.

Full text
Abstract:
Generating traffic to characterize large-capacity network links with high accuracy in transport backbones is important for Internet service providers. A network application (Iperf) can be used to measure throughput (and, indirectly, congestion) on links within a network and thereby support traffic engineering. Short bursts at high utilisation can ascertain whether a connection can support capacity-sensitive applications, such as video streaming. Repeating the process to capture day-night effects can categorise links for management decisions. CheesePi is an Internet quality monitoring system that aims to characterise the services obtained from the home Internet connections of users and to establish an open measurement infrastructure in Sweden. The thesis evaluates the performance and characterisation of network links within the CheesePi network at high data rates (Gbit/s) and extends the work done in the CheesePi software base.
6

Khorami, Elham. « Providing accurate time information to a radio base station via a GPS receiver emulator ». Thesis, KTH, Kommunikationssystem, CoS, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-118070.

Full text
Abstract:
In recent years, there has been a significant increase in the use of Global Positioning Satellite system (GPS) technology, and consequently the usage of GPS receivers has increased. These GPS receivers can be used as a synchronization source for radio base stations by generating precise 1 pulse per second signals and providing National Marine Electronics Association (NMEA) data. The prototype developed in this thesis implements a GPS receiver emulator which is to be used for radio base station synchronization. Hardware and software have been used to generate the NMEA messages and a precise 1 pulse per second signal. A graphical user interface (GUI) has been created in order to allow the operators of the emulator to input various parameters to the system used to emulate a GPS receiver. Using this emulator avoids the need for expensive GPS receivers and their connection to an antenna with a good view of the GPS satellites. More importantly, this GPS receiver emulator makes it easier to set up a lab environment for testing different situations with regard to signaling with NMEA data between the emulated GPS receiver and the radio base station equipment under test. For example, this allows tests involving incorrect NMEA messages or pulses that do not occur exactly once per second.
7

Tagoe, Naa Dedei. « Developing an accurate close-range photogrammetric technique for extracting 3D information from spherical panoramic images ». Doctoral thesis, University of Cape Town, 2017. http://hdl.handle.net/11427/24932.

Full text
Abstract:
Panoramic images (panoramas) are wide-angle images that provide fields of view of up to 360°. They are acquired with a specialised panoramic camera or by stitching a series of images captured with a conventional digital camera. Panoramas have been widely used to texture 3D models generated from laser scanning, for creating virtual reality tour applications, documenting landscape and cultural heritage sites, advertising real estate and recording crime scenes. The goal of this research was to develop an accurate close-range photogrammetric technique for the semi-automatic extraction of 3D information from spherical panoramas. This was achieved by developing a non-parametric method for the removal of distortions from images acquired from fisheye lenses as well as an algorithm, here referred to as the Minimum Ray Distance (MRD), for the fully automated approximate relative orientation of spherical panoramic images. The bundle adjustment algorithm was then applied to refine the orientation parameters of the panoramas, thus enabling accurate 3D point measurement. Finally, epipolar geometry theory was applied to the oriented panoramas to guide the interactive extraction of additional conjugate points. The MRD algorithm has been extended to laser scanning technology for the first approximations of laser scan setup positions and scan orientation prior to a least-squares based registration. The determination of approximate scanner orientation and position parameters was accomplished using panoramic intensity images derived from full dome laser scans. Thus, a technique for the semi-automatic extraction of 3D measurements from panoramic images has been developed in this research. The technique is most appropriate for applications which do not require dense point clouds and in situations with limited access to funds, or as a quick field method to document many features in a short time. This is because a single image orientation is required for several overlapping images as compared to the normal stereo or multi-image photogrammetric approach. It is not suggested that 3D reconstruction from spherical panoramic images should replace traditional close-range photogrammetry or laser scanning; rather, that the user of panoramic images will be offered supplementary information to the conventional and modern cultural heritage documentation approaches.
8

Gutierrez, Arguedas Mauricio. « Accurate modeling of noise in quantum error correcting circuits ». Diss., Georgia Institute of Technology, 2015. http://hdl.handle.net/1853/54443.

Full text
Abstract:
A universal, scalable quantum computer will require the use of quantum error correction in order to achieve fault tolerance. The assessment and comparison of error-correcting strategies is performed by classical simulation. However, due to the prohibitive exponential scaling of general quantum circuits, simulations are restrained to specific subsets of quantum operations. This creates a gap between accuracy and efficiency which is particularly problematic when modeling noise, because most realistic noise models are not efficiently simulable on a classical computer. We have introduced extensions to the Pauli channel, the traditional error channel employed to model noise in simulations of quantum circuits. These expanded error channels are still computationally tractable to simulate, but result in more accurate approximations to realistic error channels at the single qubit level. Using the Steane [[7,1,3]] code, we have also investigated the behavior of these expanded channels at the logical error-corrected level. We have found that it depends strongly on whether the error is incoherent or coherent. In general, the Pauli channel will be an excellent approximation to incoherent channels, but an unsatisfactory one for coherent channels, especially because it severely underestimates the magnitude of the error. Finally, we also studied the honesty and accuracy of the expanded channels at the logical level. Our results suggest that these measures can be employed to generate lower and upper bounds to a quantum code's threshold under the influence of a specific error channel.
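For context on what is being extended, the standard single-qubit Pauli channel mentioned above can be written as follows (this expression is standard background material, not a result of the thesis):

```latex
\mathcal{E}(\rho) = (1 - p_x - p_y - p_z)\,\rho
                  + p_x\, X \rho X + p_y\, Y \rho Y + p_z\, Z \rho Z,
\qquad p_x, p_y, p_z \ge 0,\quad p_x + p_y + p_z \le 1
```

Here X, Y and Z are the Pauli matrices and the p's are the probabilities of the corresponding Pauli errors; the abstract's "expanded" channels are extensions of this form that remain tractable to simulate.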
9

Johansson, Ulf. « Obtaining Accurate and Comprehensible Data Mining Models : An Evolutionary Approach ». Doctoral thesis, Linköping : Department of Computer and Information Science, Linköpings universitet, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-8881.

Full text
10

Montgomery, Peter Roland James. « Improving operations viability and reducing variety using A.D.I.S (Accurate drawing information system) : a multiview methodology of design ». Master's thesis, University of Cape Town, 1997. http://hdl.handle.net/11427/9497.

Full text
Abstract:
Includes bibliographical references.
Gabriel S.A. is a South African shock absorber manufacturing company which has undergone a strategic repositioning to become internationally competitive. This entailed a move away from the traditional hierarchical management structure and production-line manufacturing, towards a flatter structure with cross-functional Business Units. Each Business Unit is made up of self-contained Manufacturing cells run by self-directed work teams. The objective of this change is to ensure that Gabriel S.A. becomes a world class manufacturer. The company has gone a long way down this road in implementing World Class Manufacturing techniques through the Gabriel Total Quality Production System (GTQPS). However, problems still arise within the system, especially with regard to new product/component designs and changed designs reaching the shop floor timeously. This is aggravated by the necessity to penetrate new markets and retain existing ones successfully. The number of quotations to be prepared will increase, as will the subsequent number of required assembly and component drawings and modifications to existing products; these, in turn, will involve revisions to current drawings. This is compounded by the fact that, in the current business operations, there are already concerns regarding the routine drawing information requirements. This thesis investigates the effect of the drawing information system on the viability of the Manufacturing cells and documents the intervention of a socio-technical drawing information system.
11

Mishra, Abishek Kumar. « Trace-Driven WiFi Emulation : Accurate Record-and-Replay for WiFi ». Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-285517.

Full text
Abstract:
Researchers and application designers need repeatable methods to evaluate applications and systems over WiFi. It is hard to reproduce evaluations over WiFi because of rapidly changing wireless quality over time. In this degree project, we present NemFi, a trace-driven emulator for accurately recording WiFi traffic and later using it to emulate WiFi links in a repeatable fashion. First, we present the advantages of trace-driven emulation over simulation and experimentation. We capture the fluctuating WiFi link conditions in terms of capacity and losses over time and replay the captured behavior for any application running in the emulator. Current record-and-replay techniques for web traffic and cellular networks do not work for WiFi because of their inability to distinguish between WiFi losses and losses due to self-induced congestion. They also lack other WiFi-specific features. In the absence of a trace-driven emulator for WiFi, NemFi is also equipped to avoid self-induced packet losses. It is thus capable of isolating WiFi-related losses, which are then reproduced by NemFi's replay. NemFi's record also addresses frame aggregation and the effect it has on the actual data transmission capability of the wireless link: NemFi can record frame aggregation at all instants of the record phase and later accurately replays the aggregation. Experimental results demonstrate that NemFi is not only accurate in recording the variable-rate WiFi link but also in capturing cross-traffic. NemFi also replays the recorded conditions with considerable accuracy.
12

Smed, Karl-Oskar. « Efficient and Accurate Volume Rendering on Face-Centered and Body-Centered Cubic Grids ». Thesis, Uppsala universitet, Avdelningen för visuell information och interaktion, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-257177.

Full text
Abstract:
The body-centered cubic (BCC) grid and the face-centered cubic (FCC) grid offer improved sampling properties when compared to the Cartesian grid. Despite this, there is little software and hardware support for volume rendering of data stored in one of these grids. This project is a continuation of a project adding support for such grids to the volume rendering engine Voreen. This project has three aims. Firstly, to implement new interpolation methods capable of rendering at interactive frame rates. Secondly, to improve the software by adding an alternate volume storage format offering improved frame rates for BCC methods. And thirdly, because of the difficulty of comparing image quality between different grid types due to aliasing, to implement a method that is unbiased in terms of post-aliasing. The existing methods are compared to the newly implemented ones in terms of frame rate and image quality. The results show that the new volume format improves the frame rate significantly, that the new BCC interpolation method offers similar image quality at better performance compared to existing methods, and that the unbiased method produces images of good quality at the expense of speed.
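For readers unfamiliar with the grid, the following minimal sketch (not code from the thesis; spacing and extent are arbitrary) shows one common way BCC sample positions are described: a Cartesian grid interleaved with a copy of itself shifted by half the grid spacing along every axis.

```python
# Generate BCC sample positions as two interleaved Cartesian sub-lattices.
import numpy as np

def bcc_points(n, spacing=1.0):
    base = np.stack(np.meshgrid(*([np.arange(n) * spacing] * 3), indexing="ij"), axis=-1)
    corners = base.reshape(-1, 3)
    centers = corners + spacing / 2.0      # the cell-center sub-lattice
    return np.vstack([corners, centers])

print(bcc_points(2).shape)  # (16, 3): 8 corner samples + 8 center samples
```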
13

McPherson, Christine Jane. « Evaluating end-of-life care : do bereaved relatives provide accurate information on patients' experience of pain, anxiety and depression? ». Thesis, King's College London (University of London), 2003. https://kclpure.kcl.ac.uk/portal/en/theses/evaluating-endoflife-care--do-berieved-relatives-provide-accurate-information-on-patients-experience-of-pain-anxiety-and-depression(43b298e5-b0da-4daa-bd2b-4ea57c941c3d).html.

Full text
14

Abu-Mahfouz, Adnan Mohammed. « Accurate and efficient localisation in wireless sensor networks using a best-reference selection ». Thesis, University of Pretoria, 2011. http://hdl.handle.net/2263/28662.

Full text
Abstract:
Many wireless sensor network (WSN) applications depend on knowing the position of nodes within the network if they are to function efficiently. Location information is used, for example, in item tracking, routing protocols and controlling node density. Configuring each node with its position manually is cumbersome, and not feasible in networks with mobile nodes or dynamic topologies. WSNs, therefore, rely on localisation algorithms for the sensor nodes to determine their own physical location. The basis of several localisation algorithms is the theory that the higher the number of reference nodes (called “references”) used, the greater the accuracy of the estimated position. However, this approach makes computation more complex and increases the likelihood that the location estimation may be inaccurate. Such inaccuracy in estimation could be due to including data from nodes with a large measurement error, or from nodes that intentionally aim to undermine the localisation process. This approach also has limited success in networks with sparse references, or where data cannot always be collected from many references (due for example to communication obstructions or bandwidth limitations). These situations require a method for achieving reliable and accurate localisation using a limited number of references. Designing a localisation algorithm that could estimate node position with high accuracy using a low number of references is not a trivial problem. As the number of references decreases, more statistical weight is attached to each reference’s location estimate. The overall localisation accuracy therefore greatly depends on the robustness of the selection method that is used to eliminate inaccurate references. Various localisation algorithms and their performance in WSNs were studied. Information-fusion theory was also investigated and a new technique, rooted in information-fusion theory, was proposed for defining the best criteria for the selection of references. The researcher chose selection criteria to identify only those references that would increase the overall localisation accuracy. Using these criteria also minimises the number of iterations needed to refine the accuracy of the estimated position. This reduces bandwidth requirements and the time required for a position estimation after any topology change (or even after initial network deployment). The resultant algorithm achieved two main goals simultaneously: accurate location discovery and information fusion. Moreover, the algorithm fulfils several secondary design objectives: self-organising nature, simplicity, robustness, localised processing and security. The proposed method was implemented and evaluated using a commercial network simulator. This evaluation of the proposed algorithm’s performance demonstrated that it is superior to other localisation algorithms evaluated; using fewer references, the algorithm performed better in terms of accuracy, robustness, security and energy efficiency. These results confirm that the proposed selection method and associated localisation algorithm allow for reliable and accurate location information to be gathered using a minimum number of references. This decreases the computational burden of gathering and analysing location data from the high number of references previously believed to be necessary.
Thesis (PhD(Eng))--University of Pretoria, 2011.
Electrical, Electronic and Computer Engineering
unrestricted
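As background for the estimation step that reference selection feeds into, the sketch below is a minimal, hypothetical illustration (not the algorithm proposed in the thesis) of estimating a node position from a selected subset of references and range measurements by non-linear least squares; the coordinates and ranges are invented.

```python
import numpy as np
from scipy.optimize import least_squares

def estimate_position(references, ranges, initial_guess):
    """references: (n, 2) reference coordinates; ranges: (n,) measured distances."""
    residuals = lambda p: np.linalg.norm(references - p, axis=1) - ranges
    return least_squares(residuals, initial_guess).x

# Hypothetical references and (noise-free) ranges to an unknown node at (3, 4).
refs = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
ranges = np.linalg.norm(refs - np.array([3.0, 4.0]), axis=1)
print(estimate_position(refs, ranges, initial_guess=np.array([5.0, 5.0])))  # ~[3, 4]
```

In practice the quality of this estimate depends heavily on which references are included, which is the selection problem the thesis addresses.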
15

Poursharif, Goudarz. « Investigating the ability of smart electricity meters to provide accurate low voltage network information to the UK distribution network operators ». Thesis, University of Sheffield, 2018. http://etheses.whiterose.ac.uk/20614/.

Full text
Abstract:
This research presents a picture of the current status and the future developments of the LV electricity grid and the capabilities of the smart metering programme in the UK, as well as investigating the major research trends and priorities in the field of Smart Grid. This work also extensively examines the literature on crucial LV network performance indicators such as losses, voltage levels, and cable capacity percentages, the ways in which DNOs have been acquiring this knowledge, and the ways in which various LV network applications are carried out and rely on various sources of data. This work combines 2 new smart meter data sets with 5 established methods to predict the consumption of the proportion of consumers whose data is not available, using historical smart meter data from neighbouring smart meters. Our work shows that half-hourly smart meter data can successfully predict the missing general load shapes, but the prediction of peak demands proves to be a more challenging task. This work then investigates the impact of smart meter time resolution intervals and data aggregation levels, in balanced and unbalanced three-phase LV network models, on the accuracy of critical LV network performance indicators, and the way in which these inaccuracies affect major smart LV network applications of the DNOs in the UK. This novel work shows that using low time resolution and aggregated smart meter data in load flow analysis models can negatively affect the accuracy of critical low voltage network estimates.
16

Stingl, Dominik [Verfasser], Ralf [Akademischer Betreuer] Steinmetz et Michael [Akademischer Betreuer] Zink. « Decentralized Monitoring in Mobile Ad Hoc Networks - Provisioning of Accurate and Location-Aware Monitoring Information / Dominik Stingl. Betreuer : Ralf Steinmetz ; Michael Zink ». Darmstadt : Universitäts- und Landesbibliothek Darmstadt, 2014. http://d-nb.info/1111113025/34.

Full text
17

Thaiss, Laila Maria. « A comparison of the role of the frontal cortex and the anterior temporal lobe in source memory and in the accurate retrieval of episodic information / ». Thesis, McGill University, 2001. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=38424.

Full text
Abstract:
It has been argued that patients with frontal lobe lesions are impaired in temporal context memory and, more generally, in retrieving the source of one's knowledge or ideas. Furthermore, it has been speculated that a failure to retrieve source information may result in an increased susceptibility to distortions of episodic memories in patients with frontal lobe lesions. The precise role of the frontal cortex, however, in source or episodic retrieval is not clear. Does this region of cortex play a primary role or a secondary, executive role in the processing of such memories? Studies of patients with temporal lobe lesions have also shown impairments in episodic memory, including difficulties in the retrieval of source information. An important issue, therefore, is whether these two brain regions make different contributions to the processing of source information and to the retrieval of episodic memories.
In the present experiments, patients with unilateral excisions restricted to frontal cortex or to the anterior temporal lobe were compared on various tasks examining source memory performance and the accurate retrieval of episodic information. The results of these studies failed to support the general contention that patients with frontal cortex excisions have source (or temporal context) memory impairments. Instead, differences between these patients and normal control subjects appeared to be contingent on whether strategic organizational or control processes were necessary for efficient processing of episodic information. The memory of patients with left temporal lobe excisions, on the other hand, was significantly impaired for both content and source information in most tasks. Furthermore, these subjects showed high rates of inaccuracies and distortions of memory. The false memories of this patient group were attributed to a combination of their poor memory for the specific items of the task and their over-reliance on semantic "gist" or on inferential knowledge about the events. Patients with right temporal lobe excisions were generally less severely impaired on the verbal memory tasks compared with those with left-sided lesions, but were impaired in their memory for the contextual aspects of an event.
18

Alhamoud, Alaa [Verfasser], Ralf [Akademischer Betreuer] Steinmetz et Lars [Akademischer Betreuer] Wolf. « Fine-Granular Sensing of Power Consumption - A New Sensing Modality for an Accurate Detection, Prediction and Forecasting of Higher-Level Contextual Information in Smart Environments / Alaa Alhamoud ; Ralf Steinmetz, Lars Wolf ». Darmstadt : Universitäts- und Landesbibliothek Darmstadt, 2016. http://d-nb.info/1122286228/34.

Full text
19

Schneider, Simon. « High performance GeoSpatial application, using Twitter to increase trace accuracy : Project for Ericsson ». Thesis, Uppsala universitet, Institutionen för teknikvetenskaper, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-233223.

Full text
Abstract:
The goal of the project is to research new technologies and ways to create high speed, high performance applications for dealing with GeoSpatial data. The research data set used is GEO-located Tweets. Twitter is chosen for its high data volume, its accessibility and its relevance to cellular network optimization. Beyond the research part of the project, the end product will be used to increase cellular trace accuracy through the use of the highly accurate GEO-located Tweets. The final product shows great potential: an interactive web map rendering 700 million Tweets at up to a 10 by 10 meter resolution within seconds, while also allowing the user to pan and navigate the map without any performance issues. The scope of the project includes the entire process, from aggregating Tweets with a Java program, storing them in a PostgreSQL+PostGIS database, and creating maps and GIS files with MapServer, to presenting maps and serving downloadable files in a Django web application utilizing OpenLayers. The procedures used are explained and justified in detail throughout this report. All products used in the project are licensed under the open source initiative and are therefore free for anyone to use. The report also includes suggestions, relevant products, research projects and algorithms for future development and research.
20

Dixon, Maureen. « The effect of exposure to orthographic information on spelling ». Thesis, University of Wolverhampton, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.389546.

Full text
21

Fallis, Don. « On Verifying the Accuracy of Information : Philosophical Perspectives ». University of Illinois, 2004. http://hdl.handle.net/10150/106211.

Full text
Abstract:
How can one verify the accuracy of recorded information (e.g., information found in books, newspapers, and on Web sites)? In this paper, I argue that work in the epistemology of testimony (especially that of philosophers David Hume and Alvin Goldman) can help with this important practical problem in library and information science. This work suggests that there are four important areas to consider when verifying the accuracy of information: (i) authority, (ii) independent corroboration, (iii) plausibility and support, and (iv) presentation. I show how philosophical research in these areas can improve how information professionals go about teaching people how to evaluate information. Finally, I discuss several further techniques that information professionals can and should use to make it easier for people to verify the accuracy of information.
22

Moutzouris, Alexandros. « Accurate human pose tracking using efficient manifold searching ». Thesis, Kingston University, 2013. http://eprints.kingston.ac.uk/26599/.

Full text
Abstract:
In this thesis we propose novel methods for accurate markerless 3D pose tracking. Training data are used to represent specific activities, using dimensionality reduction methods. The proposed methods attempt to keep the computational cost low without sacrificing the accuracy of the final result. We also deal with the problem of stylistic variation between the motions seen in the training and testing datasets. Solutions to address both single- and multiple-action scenarios are presented. Specifically, appropriate temporal non-linear dimensionality reduction methods are applied to learn compact manifolds that are suitable for fast exploration. Such manifolds are efficiently searched by a deterministic gradient-based method. In order to deal with stylistic differences in human actions, we represent human poses using multiple levels. Searching through multiple levels reduces the risk of being trapped in a local optimum and therefore leads to higher accuracy. An observation function controls the process to minimise the computational cost of the method. Finally, we propose a multi-activity pose tracking method, which combines action recognition with single-action pose tracking. To achieve reliable online action recognition, the system is equipped with short memory. All methods are tested on publicly available datasets. Results demonstrate their high accuracy and relatively low computational cost in comparison to state-of-the-art methods.
23

Gooze, Aaron Isaac. « Real-time transit information accuracy : impacts and proposed solutions ». Thesis, Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/47638.

Full text
Abstract:
When presented in a practical format, real-time transit information can improve sustainable travel methods by enhancing the transit experience. Larger shifts towards public transportation have cascading effects on the environment, health and urban form. The research will identify the positive shift realized by the continued development of a set of real-time transit information tools, specifically in the Seattle region. In addition, it will analyze real-time prediction errors and their effects on the rider experience. Three years after the development of location-aware mobile applications for OneBusAway - a suite of real-time information tools - a survey of current users was conducted by the author in 2012 in order to compare the results to a 2009 study. The results show significant positive shifts in satisfaction with transit, perceptions of safety and ridership frequency as a result of the increased use of real-time arrival information. However, the research will also provide a perspective of the margin of error riders come to expect and the negative effects resulting from inaccuracies with the real-time data. While riders on average will ride less when they have experienced errors, a robust issue-reporting system as well as the resolution of the error can mitigate the initial negative effects. In response, the research provides a framework for a crowd-sourced error reporting process in order to improve the level of accuracy by means of a Transit Ambassador Program. Finally, a pilot program developed by the author is assessed against this framework and insight is provided within the context of the real-time information system.
24

Insafutdinov, Eldar [Verfasser]. « Towards accurate multi-person pose estimation in the wild / Insafutdinov Eldar ». Saarbrücken : Saarländische Universitäts- und Landesbibliothek, 2020. http://d-nb.info/1232240052/34.

Full text
25

Yang, Yichen. « High Accuracy Positioning ». Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-286697.

Full text
Abstract:
The rise of big data, cloud computing, and 5G communication lays a solid foundation for smart transportation and autonomous driving, but in these areas accurate positioning is also indispensable. Current high accuracy positioning services are mainly based on Network RTK. PPP technology reaches the same level of final accuracy as RTK but requires less data transmission and computational power. This project focuses on PPP technology and explores its potential as a substitute for the current Network RTK. The thesis consists of three parts: theory, solution, and results. The theory part introduces the basic mathematics and gives an overview of the GPS system; it then analyses GNSS measurement errors and introduces augmentation technologies. The solution part introduces the measurement model and mainstream error correction methods, and then focuses on the ionospheric model and the IC-PPP method. Based on that, we create the 'sTEC' interface and enable the 'External TEC' correction method. The results part presents a comparison between PPP technology and the different correction methods.
26

Fallis, Don, et Martin Fricke. « Indicators of Accuracy of Consumer Health Information on the Internet ». American Medical Informatics Association, 2002. http://hdl.handle.net/10150/105437.

Full text
Abstract:
Objectives: To identify indicators of accuracy for consumer health information on the Internet. The results will help lay people distinguish accurate from inaccurate health information on the Internet. Design: Several popular search engines (Yahoo, AltaVista, and Google) were used to find Web pages on the treatment of fever in children. The accuracy and completeness of these Web pages was determined by comparing their content with that of an instrument developed from authoritative sources on treating fever in children. The presence on these Web pages of a number of proposed indicators of accuracy, taken from published guidelines for evaluating the quality of health information on the Internet, was noted. Main Outcome Measures: Correlation between the accuracy of Web pages on treating fever in children and the presence of proposed indicators of accuracy on these pages. Likelihood ratios for the presence (and absence) of these proposed indicators. Results: One hundred Web pages were identified and characterized as "more accurate" or "less accurate." Three indicators correlated with accuracy: displaying the HONcode logo, having an organization domain, and displaying a copyright. Many proposed indicators taken from published guidelines did not correlate with accuracy (e.g., the author being identified and the author having medical credentials) or inaccuracy (e.g., lack of currency and advertising). Conclusions: This method provides a systematic way of identifying indicators that are correlated with the accuracy (or inaccuracy) of health information on the Internet. Three such indicators have been identified in this study. Identifying such indicators and informing the providers and consumers of health information about them would be valuable for public health care.
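As an illustration of the likelihood-ratio outcome measure mentioned above, the following minimal sketch (hypothetical counts, not the study's data) computes positive and negative likelihood ratios for a single proposed indicator from a two-by-two table of "more accurate" and "less accurate" pages.

```python
# Likelihood ratios for the presence/absence of one proposed accuracy indicator.
def likelihood_ratios(present_accurate, absent_accurate, present_inaccurate, absent_inaccurate):
    sensitivity = present_accurate / (present_accurate + absent_accurate)
    specificity = absent_inaccurate / (present_inaccurate + absent_inaccurate)
    lr_positive = sensitivity / (1 - specificity)      # indicator present
    lr_negative = (1 - sensitivity) / specificity      # indicator absent
    return lr_positive, lr_negative

# Hypothetical counts for one indicator (e.g. displaying a copyright notice).
print(likelihood_ratios(present_accurate=40, absent_accurate=10,
                        present_inaccurate=25, absent_inaccurate=25))
```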
27

Shatnawi, Amani "Mohammad Jum'h" Amin. « Estimating Accuracy of Personal Identifiable Information in Integrated Data Systems ». DigitalCommons@USU, 2017. https://digitalcommons.usu.edu/etd/6103.

Full text
Abstract:
Without a valid assessment of accuracy there is a risk of data users coming to incorrect conclusions or making bad decisions based on inaccurate data. This dissertation proposes a theoretical method for developing data-accuracy metrics specific to any given person-centric integrated system and shows how a data analyst can use these metrics to estimate the overall accuracy of person-centric data. Estimating the accuracy of Personal Identifiable Information (PII) creates a corresponding need to model and formalize PII for both the real world and electronic data, in a way that supports rigorous reasoning relative to real-world facts, expert opinions, and aggregate knowledge. This research provides such a foundation by introducing a temporal first-order logic language (FOL), called Person Data First-order Logic (PDFOL). With its syntax and semantics formalized, PDFOL provides a mechanism for expressing data-accuracy metrics, computing measurements using these metrics on person-centric databases, and comparing those measurements with expected values from real-world populations. Specifically, it enables data analysts to model person attributes and inter-person relations from real-world populations or database representations of such, as well as real-world facts, expert opinions, and aggregate knowledge. PDFOL builds on existing first-order logics with the addition of temporal predicates based on time intervals, aggregate functions, and tuple-set comparison operators. It adapts and extends the traditional aggregate functions in three ways: a) allowing an arbitrary number of free variables in a function statement, b) adding groupings, and c) defining new aggregate functions. These features allow PDFOL to model person-centric databases, enabling formal and efficient reasoning about their accuracy. This dissertation also explains how data analysts can use PDFOL statements to formalize and develop accuracy metrics specific to a person-centric database, especially an integrated person-centric database, which in turn can be used to assess the accuracy of that database. Data analysts apply these metrics to person-centric data to compute quality-assessment measurements, YD. After that, they use statistical methods to compare these measurements with the real-world measurements, YR, under the hypothesis that the two should be very similar if the person-centric data is an accurate and complete representation of the real-world population. Finally, I show that estimated accuracy using metrics based on PDFOL can be a good predictor of database accuracy. Specifically, I evaluated the performance of selected accuracy metrics by applying them to a person-centric database, mutating the database in various ways to degrade its accuracy, and then re-applying the metrics to see if they reflect the expected degradation. This research will help data analysts develop accuracy metrics specific to their person-centric data. In addition, PDFOL can provide a foundation for future methods for reasoning about other quality dimensions of PII.
28

Hamy, V. « Improving accuracy of information extraction from quantitative magnetic resonance imaging ». Thesis, University College London (University of London), 2014. http://discovery.ucl.ac.uk/1433683/.

Full text
Abstract:
Quantitative MRI offers the possibility to produce objective measurements of tissue physiology at different scales. Such measurements are highly valuable in applications such as drug development, treatment monitoring or early diagnosis of cancer. From microstructural information in diffusion weighted imaging (DWI) or local perfusion and permeability in dynamic contrast (DCE-) MRI to more macroscopic observations of the local intestinal contraction, a number of aspects of quantitative MRI are considered in this thesis. The main objective of the presented work is to provide pre-processing techniques and model modification in order to improve the reliability of image analysis in quantitative MRI. Firstly, the challenge of clinical DWI signal modelling is investigated to overcome the biasing effect due to noise in the data. Several methods with increasing level of complexity are applied to simulations and a series of clinical datasets. Secondly, a novel Robust Data Decomposition Registration technique is introduced to tackle the problem of image registration in DCE-MRI. The technique allows the separation of tissue enhancement from motion effects so that the latter can be corrected independently. It is successfully applied to DCE-MRI datasets of different organs. This application is extended to the correction of respiratory motion in small bowel motility quantification in dynamic MRI data acquired during free breathing. Finally, a new local model for the arterial input function (AIF) is proposed. The estimation of the arterial blood contrast agent concentration in DCE-MRI is augmented using prior knowledge on local tissue structure from DWI. This work explores several types of imaging using MRI. It contributes to clinical quantitative MRI analysis providing practical solutions aimed at improving the accuracy and consistency of the parameters derived from image data.
29

Junghans, Martin [Verfasser], et R. [Akademischer Betreuer] Studer. « Methods for Efficient and Accurate Discovery of Services / Martin Junghans. Betreuer : R. Studer ». Karlsruhe : KIT-Bibliothek, 2014. http://d-nb.info/1051848172/34.

Full text
30

Al-Bado, Mustafa [Verfasser], et Anja [Akademischer Betreuer] Feldmann. « Realistic PHY Modeling for Accurate Wireless Simulations / Mustafa Al-Bado. Betreuer : Anja Feldmann ». Berlin : Universitätsbibliothek der Technischen Universität Berlin, 2013. http://d-nb.info/1031511342/34.

Full text
31

Li, Yuhong. « Disruption Information, Network Topology and Supply Chain Resilience ». Diss., Virginia Tech, 2017. http://hdl.handle.net/10919/78352.

Full text
Abstract:
This dissertation consists of three essays studying three closely related aspects of supply chain resilience. The first essay is "Value of Supply Disruption Information and Information Accuracy", in which we examine the factors that influence the value of supply disruption information, investigate how information accuracy influences this value, and provide managerial suggestions to practitioners. The study is motivated by the fact that fully accurate disruption information may be difficult and costly to obtain and inaccurate disruption information can decrease the financial benefit of prior knowledge and even lead to negative performance. We perform the analysis by adopting a newsvendor model. The results show that information accuracy, specifically information bias and information variance, plays an important role in determining the value of disruption information. However, this influence varies at different levels of disruption severity and resilience capacity. The second essay is "Quantifying Supply Chain Resilience: A Dynamic Approach", in which we provide a new type of quantitative framework for assessing network resilience. This framework includes three basic elements: robustness, recoverability and resilience, which can be assessed with respect to different performance measures. Then we present a comprehensive analysis on how network structure and other parameters influence these different elements. The results of this analysis clearly show that both researchers and practitioners should be aware of the possible tradeoffs among different aspects of supply chain resilience. The ability of the framework to support better decision making is then illustrated through a systemic analysis based on a real supply chain network. The third essay is "Network Characteristics and Supply Chain Disruption Resilience", in which we investigate the relationships between network characteristics and supply chain resilience. In this work, we first prove that investigating network characteristics can lead to a better understanding of supply chain resilience behaviors. Later we select key characteristics that play a critical role in determining network resilience. We then construct the regression and decision tree models of different supply chain resilience measures, which can be used to estimate supply chain network resilience given the key influential characteristics. Finally, we conduct a case study to examine the estimation accuracy.
Ph. D.
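For readers unfamiliar with the modelling framework named in the first essay, the snippet below is a minimal, generic newsvendor sketch (standard textbook material with made-up numbers, not the dissertation's model): the optimal order quantity sits at the critical fractile of the demand distribution.

```python
# Classic newsvendor quantity under normally distributed demand.
from scipy.stats import norm

def newsvendor_quantity(mean_demand, std_demand, unit_cost, price, salvage=0.0):
    underage = price - unit_cost          # profit lost per unit of unmet demand
    overage = unit_cost - salvage         # loss per unsold unit
    critical_fractile = underage / (underage + overage)
    return norm.ppf(critical_fractile, loc=mean_demand, scale=std_demand)

# Hypothetical numbers: demand ~ N(100, 20), unit cost 6, selling price 10.
print(newsvendor_quantity(mean_demand=100, std_demand=20, unit_cost=6.0, price=10.0))
```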
32

Landsiedel, Olaf [Verfasser]. « Mechanisms, models and tools for flexible protocol development and accurate network experimentation / Olaf Landsiedel ». Aachen : Hochschulbibliothek der Rheinisch-Westfälischen Technischen Hochschule Aachen, 2011. http://d-nb.info/101594373X/34.

Full text
33

Nussbaum, Robin E. « Detection of Sexual Orientation : Accuracy of Judgments Based on Minimal Information ». Thesis, University of Hawaii at Manoa, 2002. http://hdl.handle.net/10125/7084.

Full text
Abstract:
The ability to accurately detect sexual orientation from five-second silent video clips was investigated. The effects of attractiveness of the targets on judgments of sexual orientation were also examined. Participants were shown a video of 24 targets and asked to rate how likely it is that the individual in the video is gay. They were also asked to classify the targets as either "gay" or "heterosexual." Results indicated that gay targets were rated as more likely to be gay than heterosexual targets and that overall, targets were correctly classified 58% of the time, suggesting some accuracy in detection. Results also gave some support to the hypothesis that gay participants would be more accurate detectors than heterosexual participants, but not fully. Finally, the role of attractiveness of the target was evaluated. One of the three theories regarding the role of attractiveness was rejected (the "attraction effect") based on the results, but the other two were both supported (the "attention effect" and the "ugly lesbian, gay pretty boy" stereotype).
v, 44 leaves
34

Van, Zyl Zoe. « The impact of recall bias on the accuracy of dietary information ». Thesis, Stellenbosch : Stellenbosch University, 2014. http://hdl.handle.net/10019.1/86429.

Full text
Abstract:
Thesis (MNutr)--Stellenbosch University, 2014.
Background: A number of observational studies in which information was obtained retrospectively have been used in the past to inform guidelines on allergy prevention. Studies looking at the causative/protective properties of infant dietary factors on diseases that occur later in life also rely on maternal recall many years later. It is unclear, however, what effect recall bias has on the accuracy/quality of the information obtained. Objectives: The aim of the study was to determine the impact of recall bias, 10 years retrospectively, on the accuracy of dietary information relating to breast feeding, weaning age and the introduction of allergenic foods. A literature review was performed of studies assessing the accuracy of data obtained retrospectively and of studies using retrospective data to draw conclusions on the protective/causative factors of infant feeding in relation to food allergy. Methodology: An infant feeding questionnaire was developed from some of the same questions that had been asked of mothers recruited into the FAIR study, a prospective birth cohort on the Isle of Wight. Families had been recruited and followed up since 2001/2002, and data had been gathered when the mothers were 36 weeks pregnant and then when their child was 3, 6 and 9 months and 1 and 2 years old. Mothers were asked in 2012, when their children were 10 years of age, to complete this questionnaire. Agreement between answers was computed using Kappa coefficients, Spearman's correlation and percentage agreement. Results: One hundred and twenty five mothers completed the questionnaire. There was substantial agreement for recall of whether mothers breast fed, the duration of exclusive breast feeding (EBF) and the duration of breast feeding 10 years earlier (k = 0.79, r = 0.70 and r = 0.84 respectively). However, seven per cent (n = 9) of mothers who did breast feed reported that they had not. Eighty-four per cent (n = 103) of mothers recorded correctly whether their child had a bottle of formula milk in hospital. Ninety-four per cent (n = 116) of mothers recalled accurately that their child had received formula milk at some stage of infancy. The exact age at which formula milk was first given to their child was answered accurately (r = 0.63). The brand of formula milk provided was poorly recalled. Answers to when mothers first introduced solid foods into their child's diet were not accurate (r = 0.16). Peanuts were the only food allergen for which mothers accurately recalled the age of first introduction into their child's diet (86% correct answers). Recall of whether peanuts were consumed during pregnancy was accurate after two years (k = 0.64) but not after 8 years (k = 0.39). Conclusion: The study highlights the importance of possible recall bias in mothers' reports of infant feeding practices over a period of 10 years. Recall related to breast feeding and formula feeding was accurate, but recall of the age of introduction of solid foods and of allergenic foods was not. Studies relying on maternal recall of weaning questions should therefore be interpreted with caution.
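As a reminder of how the agreement statistic quoted above is defined, the following is a small, hypothetical illustration (invented data, not the study's) of Cohen's kappa for binary yes/no answers given prospectively and again at recall.

```python
# Cohen's kappa for two sets of binary answers (1 = breast fed, 0 = did not).
def cohens_kappa(answers_then, answers_now):
    n = len(answers_then)
    observed = sum(a == b for a, b in zip(answers_then, answers_now)) / n
    p_yes_then = sum(answers_then) / n
    p_yes_now = sum(answers_now) / n
    expected = p_yes_then * p_yes_now + (1 - p_yes_then) * (1 - p_yes_now)
    return (observed - expected) / (1 - expected)

then = [1, 1, 1, 0, 1, 0, 1, 1, 0, 1]   # hypothetical prospective answers
now  = [1, 1, 0, 0, 1, 0, 1, 1, 0, 1]   # hypothetical answers recalled later
print(cohens_kappa(then, now))  # close to 1 means little recall bias
```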
Styles APA, Harvard, Vancouver, ISO, etc.
35

GAO, HONGLIANG. « IMPROVING BRANCH PREDICTION ACCURACY VIA EFFECTIVE SOURCE INFORMATION AND PREDICTION ALGORITHMS ». Doctoral diss., University of Central Florida, 2008. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/3286.

Texte intégral
Résumé :
Modern superscalar processors rely on branch predictors to sustain a high instruction fetch throughput. Given the trend of deep pipelines and large instruction windows, a branch misprediction incurs a large performance penalty and results in a significant amount of energy wasted by the instructions along wrong paths. Given their critical role in high-performance processors, there has been extensive research on branch predictors to improve prediction accuracy. Conceptually, a dynamic branch prediction scheme includes three major components: a source, an information processor, and a predictor. Traditional work mainly focuses on the algorithm for the predictor. In this dissertation, besides novel prediction algorithms, we investigate the other components and develop untraditional ways to improve the prediction accuracy. First, we propose an adaptive information processing method that dynamically extracts the most effective inputs to maximize the correlation to be exploited by the predictor. Second, we propose a new prediction algorithm, which improves the Prediction by Partial Matching (PPM) algorithm by selectively combining multiple partial matches. The PPM algorithm was previously considered optimal and has been used to derive the upper limit of branch prediction accuracy. Our proposed algorithm achieves higher prediction accuracy than PPM and can be implemented within a realistic hardware budget. Third, we discover a new locality existing between the address of producer loads and the outcomes of their consumer branches. We study this address-branch correlation in detail and propose a branch predictor that exploits this correlation for long-latency and hard-to-predict branches, which existing branch predictors fail to predict accurately.
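For readers unfamiliar with PPM-style prediction, the sketch below shows the basic longest-matching-context idea on which the dissertation builds: per-history-length tables of saturating counters, queried from the longest matching context downwards. It is a toy illustration, not the selective-combination algorithm proposed in the dissertation.

    # A minimal PPM-style branch predictor sketch: tables of 2-bit saturating counters indexed
    # by (branch PC, recent outcome history), queried from the longest history down to length 0.
    from collections import defaultdict

    class PPMPredictor:
        def __init__(self, max_history=8):
            self.max_history = max_history
            # one table per history length; each maps (pc, history tuple) -> counter in 0..3
            self.tables = [defaultdict(lambda: 1) for _ in range(max_history + 1)]
            self.history = []  # global outcome history, most recent last

        def predict(self, pc):
            # longest partial match wins: search from the longest seen context downwards
            for length in range(min(self.max_history, len(self.history)), -1, -1):
                key = (pc, tuple(self.history[-length:]) if length else ())
                if key in self.tables[length]:
                    return self.tables[length][key] >= 2  # "taken" if counter is in the upper half
            return True  # default prediction for a branch never seen before

        def update(self, pc, taken):
            # train every context length that applies, then shift the outcome into the history
            for length in range(min(self.max_history, len(self.history)) + 1):
                key = (pc, tuple(self.history[-length:]) if length else ())
                ctr = self.tables[length][key]
                self.tables[length][key] = min(3, ctr + 1) if taken else max(0, ctr - 1)
            self.history = (self.history + [int(taken)])[-self.max_history:]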
Ph.D.
School of Electrical Engineering and Computer Science
Engineering and Computer Science
Computer Science PhD
Styles APA, Harvard, Vancouver, ISO, etc.
36

Anthony, Aaron M. « Assessing the Accuracy, Use, and Framing of College Net Pricing Information ». Thesis, University of Pittsburgh, 2019. http://pqdtopen.proquest.com/#viewpdf?dispub=13819961.

Texte intégral
Résumé :

In this dissertation, I explore questions relating to estimating and framing college net pricing. In the first study, I measure variation in actual grant aid awards for students predicted by the federal template Net Price Calculator (NPC) to receive identical aid awards. Estimated aid derived from the federal template NPC accounts for 85 percent of the variation in actual grant aid received by students. I then consider simple modifications to the federal template NPC that explain more than half of the initially unexplained variation in actual grant aid awards across all institutional sectors. The second study explores perceptions of college net pricing and the resources families use to learn about college expenses. Students and parents show substantial variation in their perceptions of college price and ability to accurately estimate likely college expenses, even when prompted to seek pricing information online. While most participants were able to estimate net price within 25 percent of NPC estimates, others were inaccurate by as much as 250 percent, or nearly $30,000. I then propose possible explanations for more or less accurate estimates that consider parent education, student grade level, previous NPC use, and online college pricing search strategies. In the third study, I explore the potential for shifts in college spending preferences when equivalent college cost scenarios are framed in different ways. I exploit disparities between net price and total price to randomly present participants with one of three framing conditions: gain, loss, and full information. Participants are between five and six percentage points more likely to choose a college beyond their stated price preference when cost information is framed in such a way that emphasizes financial grant aid received as opposed to remaining costs to be paid or full cost information. The results of these studies suggest that clearly structured, simple to use informational resources can accurately and effectively communicate important college information. However, simply making resources available without consideration of accessibility or relevance may be insufficient. Policymakers and other hosts of college information resources should also carefully consider the ways that the presentation of college information might influence students’ decisions.

Styles APA, Harvard, Vancouver, ISO, etc.
37

Gori, Julien. « Modeling the speed-accuracy tradeoff using the tools of information theory ». Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLT022/document.

Texte intégral
Résumé :
La loi de Fitts, qui relie le temps de mouvement MT dans une tache de pointage aux dimensions de la cible visée D et W est usuellement exprimée à partir d’une imitation de la formule de la capacité de Shannon MT = a + b log 2 (1 + D/W). Toutefois, l’analyse actuelle est insatisfaisante: elle provient d’une simple analogie entre la tache de pointage et la transmission d’un signal sur un canal bruité sans qu’il n’y ait de modèle explicite de communication.Je développe d’abord un modèle de transmission pour le pointage, où l’indice de difficulté ID = log 2 (1 + D/W) s’exprime aussi bien comme une entropie de source et une capacité de canal, permettant ainsi de réconcilier dans un premier temps l’approche de Fitts avec la théorie de l’information de Shannon. Ce modèle est ensuite exploité pour analyser des données de pointage récoltées lors d’expérimentations contrôlées mais aussi en conditions d’utilisations réelles.Je développe ensuite un second modèle, focalisé autour de la forte variabilité caractéristique du mouvement humain et qui prend en compte la forte diversité des mécanismes de contrôle du mouvement: avec ou sans voie de retour, par intermittence ou de manière continue. À partir d’une chronométrie de la variance positionnelle, évaluée à partir d’un ensemble de trajectoires, on remarque que le mouvement peut-être découpé en deux phases: une première où la variance augmente et une grande partie de la distance à couvrir est parcourue, est suivie d’une deuxième au cours de laquelle la variance diminue pour satisfaire les contraintes de précision requises par la tache.Dans la deuxième phase, le problème du pointage peut-être ramené à un problème de communication à la Shannon, où l’information est transmise d’une“source” (variance à la fin de la première phase) à une “destination” (extrémité du membre) à travers un canal Gaussien avec la présence d’une voie de retour.Je montre que la solution optimale à ce problème de transmission revient à considérer un schéma proposé par Elias. Je montre que la variance peut décroitre au mieux exponentiellement au cours de la deuxième phase, et que c’est ce résultat qui implique directement la loi de Fitts
Fitts' law, which relates movement time MT in a pointing task to the target's dimensions D and W, is usually expressed by mimicking Shannon's capacity formula: MT = a + b log2(1 + D/W). Yet the currently received analysis is incomplete and unsatisfactory: it stems from a vague analogy and there is no explicit communication model for pointing. I first develop a transmission model for pointing tasks where the index of difficulty ID = log2(1 + D/W) is the expression of both a source entropy and a channel capacity, thereby reconciling Shannon's information theory with Fitts' law. This model is then leveraged to analyze pointing data gathered from controlled experiments and also from field studies. I then develop a second model which builds on the variability of human movements and accounts for the tremendous diversity displayed by movement control: with or without feedback, intermittent or continuous. From a chronometry of the positional variance, evaluated from a set of trajectories, it is observed that movement can be separated into two phases: a first phase where the variance increases over time and where most of the distance to the target is covered, followed by a second phase where the variance decreases until it satisfies the accuracy constraints. During this second phase, the problem of aiming can be reduced to a Shannon-like communication problem where information is transmitted from a "source" (the variance at the end of the first phase) to a "destination" (the limb extremity) over a "channel" perturbed by Gaussian noise, with a feedback link. I show that the optimal solution to this transmission problem amounts to a scheme first suggested by Elias. I show that the variance can decrease at best exponentially during the second phase, and that this result directly induces Fitts' law.
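As a concrete reading of the formula quoted above, the index of difficulty and the coefficients a and b of Fitts' law can be estimated from pointing data by a simple least-squares fit. The sketch below uses invented distances, widths and movement times; it only illustrates the relationship, not the two-phase model developed in the thesis.

    # Fitting Fitts' law MT = a + b * log2(1 + D/W) to hypothetical pointing data by least squares.
    import numpy as np

    D  = np.array([64, 128, 256, 512, 512])          # target distance (pixels), illustrative values
    W  = np.array([16, 16, 32, 32, 8])               # target width (pixels)
    MT = np.array([0.42, 0.55, 0.58, 0.71, 0.93])    # observed movement time (seconds)

    ID = np.log2(1 + D / W)                          # index of difficulty, in bits
    a, b = np.polyfit(ID, MT, 1)[::-1]               # MT ~ a + b * ID
    throughput = 1 / b                               # bits per second, one common reading of 1/b

    print(f"a={a:.3f} s, b={b:.3f} s/bit, throughput~{throughput:.1f} bit/s")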
Styles APA, Harvard, Vancouver, ISO, etc.
38

Adolfsson, Rickard, et Eric Andersson. « Improving sales forecast accuracy for restaurants ». Thesis, Linköpings universitet, Institutionen för datavetenskap, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-165034.

Texte intégral
Résumé :
Data mining and machine learning techniques are becoming more popular in helping companies with decision-making, due to their ability to automatically search through very large amounts of data and discover patterns that can be hard to see with human eyes. Onslip is one of the companies looking to achieve more value from its data. They provide a cloud-based cash register to small businesses, with a primary focus on restaurants. Restaurants are heavily affected by variations in sales. They sell products with short expiration dates and low profit margins, and much of their expense is tied to personnel. By predicting future demand, it is possible to plan inventory levels and make more effective employee schedules, thus reducing food waste and putting less stress on workers. The project described in this report examines how sales forecasts can be improved by incorporating factors known to affect sales in the training of machine learning models. Several different models are trained to predict the future sales of 130 different restaurants, using varying amounts of additional information. The accuracies of the predictions are then compared. Factors known to impact sales have been chosen and categorized into restaurant information, sales history, calendar data and weather information. The results show that, by providing additional information, the vast majority of forecasts could be improved significantly. In 7 of 8 examined cases, the addition of more sales factors had an average positive effect on the predictions. The average improvement was 6.88% for product sales predictions, and 26.62% for total sales. The sales history information was most important to the models' decisions, followed by the calendar category. It also became evident that not every factor that impacts sales had been captured, and further improvement is possible by examining each company individually.
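The comparison described in the abstract (training the same model with and without extra sales factors and measuring the change in error) can be sketched as follows. The file name, the column names and the choice of gradient boosting are assumptions made for illustration; the thesis's actual data and models are not reproduced here.

    # Sketch: does adding calendar/weather features reduce forecast error? Hypothetical columns.
    import pandas as pd
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import mean_absolute_percentage_error

    df = pd.read_csv("daily_sales.csv")  # hypothetical file: one row per restaurant-day
    base_features  = ["sales_lag_1", "sales_lag_7", "rolling_mean_28"]                    # sales history only
    extra_features = base_features + ["weekday", "is_holiday", "temperature", "precipitation"]

    X_train, X_test, y_train, y_test = train_test_split(
        df[extra_features], df["sales"], test_size=0.2, shuffle=False)  # keep the time order

    for cols, label in [(base_features, "history only"),
                        (extra_features, "history + calendar/weather")]:
        model = GradientBoostingRegressor().fit(X_train[cols], y_train)
        error = mean_absolute_percentage_error(y_test, model.predict(X_test[cols]))
        print(f"{label}: MAPE = {error:.1%}")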
Styles APA, Harvard, Vancouver, ISO, etc.
39

Fuchs, Patrick [Verfasser], et Christoph [Akademischer Betreuer] Garbe. « Efficient and Accurate Segmentation of Defects in Industrial CT Scans / Patrick Fuchs ; Betreuer : Christoph Garbe ». Heidelberg : Universitätsbibliothek Heidelberg, 2021. http://d-nb.info/1230475885/34.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
40

Binun, Alexander [Verfasser]. « High Accuracy Design Pattern Detection / Alexander Binun ». Bonn : Universitäts- und Landesbibliothek Bonn, 2012. http://d-nb.info/1043911294/34.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
41

Wasenmüller, Oliver [Verfasser]. « Towards an Accurate RGB-D Benchmark, Mapping and Odometry as well as their Applications / Oliver Wasenmüller ». München : Verlag Dr. Hut, 2017. http://d-nb.info/1147674469/34.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
42

Fürsattel, Peter [Verfasser], Andreas [Akademischer Betreuer] Maier et Andreas [Gutachter] Maier. « Accurate Measurements with Off-the-Shelf Range Cameras / Peter Fürsattel ; Gutachter : Andreas Maier ; Betreuer : Andreas Maier ». Erlangen : Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), 2018. http://d-nb.info/116118435X/34.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
43

Brunner, René. « Trade-off among timeliness, messages and accuracy for large-scale information management ». Doctoral thesis, Universitat Politècnica de Catalunya, 2011. http://hdl.handle.net/10803/80541.

Texte intégral
Résumé :
The increasing amount of data and the number of nodes in large-scale environments require new techniques for information management. Examples of such environments are the decentralized infrastructures of Computational Grid and Computational Cloud applications. These large-scale applications need different kinds of aggregated information such as resource monitoring, resource discovery or economic information. The challenge of providing timely and accurate information in large-scale environments arises from the distribution of the information. Reasons for delays in distributed information systems are long information transmission times due to distribution, churn and failures. A problem of large applications such as peer-to-peer (P2P) systems is the increasing retrieval time of the information due to the decentralization of the data and the proneness to failure. However, many applications need a timely information provision. Another problem is the increasing network consumption when the application scales to millions of users and data items. Using approximation techniques allows reducing the retrieval time and the network consumption; however, it decreases the accuracy of the results. Thus, the remaining problem is to offer a trade-off that resolves the conflicting requirements of fast information retrieval, accurate results and low messaging cost. Our goal is a self-adaptive decision mechanism that offers a trade-off among the retrieval time, the network consumption and the accuracy of the result. Self-adaptation enables distributed software to modify its behavior based on changes in the operating environment. In large-scale information systems that use hierarchical data aggregation, we apply self-adaptation to control the approximation used for the information retrieval and thereby reduce the network consumption and the retrieval time. The hypothesis of the thesis is that approximation techniques can reduce the retrieval time and the network consumption while guaranteeing an accuracy of the results that reflects the user's defined priorities. First, this research addresses the problem of a trade-off among timely information retrieval, accurate results and low messaging cost by proposing a summarization algorithm for resource discovery in P2P content networks. After identifying how summarization can improve the discovery process, we propose an algorithm which uses a precision-recall metric to compare the accuracy and to offer a user-driven trade-off. Second, we propose an algorithm that applies self-adaptive decision making on each node. The decision is whether to prune the query and return the result, or to continue the query. Pruning reduces the retrieval time and the network consumption at the cost of a lower accuracy, in contrast to continuing the query. The algorithm uses an analytic hierarchy process to assess the user's priorities and to propose a trade-off that satisfies the accuracy requirements with a low message cost and a short delay. A quantitative analysis evaluates the presented algorithms with a simulator fed with real data of a network topology and the nodes' attributes. Using a simulator instead of the prototype allows evaluation at the large scale of several thousand nodes. The content summarization algorithm is evaluated with half a million resources and with different query types. The self-adaptive algorithm is evaluated with a simulator of several thousand nodes created from real data. A qualitative analysis addresses the integration of the simulator's components into existing market frameworks for Computational Grid and Cloud applications. The proposed content summarization algorithm reduces the information retrieval time from a logarithmic increase to a constant factor. Furthermore, the message size is reduced significantly by applying the summarization technique. For the user, a precision-recall metric allows defining the relation between the retrieval time and the accuracy. The self-adaptive algorithm reduces the number of messages needed from an exponential increase to a constant factor. At the same time, the retrieval time is reduced to a constant factor under an increasing number of nodes. Finally, the algorithm delivers the data with the required accuracy, adjusting the depth of the query according to the network conditions.
La gestió de la informació exigeix noves tècniques que tractin amb la creixent quantitat de dades i nodes en entorns a gran escala. Alguns exemples d’aquests entorns són les infraestructures descentralitzades de Computacional Grid i Cloud. Les aplicacions a gran escala necessiten diferents classes d’informació agregada com monitorització de recursos i informació econòmica. El desafiament de proporcionar una provisió ràpida i acurada d’informació en ambients de grans escala sorgeix de la distribució de la informació. Una raó és que el sistema d’informació ha de tractar amb l’adaptabilitat i fracassos d’aquests ambients. Un problema amb aplicacions molt grans com en sistemes peer-to-peer (P2P) és el creixent temps de recuperació de l’informació a causa de la descentralització de les dades i la facilitat al fracàs. No obstant això, moltes aplicacions necessiten una provisió d’informació puntual. A més, alguns usuaris i aplicacions accepten inexactituds dels resultats si la informació es reparteix a temps. A més i més, el consum de xarxa creixent fa que sorgeixi un altre problema per l’escalabilitat del sistema. La utilització de tècniques d’aproximació permet reduir el temps de recuperació i el consum de xarxa. No obstant això, l’ús de tècniques d’aproximació disminueix la precisió dels resultats. Així, el problema restant és oferir un compromís per resoldre els requisits en conflicte d’extracció de la informació ràpida, resultats acurats i cost d’enviament baix. El nostre objectiu és obtenir un mecanisme de decisió completament autoadaptatiu per tal d’oferir el compromís entre temps de recuperació, consum de xarxa i precisió del resultat. Autoadaptacío permet al programari distribuït modificar el seu comportament en funció dels canvis a l’entorn d’operació. En sistemes d’informació de gran escala que utilitzen agregació de dades jeràrquica, l’auto-adaptació permet controlar l’aproximació utilitzada per a l’extracció de la informació i redueixen el consum de xarxa i el temps de recuperació. La hipòtesi principal d’aquesta tesi és que els tècniques d’aproximació permeten reduir el temps de recuperació i el consum de xarxa mentre es garanteix una precisió adequada definida per l’usari. La recerca que es presenta, introdueix un algoritme de sumarització de continguts per a la descoberta de recursos a xarxes de contingut P2P. Després d’identificar com sumarització pot millorar el procés de descoberta, proposem una mètrica que s’utilitza per comparar la precisió i oferir un compromís definit per l’usuari. Després, introduïm un algoritme nou que aplica l’auto-adaptació a un ordre per satisfer els requisits de precisió amb un cost de missatge baix i un retard curt. Basat en les prioritats d’usuari, l’algoritme troba automàticament un compromís. L’anàlisi quantitativa avalua els algoritmes presentats amb un simulador per permetre l’evacuació d’uns quants milers de nodes. El simulador s’alimenta amb dades d’una topologia de xarxa i uns atributs dels nodes reals. L’algoritme de sumarització de contingut s’avalua amb mig milió de recursos i amb diferents tipus de sol·licituds. L’anàlisi qualitativa avalua la integració del components del simulador en estructures de mercat existents per a aplicacions de Computacional Grid i Cloud. Així, la funcionalitat implementada del simulador (com el procés d’agregació i la query language) és comprovada per la integració de prototips. 
L’algoritme de sumarització de contingut proposat redueix el temps d’extracció de l’informació d’un augment logarítmic a un factor constant. A més, també permet que la mida del missatge es redueix significativament. Per a l’usuari, una precision-recall mètric permet definir la relació entre el nivell de precisió i el temps d’extracció de la informació. Alhora, el temps de recuperació es redueix a un factor constant sota un nombre creixent de nodes. Finalment, l’algoritme reparteix les dades amb la precisió exigida i ajusta la profunditat de la sol·licitud segons les condicions de xarxa. Els algoritmes introduïts són prometedors per ser utilitzats per l’agregació d’informació en nous sistemes de gestió de la informació de gran escala en el futur.
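The per-node decision described above (prune the query and return a partial result, or forward it further) can be pictured with a toy weighted score over the user's priorities for accuracy, message cost and delay. The weights and the outcome estimates below are invented, and the plain weighted sum merely stands in for the analytic-hierarchy-process weighting used in the thesis.

    # Toy sketch of a prune-or-continue decision: weight the user's priorities (accuracy,
    # message cost, delay) and compare the two options' estimated outcomes. This is a plain
    # weighted sum standing in for the AHP-based weighting described in the abstract.
    def should_prune(est_accuracy_if_pruned, weights=(0.8, 0.1, 0.1)):
        w_acc, w_msg, w_delay = weights        # accuracy-heavy user priorities, assumed normalized
        # normalized scores in [0, 1]: pruning is cheap and fast but less accurate,
        # continuing the query is accurate but costs messages and time
        prune_score    = w_acc * est_accuracy_if_pruned + w_msg * 1.0 + w_delay * 1.0
        continue_score = w_acc * 1.0                    + w_msg * 0.2 + w_delay * 0.3
        return prune_score >= continue_score

    # Example: with an accuracy-heavy weighting, prune only if the partial result is good enough.
    print(should_prune(est_accuracy_if_pruned=0.95))  # True: the partial result is already accurate
    print(should_prune(est_accuracy_if_pruned=0.60))  # False: keep forwarding the query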
Styles APA, Harvard, Vancouver, ISO, etc.
44

Do, Changhee. « Improvement in accuracy using records lacking sire information in the animal model ». Diss., Virginia Tech, 1992. http://hdl.handle.net/10919/39430.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
45

Gordon, Colin Cedric. « The influence of age and gender on information processing rate and accuracy / ». The Ohio State University, 2000. http://rave.ohiolink.edu/etdc/view?acc_num=osu1488199501405738.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
46

Răzman, Diana Cristina. « Replaying history : Accuracy and authenticity in historical video game narratives ». Thesis, Högskolan i Skövde, Institutionen för informationsteknologi, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-18962.

Texte intégral
Résumé :
In this research paper, I develop a conceptual framework through which I identify two ways in which historical practices, events, and spaces are represented and engaged with in video games. The concepts I propose are historical accuracy, reflecting well-established narratives and a high fidelity to factual data, and historical authenticity, reflecting lesser-known narratives and a more complex and sometimes abstract interpretation of history. The research concentrates on the modalities in which history is represented in mainstream video games, what similarities or dissimilarities can be drawn from the analysis of various historical digital games, and how these games can be designed to foster diversity and fair representation.
Styles APA, Harvard, Vancouver, ISO, etc.
47

Ferreira, Filipa. « Automatic and accurate segmentation of thoracic aortic aneurysms from X-ray CT angiography ». Thesis, Kingston University, 2012. http://eprints.kingston.ac.uk/26293/.

Texte intégral
Résumé :
The scope of this dissertation is to propose a novel, fully automated computer-aided detection and measurement (CAD/CAM) system for thoracic aortic aneurysms. More explicitly, the objective of the algorithm is to segment the thoracic aorta as accurately as possible and to detect possible existing aneurysms in Computed Tomography Angiography (CTA) images. In biomedical imaging, the manual examination and analysis of aortic aneurysms is a particularly laborious and time-consuming undertaking. Humans are susceptible to committing errors, and their analysis is usually subjective and qualitative due to inter- and intra-observer variability. Objective and quantitative analysis facilitated by the application developed in this project leads to a more accurate diagnostic decision by the physician. In this context, the project is concerned with the automatic analysis of thoracic aneurysms from CTA images. The project initially examines the theoretical background of the anatomy of the aorta and of aneurysms. The concepts of image segmentation and, in particular, vessel segmentation methods are reviewed. An algorithm is then developed and implemented such that it conforms to the requirements put forth in the stated objectives. For the purpose of testing the proposed approach, a significant number of 3D clinical CTA datasets of the thoracic aorta form the framework of the CAD/CAM system. This is followed by a presentation and discussion of the results. The system has been validated on a clinical dataset of 30 CTA scans, of which 28 contained aneurysms. There were 30 CTA scans used as a training dataset for parameter selection and another 30 CTA scans used as a test dataset, in total 60 for clinical evaluation. The radiologist visually inspected the CAD and CAM component results and confirmed that the system correctly detected and segmented the TAA on all datasets, giving 100% sensitivity. We were able to conclude that there is distinct potential for use of our fully automated CAD/CAM system in a real clinical setting. Although other CAD/CAM systems have been developed for the detection of other organs and even of small sections of the thoracic aorta, to date no fully automated CAD/CAM system for the entire thoracic aorta had been developed, hence its novelty. The proposed CAD/CAM system is integrated into a Medical Images Processing, Seamless and Secure Sharing Platform (MIPS3), a user-friendly interface that has been developed alongside this project.
Styles APA, Harvard, Vancouver, ISO, etc.
48

Wolters, Dominik [Verfasser]. « Robust and Accurate Detection of Mid-level Primitives for 3D Reconstruction in Man-Made Environments / Dominik Wolters ». Kiel : Universitätsbibliothek Kiel, 2019. http://d-nb.info/1175507792/34.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
49

Hyllienmark, Erik. « Evaluation of two vulnerability scanners accuracy and consistency in a cyber range ». Thesis, Linköpings universitet, Institutionen för datavetenskap, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-160092.

Texte intégral
Résumé :
One challenge when conducting exercises in a cyber range is knowing what applications and vulnerabilities are present on deployed computers. In this paper, the reliability of application and vulnerability reporting by two vulnerability scanners, OpenVAS and Nexpose, has been evaluated based on their accuracy and consistency. In a follow-up experiment, the configurations of two virtual computers were varied in order to identify where each scanner gathers its information. Accuracy was evaluated with the F1-score, which combines the precision and recall metrics into a single number. Precision and recall values were calculated by comparing the applications and vulnerabilities installed on the virtual computers with the scanning reports. Consistency was evaluated by quantifying, as a number between 0 and 1, how similar the reported applications and vulnerabilities were across multiple vulnerability scans. The vulnerabilities reported by both scanners were also combined using their union and intersection to increase the accuracy. The evaluation reveals that neither Nexpose nor OpenVAS accurately and consistently reports installed applications and vulnerabilities. Nexpose reported vulnerabilities better than OpenVAS, with an accuracy of 0.78, and also reported applications more accurately, with an accuracy of 0.96. Neither scanner reported applications and vulnerabilities consistently over three vulnerability scans. By taking the union of the vulnerabilities reported by both scanners, the accuracy increased by 8 percent compared with the accuracy of Nexpose alone. However, our conclusion is that the scanners' reporting does not perform well enough to be used for a reliable inventory of applications and vulnerabilities in a cyber range.
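The F1-score and the union of two scanners' findings, as described above, can be computed from sets of reported and actually installed vulnerabilities. The sketch below uses invented CVE identifiers and is only meant to show how the union trades precision for recall; it is not data from the thesis.

    # Sketch: precision, recall and F1 for a scanner report compared with ground truth,
    # plus the union of two scanners' findings. The CVE identifiers are made up for illustration.
    def f1(reported, installed):
        tp = len(reported & installed)
        precision = tp / len(reported) if reported else 0.0
        recall = tp / len(installed) if installed else 0.0
        return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

    installed = {"CVE-2019-0001", "CVE-2019-0002", "CVE-2019-0003", "CVE-2019-0004"}
    nexpose   = {"CVE-2019-0001", "CVE-2019-0002", "CVE-2019-0003"}
    openvas   = {"CVE-2019-0002", "CVE-2019-0004", "CVE-2019-9999"}  # one false positive

    print(f"Nexpose F1: {f1(nexpose, installed):.2f}")
    print(f"OpenVAS F1: {f1(openvas, installed):.2f}")
    print(f"Union F1:   {f1(nexpose | openvas, installed):.2f}")  # union trades precision for recall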
Styles APA, Harvard, Vancouver, ISO, etc.
50

Zakos, John, et n/a. « A Novel Concept and Context-Based Approach for Web Information Retrieval ». Griffith University. School of Information and Communication Technology, 2005. http://www4.gu.edu.au:8080/adt-root/public/adt-QGU20060303.104937.

Texte intégral
Résumé :
Web information retrieval is a relatively new research area that has attracted a significant amount of interest from researchers around the world since the emergence of the World Wide Web in the early 1990s. The problems facing successful web information retrieval are a combination of challenges that stem from traditional information retrieval and challenges characterised by the nature of the World Wide Web. The goal of any information retrieval system is to fulfil an information need. In a web setting, this means retrieving as many relevant web documents as possible in response to a query that is typically limited to a few terms expressive of the user's information need. This thesis is primarily concerned with, firstly, reviewing pertinent literature on various aspects of web information retrieval research and, secondly, proposing and investigating a novel concept- and context-based approach. The approach consists of techniques that can be used together or independently and aims to provide an improvement in retrieval accuracy over other approaches. A novel concept-based term weighting technique is proposed as a new method of deriving query term significance from ontologies, which can be used to weight input queries. A technique that dynamically determines the significance of terms occurring in documents based on the matching of contexts is also proposed. Other contributions of this research include techniques for combining document and query term weights for the ranking of retrieved documents. All techniques were implemented and tested on benchmark data. This provides a basis for comparison with previous top-performing web information retrieval systems. High retrieval accuracy is reported as a result of utilising the proposed approach. This is supported by comprehensive experimental evidence and favourable comparisons against previously published results.
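The final ranking step mentioned above (combining document and query term weights) can be pictured with the small sketch below. The weights are hypothetical placeholders; deriving query weights from ontologies and document weights from context matching, as the thesis does, is not reproduced here.

    # Sketch of ranking by combining query-term and document-term weights (placeholder values).
    query_weights = {"jaguar": 0.9, "speed": 0.4}        # significance of each query term
    doc_term_weights = {
        "doc1": {"jaguar": 0.8, "speed": 0.7},
        "doc2": {"jaguar": 0.2, "speed": 0.9},
    }

    def score(doc_weights, query_weights):
        # sum over query terms of (query weight) x (document weight for that term)
        return sum(query_weights[t] * doc_weights.get(t, 0.0) for t in query_weights)

    ranked = sorted(doc_term_weights,
                    key=lambda d: score(doc_term_weights[d], query_weights),
                    reverse=True)
    print(ranked)  # documents ordered by combined weight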
Styles APA, Harvard, Vancouver, ISO, etc.