Follow this link to see other types of publications on the topic: Commodity hardware.

Theses on the topic "Commodity hardware"

Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles


Consult the 16 best theses for your research on the topic "Commodity hardware".

Next to each source in the reference list there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference for the chosen work in whichever citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Explore theses on a wide variety of disciplines and organize your bibliography correctly.

1

Holstius, David. "Monitoring Particulate Matter with Commodity Hardware". Thesis, University of California, Berkeley, 2014. http://pqdtopen.proquest.com/#viewpdf?dispub=3640465.

Full text
Abstract

Health effects attributed to outdoor fine particulate matter (PM2.5) rank it among the risk factors with the highest health burdens in the world, annually accounting for over 3.2 million premature deaths and over 76 million lost disability-adjusted life years. Existing PM2.5 monitoring infrastructure cannot, however, be used to resolve variations in ambient PM2.5 concentrations with adequate spatial and temporal density, or with adequate coverage of human time-activity patterns, such that the needs of modern exposure science and control can be met. Small, inexpensive, and portable devices, relying on newly available off-the-shelf sensors, may facilitate the creation of PM2.5 datasets with improved resolution and coverage, especially if many such devices can be deployed concurrently with low system cost.

Datasets generated with such technology could be used to overcome many important problems associated with exposure misclassification in air pollution epidemiology. Chapter 2 presents an epidemiological study of PM2.5 that used data from ambient monitoring stations in the Los Angeles basin to observe a decrease of 6.1 g (95% CI: 3.5, 8.7) in population mean birthweight following in utero exposure to the Southern California wildfires of 2003, but was otherwise limited by the sparsity of the empirical basis for exposure assessment. Chapter 3 demonstrates technical potential for remedying PM2.5 monitoring deficiencies, beginning with the generation of low-cost yet useful estimates of hourly and daily PM2.5 concentrations at a regulatory monitoring site. The context (an urban neighborhood proximate to a major goods-movement corridor) and the method (an off-the-shelf sensor costing approximately US$10, combined with other low-cost, open-source, readily available hardware) were selected to have special significance among researchers and practitioners affiliated with contemporary communities of practice in public health and citizen science. As operationalized by correlation with 1 h data from a Federal Equivalent Method (FEM) β-attenuation monitor, prototype instruments performed as well as commercially available equipment costing considerably more, and as well as another reference instrument under similar conditions at the same timescale (R² = 0.6). Correlations were stronger when 24 h integrating times were used instead (R² = 0.72).
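The calibration step described above can be illustrated with a short, self-contained sketch: it fits a linear correction from a low-cost sensor's raw readings to collocated reference (FEM-style) concentrations and reports R² at the chosen averaging time. The numbers and variable names below are synthetic placeholders for illustration, not data from the thesis.

```python
# Hypothetical illustration: fit a linear calibration of a low-cost PM sensor
# against collocated reference (FEM-style) data and report R^2.
# The numbers below are synthetic, not measurements from the thesis.

def linear_fit(x, y):
    """Ordinary least-squares fit y = a + b*x; returns (a, b)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    return my - b * mx, b

def r_squared(y, y_hat):
    """Coefficient of determination between observed and predicted values."""
    my = sum(y) / len(y)
    ss_res = sum((yi - yh) ** 2 for yi, yh in zip(y, y_hat))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return 1.0 - ss_res / ss_tot

# Hourly raw sensor output (arbitrary units) and reference PM2.5 (ug/m^3).
sensor = [120, 180, 150, 300, 260, 90, 210, 170]
reference = [8.0, 12.5, 10.1, 21.0, 18.2, 6.4, 14.8, 11.9]

a, b = linear_fit(sensor, reference)
predicted = [a + b * s for s in sensor]
print(f"calibration: PM2.5 ~= {a:.2f} + {b:.4f} * raw, R^2 = {r_squared(reference, predicted):.2f}")
```

Averaging the paired series to 24 h blocks before fitting would mimic the longer integrating times reported above.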

Chapter 4 replicates and extends the results of Chapter 3, showing that similar calibrations may be reasonably exchangeable between near-roadway and background monitoring sites. Chapter 4 also employs triplicate sensors to obtain data consistent with near-field (< 50 m) observations of plumes from a major highway (I-880). At 1-minute timescales, maximum PM2.5 concentrations on the order of 100 μg m⁻³ to 200 μg m⁻³ were observed, commensurate with the magnitude of plumes from wildfires on longer timescales, as well as the magnitude of plumes that might be expected near other major highways on the same timescale. Finally, Chapter 4 quantifies variance among calibration parameters for a large sample of the sensors, as well as the error associated with the remote transfer of calibrations between two sufficiently large sets (±10% for n = 12). These findings suggest that datasets generated with similar sensors could also improve upstream scientific understandings of fluxes resulting from indoor and outdoor emissions, atmospheric transformations, and transport, and may also facilitate timely and empirical verification of interventions to reduce emissions and exposures in many important contexts (e.g., the provision of improved cookstoves; congestion pricing; mitigation policies attached to infill development). They also demonstrate that calibrations against continuous reference monitoring equipment could be remotely transferred, within practical tolerances, to reasonably sized and adequately resourced participatory monitoring campaigns, with minimal risk of disruption to existing monitoring infrastructure (i.e., established monitoring sites). Given a collaborator with a short window of access to a reference monitoring site, this would overcome a nominally important barrier associated with non-gravimetric, in-situ calibration of continuous PM2.5 monitors. Progressive and disruptive prospects linked to a proliferation of comparable sensing technologies based on commodity hardware are discussed in Chapter 5.

2

Beltrán Querol, Vicenç. "Improving web server efficiency on commodity hardware". Doctoral thesis, Universitat Politècnica de Catalunya, 2008. http://hdl.handle.net/10803/6024.

Full text
Abstract
The rapid growth of the Web requires a large amount of computational resources that must be used efficiently. Today, servers built from commodity hardware are the preferred platforms for running web servers, since they offer the best performance/cost ratio. The work presented in this thesis is aimed at improving the efficiency of resource management in current web servers. To achieve the goals of this thesis, the behaviour of web servers was characterised in several representative environments in order to identify the problems and bottlenecks that limit web server performance. This study identified two main problems that reduce how efficiently web servers use the available hardware resources. The first is the evolution of the HTTP protocol to incorporate persistent connections and security, which reduces performance and increases the configuration complexity of web servers. The second is the nature of some web applications, which are bound by physical memory or disk bandwidth, preventing proper use of the resources present in multiprocessor machines. To solve these two problems we propose two techniques. First, the hybrid architecture, an evolution of the multi-threaded architecture that can easily be implemented in current web servers, notably improves connection management and reduces the configuration complexity of the whole system. Second, we implemented main-memory compression in the Linux kernel to improve the performance of applications whose bottleneck is memory, thereby improving the utilisation of the available resources. The results of this thesis are backed by an exhaustive experimental evaluation that has proved the effectiveness and viability of our proposals. It is worth noting that the hybrid web server architecture proposed in this thesis has recently been implemented in well-known web servers such as Apache, Tomcat and Glassfish.
The unstoppable growth of the World Wide Web requires a huge amount of computational resources that must be used efficiently. Nowadays, commodity hardware is the preferred platform to run web server systems because it is the most cost-effective solution. The work presented in this thesis aims to improve the efficiency of current web server systems, allowing the web servers to make the most of hardware resources. To this end, we first characterize current web server systems and identify the problems that hinder web servers from providing an efficient utilization of resources. From the study of web servers in a wide range of situations and environments, we have identified two main issues that prevent web server systems from efficiently using current hardware resources. The first is the extension of the HTTP protocol to include connection persistence and security, which dramatically impacts the performance and configuration complexity of traditional multi-threaded web servers. The second is the memory-bounded or disk-bounded nature of some web workloads, which prevents the full utilization of the abundant CPU resources available on current commodity hardware. We propose two novel techniques to overcome the main problems with current web server systems. Firstly, we propose a Hybrid web server architecture which can be easily implemented in any multi-threaded web server to improve CPU utilization and provide better management of client connections. Secondly, we describe a main memory compression technique implemented in the Linux operating system that makes optimum use of current multiprocessor hardware, in order to improve the performance of memory-bound web applications. The thesis is supported by an exhaustive experimental evaluation that proves the effectiveness and feasibility of our proposals for current systems. It is worth noting that the main concepts behind the Hybrid architecture have recently been implemented in popular web servers like Apache, Tomcat and Glassfish.
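As a rough, hedged illustration of the hybrid idea (event-driven handling of many idle connections combined with a thread pool for per-request work), the following minimal Python sketch multiplexes client sockets with the standard `selectors` module and hands complete requests to worker threads. It is a toy model of the architecture described above, not the implementation evaluated in the thesis or the one adopted by Apache, Tomcat, or Glassfish.

```python
# Minimal sketch of a "hybrid" server: one event loop multiplexes connections,
# a thread pool performs the (potentially blocking) per-request work.
# Illustrative only; not the thesis implementation.
import selectors
import socket
from concurrent.futures import ThreadPoolExecutor

sel = selectors.DefaultSelector()
pool = ThreadPoolExecutor(max_workers=8)

def handle_request(conn, request_bytes):
    # Worker thread: a real server would parse request_bytes; here we just
    # send a trivial HTTP response and close the connection.
    body = b"hello from the hybrid sketch\n"
    conn.setblocking(True)
    conn.sendall(b"HTTP/1.0 200 OK\r\nContent-Length: %d\r\n\r\n" % len(body) + body)
    conn.close()

def on_readable(conn, buffers):
    data = conn.recv(4096)
    if not data:                            # client closed the connection
        sel.unregister(conn)
        conn.close()
        return
    buffers[conn] = buffers.get(conn, b"") + data
    if b"\r\n\r\n" in buffers[conn]:        # request headers complete
        sel.unregister(conn)                # event loop is done with this socket
        pool.submit(handle_request, conn, buffers.pop(conn))

def serve(port=8080):
    buffers = {}
    listener = socket.socket()
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listener.bind(("", port))
    listener.listen(128)
    listener.setblocking(False)
    sel.register(listener, selectors.EVENT_READ)
    while True:
        for key, _ in sel.select():
            if key.fileobj is listener:
                conn, _addr = listener.accept()
                conn.setblocking(False)
                sel.register(conn, selectors.EVENT_READ)
            else:
                on_readable(key.fileobj, buffers)

if __name__ == "__main__":
    serve()
```

The point of the split is that idle persistent connections cost only an entry in the selector, while the small pool of worker threads stays busy with requests that actually need CPU.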
3

Egi, Norbert. "Software virtual routers on commodity hardware architectures". Thesis, Lancaster University, 2009. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.539674.

Full text
4

Botnen, Martin and Harald Ueland. "Using Commodity Graphics Hardware for Medical Image Segmentation". Thesis, Norwegian University of Science and Technology, Department of Computer and Information Science, 2005. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-9191.

Full text
Abstract

Modern graphics processing units (GPUs) have evolved into high-performance processors with fully programmable vertex and fragment stages. As their functionality and performance keep increasing, more programmers are drawn to their computational power. This has led to extensive use of the GPU as a computational resource in general-purpose computing, not just in entertainment applications and computer games. Medical image segmentation involves large volume data sets. It is a time-consuming task, but it is important for detecting and identifying special structures and objects. In this thesis we investigate the possibility of using commodity graphics hardware for medical image segmentation. Using a high-level shading language, and utilizing state-of-the-art technology such as the framebuffer object (FBO) extension and a modern programmable GPU, we perform seeded region growing (SRG) on medical volume data. We also implement two pre-processing filters on the GPU, a median filter and a nonlinear anisotropic diffusion filter, along with a volume visualizer that renders volume data. In our work, we managed to port the seeded region growing algorithm from the CPU programming model onto the GPU programming model. The GPU implementation was successful, but we did not obtain the desired reduction in computation time. In comparison with an equivalent CPU implementation, we found that the GPU version was outperformed. This is most likely due to the overhead associated with the setup of shaders and render targets (FBOs) while running the SRG. The algorithm has low computational cost, and if a more complex and sophisticated method were implemented on the GPU, its computational capacity and parallelism would be better utilized; a speed-up over a CPU implementation would then be more likely. Our work involving a 3D nonlinear anisotropic diffusion filter strongly suggests this.
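To make the segmentation approach concrete, here is a small, hedged sketch of seeded region growing on a 2D intensity array: starting from a seed, it repeatedly absorbs neighbouring pixels whose intensity lies within a tolerance of the seed value. This is a plain CPU version of the classic algorithm on synthetic data, not the shader-based GPU implementation the thesis evaluates.

```python
# Plain-CPU sketch of seeded region growing (SRG) on a 2D image.
# The GPU version in the thesis performs the growth step in fragment shaders
# over render targets; this sketch only illustrates the algorithm itself.
from collections import deque

def seeded_region_growing(image, seed, tolerance):
    """Return a boolean mask of pixels connected to `seed` whose intensity
    differs from the seed intensity by at most `tolerance`."""
    rows, cols = len(image), len(image[0])
    seed_value = image[seed[0]][seed[1]]
    mask = [[False] * cols for _ in range(rows)]
    queue = deque([seed])
    mask[seed[0]][seed[1]] = True
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):   # 4-connectivity
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and not mask[nr][nc]:
                if abs(image[nr][nc] - seed_value) <= tolerance:
                    mask[nr][nc] = True
                    queue.append((nr, nc))
    return mask

# Tiny synthetic "slice": a bright object (values ~200) on a dark background.
slice_ = [
    [10, 12, 11, 13, 12],
    [11, 198, 201, 14, 10],
    [12, 202, 205, 199, 11],
    [13, 15, 200, 12, 13],
]
region = seeded_region_growing(slice_, seed=(1, 1), tolerance=20)
print(*("".join("#" if v else "." for v in row) for row in region), sep="\n")
```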

5

Zhang, Tong. "Designing Practical Software Bug Detectors Using Commodity Hardware and Common Programming Patterns". Diss., Virginia Tech, 2020. http://hdl.handle.net/10919/96422.

Full text
Abstract
Software bugs can cost millions and affect people's daily lives. However, many bug detection tools are not practical in reality, which hinders their wide adoption. There are three main concerns regarding existing bug detectors: 1) run-time overhead in dynamic bug detectors, 2) space overhead in dynamic bug detectors, and 3) scalability and precision issues in static bug detectors. With those in mind, we propose to: 1) leverage commodity hardware to reduce run-time overhead, 2) reuse metadata maintained by one bug detector to detect other types of bugs, reducing space overhead, and 3) apply programming idioms to static analyses, improving scalability and precision. We demonstrate the effectiveness of the three approaches on data race bugs, memory safety bugs, and permission check bugs, respectively. First, we selectively leverage commodity hardware transactional memory (HTM) so that the dynamic data race detector is used only when necessary, thereby reducing the overhead from 11.68x to 4.65x. We then present a production-ready data race detector, which incurs only a 2.6% run-time overhead, by using performance monitoring units (PMUs) for online memory access sampling and offline unsampled memory access reconstruction. Second, for memory safety bugs, which are more common than data races, we provide practical temporal memory safety on top of the spatial memory safety of Intel MPX in a memory-efficient manner without additional hardware support. We achieve this by reusing the metadata and checks already available in Intel MPX-instrumented applications, thereby offering full memory safety at only 36% memory overhead. Finally, we design a scalable and precise function pointer analysis tool leveraging indirect call usage patterns in the Linux kernel. We applied the tool to the detection of permission check bugs; the detector found 14 previously unknown bugs within a limited time budget.
Doctor of Philosophy
Software bugs have caused many real-world problems, e.g., the 2003 Northeast blackout and the Facebook stock price mismatch. Finding bugs is critical to solving those problems. Unfortunately, many existing bug detectors suffer from high run-time and space overheads as well as scalability and precision issues. In this dissertation, we address the limitations of bug detectors by leveraging commodity hardware and common programming patterns. In particular, we focus on improving the run-time overhead of dynamic data race detectors, the space overhead of a memory safety bug detector, and the scalability and precision of a Linux kernel permission check bug detector. We first present a data race detector built upon commodity hardware transactional memory that achieves a 7x overhead reduction compared to the state-of-the-art solution (Google's TSAN). We then present a very lightweight sampling-based data race detector which re-purposes performance monitoring hardware for lightweight sampling and uses a novel offline analysis for better race detection capability. Our results show very low overhead (2.6%) and a 27.5% detection probability at a sampling period of 10,000. Next, we present a space-efficient temporal memory safety bug detector layered on a hardware-based spatial memory safety bug detector, without additional hardware support. According to experimental results, our full memory safety solution incurs only a 36% memory overhead with a 60% run-time overhead. Finally, we present a permission check bug detector for the Linux kernel. This bug detector leverages indirect call usage patterns in the Linux kernel for scalable and precise analysis. As a result, within a limited time budget (scalable), the detector discovered 14 previously unknown bugs (precise).
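For readers unfamiliar with what a data race detector actually checks, the following is a generic, hedged sketch of a lockset-style check: two threads access the same address, at least one access is a write, and the accesses hold no lock in common. It illustrates the property being detected, not the HTM- or PMU-based machinery described in the dissertation.

```python
# Generic lockset-style race check over a trace of memory accesses.
# This only illustrates the property a race detector looks for; it is not
# the HTM/PMU-based approach of the dissertation.
from dataclasses import dataclass

@dataclass(frozen=True)
class Access:
    thread: int
    addr: int
    is_write: bool
    locks_held: frozenset

def find_races(trace):
    """Report pairs of accesses to the same address from different threads,
    at least one of which is a write, with no common lock held."""
    races = []
    for i, a in enumerate(trace):
        for b in trace[i + 1:]:
            if (a.addr == b.addr and a.thread != b.thread
                    and (a.is_write or b.is_write)
                    and not (a.locks_held & b.locks_held)):
                races.append((a, b))
    return races

trace = [
    Access(thread=1, addr=0x1000, is_write=True,  locks_held=frozenset({"m"})),
    Access(thread=2, addr=0x1000, is_write=False, locks_held=frozenset({"m"})),  # protected
    Access(thread=2, addr=0x2000, is_write=True,  locks_held=frozenset()),
    Access(thread=1, addr=0x2000, is_write=True,  locks_held=frozenset()),       # racy
]
for a, b in find_races(trace):
    print(f"potential race on {hex(a.addr)} between threads {a.thread} and {b.thread}")
```

Dynamic detectors pay their overhead in collecting such traces and metadata; the dissertation's contribution is making that collection cheap with commodity hardware features.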
6

Sengupta, Aritra. "Efficient Compiler and Runtime Support for Serializability and Strong Semantics on Commodity Hardware". The Ohio State University, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=osu149269601946527.

Full text
7

Somers, Robert Edward. "FlexRender: A distributed rendering architecture for ray tracing huge scenes on commodity hardware". DigitalCommons@CalPoly, 2012. https://digitalcommons.calpoly.edu/theses/812.

Full text
Abstract
As the quest for more realistic computer graphics marches steadily on, the demand for rich and detailed imagery is greater than ever. However, the current "sweet spot" in terms of price, power consumption, and performance is in commodity hardware. If we desire to render scenes with tens or hundreds of millions of polygons as cheaply as possible, we need a way of doing so that maximizes the use of the commodity hardware we already have at our disposal. Techniques such as normal mapping and level of detail have attempted to address the problem by reducing the amount of geometry in a scene. This is problematic for applications that desire or demand access to the scene's full geometric complexity at render time. More recently, out-of-core techniques have provided methods for rendering large scenes when the working set is larger than the available system memory. We propose a distributed rendering architecture based on message-passing that is designed to partition scene geometry across a cluster of commodity machines in a spatially coherent way, allowing the entire scene to remain in-core and enabling the construction of hierarchical spatial acceleration structures in parallel. The results of our implementation show roughly an order of magnitude speedup in rendering time compared to the traditional approach, while keeping memory overhead for message queuing around 1%.
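The core idea of spatially coherent partitioning can be sketched briefly: map each primitive's bounding region into a coarse uniform grid and assign whole grid cells to cluster nodes, so that nearby geometry tends to land on the same machine. The grid resolution and cell-to-node mapping below are illustrative assumptions, not FlexRender's actual partitioning scheme.

```python
# Hedged sketch of spatially coherent geometry partitioning across a cluster:
# bin triangle centroids into a coarse grid, then map grid cells to nodes so
# neighbouring geometry tends to be co-located. Not FlexRender's actual scheme.

def centroid(triangle):
    xs, ys, zs = zip(*triangle)
    return (sum(xs) / 3.0, sum(ys) / 3.0, sum(zs) / 3.0)

def grid_cell(point, scene_min, cell_size):
    return tuple(int((p - lo) // cell_size) for p, lo in zip(point, scene_min))

def partition(triangles, num_nodes, scene_min=(0.0, 0.0, 0.0), cell_size=10.0):
    """Assign each triangle id to a node id; whole cells map to the same node."""
    assignment = {}
    for tri_id, tri in enumerate(triangles):
        cell = grid_cell(centroid(tri), scene_min, cell_size)
        assignment[tri_id] = hash(cell) % num_nodes
    return assignment

# Two nearby triangles and one far away; the nearby pair shares a node.
triangles = [
    [(1, 1, 1), (2, 1, 1), (1, 2, 1)],
    [(3, 2, 1), (2, 3, 1), (3, 3, 1)],
    [(95, 80, 40), (96, 81, 40), (95, 82, 41)],
]
print(partition(triangles, num_nodes=4))
```

Because each node then holds a spatially compact chunk of the scene, it can build its local acceleration structure independently while rays are forwarded between nodes as messages.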
8

Alberts, Andreas Jacobus. "Building a scalable virtual community on commodity hardware and open source software / by Andreas Alberts". Thesis, North-West University, 2008. http://hdl.handle.net/10394/4162.

Full text
Abstract
The information era has brought waves of change that bring affordable technology within the reach of the average person. Computers connected to the Internet, now part of our daily living, have led to the formation of online communities. In the spirit of communal effort, a community cannot be controlled or managed into a specific form or direction. A community needs to concentrate its efforts towards a common goal or vision, and therefore sufficiently non-restrictive infrastructure is needed to enable community members to contribute towards that goal. We design and build infrastructure to support a virtual community, according to the needs of the community. Community members can easily locate and exchange files among each other, interact in private and public chat rooms by means of instant text messages, and make announcements and participate in group discussions in a Web-based environment. Additional needs are identified and addressed by means of various value-adding services. We also formulate a management strategy to lead the community towards self-sustenance.
Thesis (M.Ing. (Computer and Electronic Engineering))--North-West University, Potchefstroom Campus, 2009.
9

Popov, Stefan [Verfasser] and Philipp [Akademischer Betreuer] Slusallek. "Algorithms and data structures for interactive ray tracing on commodity hardware / Stefan Popov. Betreuer: Philipp Slusallek". Saarbrücken : Saarländische Universitäts- und Landesbibliothek, 2012. http://d-nb.info/1052550002/34.

Full text
10

Popov, Stefan [Verfasser] and Philipp [Akademischer Betreuer] Slusallek. "Algorithms and data structures for interactive ray tracing on commodity hardware / Stefan Popov. Betreuer: Philipp Slusallek". Saarbrücken : Saarländische Universitäts- und Landesbibliothek, 2012. http://d-nb.info/1052550002/34.

Full text
11

Kilpatrick, Stephen, Philip M. Westhart and Ben A. Abbott. "AN OPEN, SCALABLE APPROACH TO EFFICIENT DATA PROCESSING". International Foundation for Telemetering, 2016. http://hdl.handle.net/10150/624226.

Full text
Abstract
The growth of network-based systems in flight test will present performance problems within the community. Legacy instrumentation systems are not capable of meeting the high-bandwidth, low-latency data processing requirements of these next-generation data acquisition systems. Ongoing research at Southwest Research Institute is exploring the use of a variety of commodity components, such as Graphics Processing Units (GPUs) and multicore Central Processing Units (CPUs), in ways that can be applied both to small embedded components and to larger ground systems. This paper explores an open, scalable Commercial-Off-The-Shelf (COTS) approach to bridge the gap and minimize changes to the legacy systems. Current results from this approach will be presented at the conference.
12

Fletcher, Jordan L. "Real-time GPS-alternative navigation using commodity hardware". 2007. http://handle.dtic.mil/100.2/ADA471064.

Full text
Abstract
Thesis (M.S.)--Air Force Institute of Technology, 2007.
"AFIT/GCS/ENG/07-02." Title from PDF title screen. Includes bibliographical references (p. 104-115). Also available online via the Defense Technical Information Center website (http://www.dtic.mil/).
13

Robinson, Robert. "An Architecture for Reliable Encapsulation Endpoints using Commodity Hardware". Thesis, 2011. http://hdl.handle.net/10012/5840.

Full text
Abstract
Customized hardware is expensive, and making software reliable becomes increasingly difficult as complexity grows. Recent trends towards computing in the cloud have highlighted the importance of being able to operate continuously in the presence of unreliable hardware and, as services continue to grow in complexity, it is necessary to build systems that are able to operate not only in the presence of unreliable hardware but also of failure-vulnerable software. This thesis describes a newly developed approach for building networking software that exposes a reliable encapsulation service to clients and runs on unreliable, commodity hardware without substantially increasing the implementation complexity. The proposal was implemented in an existing encapsulation system, and experimental analysis has shown that packets are lost for between 200 ms and 1 second during a failover, and that a failover adds less than 5 seconds to the total download time for files of several sizes. The approach described in this thesis demonstrates the viability of building high-availability systems using commodity components and failure-vulnerable server software.
14

Graham, Jason. "Implementation of an interactive volume visualization system using commodity hardware". 2009. http://digital.library.okstate.edu/etd/Graham_okstate_0664M_10470.pdf.

Full text
15

Hirsch, P. "Towards improved insect monitoring systems using UHF RFID and other passive asymmetric digital radio technologies". Thesis, 2019. https://eprints.utas.edu.au/31724/1/Hirsch_whole_thesis_ex_pub_mat.pdf.

Full text
Abstract
Radio technology has been used as a tool to gain insights into animal behaviour since the 1960s, when it was first used to monitor animal locations using animal-mounted transmitters. Since then, ongoing miniaturisation accompanied by cost reduction has enabled new applications in this area, e.g. detecting large numbers of individually tagged insects in a few selected locations using RFID technology, or tracking the location of a small number of tagged insects (typically fewer than 10) over distances of more than 100 m using harmonic radar. It is, however, still impossible to track a large number of individually tagged small animals over distances exceeding a few cm. Yet exactly this combination would be required to gain more insight into the behaviour of honey bees (Apis mellifera), which are crucial for human food production but whose populations have declined steeply in many areas of the world. Presented in two parts, this thesis investigates different aspects of using asymmetric digital radio technology for automatically monitoring a large number of individually tagged small social animals such as honey bees (Apis mellifera). The first part focuses on the prospects and challenges of using UHF RFID as a cost-effective automatic monitoring technology. The second part addresses the limitations in detection range inherent to UHF RFID, as demonstrated in the first part. It presents a roadmap to developing a new class of digitally modulating passive radio tags that combines ideas from harmonic radar and RFID and makes it possible to suppress unwanted reflections of the interrogation signal from the environment (also called 'clutter'), with the aim of increasing tag detection range. Below, I describe both parts in more detail. In the first part, a very affordable first prototype monitoring system based on a compact off-the-shelf USB RFID reader module equipped with a single internal antenna was developed and built. Field trials testing this prototype on honey bees showed that the system is able to capture RFID tag detection data from which temporal variations in hive activity levels can be detected. This data also provides some information about tag recapture and tag reading longevity rates. However, it quickly became apparent that the data quality achievable with this system was limited. For example, with just a single antenna the system could not detect whether the tagged bees were entering or leaving the hive. This problem was addressed by another field trial using two of these RFID reader modules in tandem, connected to a single control computer and operating alternately. Analysis of this data revealed that the reader modules only achieved low detection rates. Unfortunately, both trials frequently suffered from system failures due to overheating and some unknown technical issues, further reducing data quality. To better understand the low detection rates observed in the field experiments, a lab-based robotic measurement system was developed to scan the spatial structure of the detectability range of our tags in the near field of a reader antenna. Measurements performed with this system revealed that the detection range of the RFID reader modules in combination with our tags was limited to less than 10 mm. These insights led to the development of an improved detection system based on a more capable industrial RFID reader module supporting up to 4 antennas, which addresses the needs of CSIRO's Global Initiative for Honeybee Health.
Based on detection range measurements and electromagnetic simulations, an optimized arrangement of four commercially available RFID antennas was devised. Consisting of two opposing pairs of compact ceramic patch antennas, this arrangement led to dramatically improved detection rates, which were confirmed in further field trials using the new system in the course of an honours thesis within our group. The second part addresses the tight detection range limits inherent to UHF RFID, while maintaining the ability to distinguish a large number of individual tag IDs, by developing ideas for a new concept for passive transponders that combines concepts from RFID technology and harmonic radar. As a first step towards this new development, a compact dual-band parasitic dipole antenna was developed using electromagnetic simulations, manufactured as a prototype and tested in the laboratory.
16

Baranawal, Animesh. "Optimizing the Interval-centric Distributed Computing Model for Temporal Graph Algorithms". Thesis, 2022. https://etd.iisc.ac.in/handle/2005/5721.

Full text
Abstract
Graphs with temporal characteristics are increasingly becoming prominent. Their vertices, edges and attributes are annotated with a lifespan, allowing one to add or remove vertices and edges. Such graphs can grow to millions of vertices, billions of edges, and have months or years of data. Time-dependent algorithms such as temporal reachability and shortest paths are designed over such materialised graphs. These algorithms find important use-cases in digital contact tracing, optimising transit routes, and analysing information diffusion over temporal graphs. The Interval-centric Computing Model (ICM) is a recent abstraction over temporal graphs, enabling intuitive development of temporal graph algorithms while ensuring efficient computation and communication. It uses a bulk-synchronous parallel model of execution with data-parallel computation on interval-vertices and message passing at superstep boundaries. To ease the design of temporal algorithms, ICM introduces a novel TimeWarp phase for temporally aligning messages and grouping them against vertex states. However, the warp operator is super-linear in time complexity in the number of messages received at a vertex, and it has additional overheads in the form of message replication. Further, in pipelining the computation and communication phases, ICM may create stale or redundant messages. This thesis primarily designs techniques to mitigate these performance limitations of ICM, and also extends ICM toward incremental graph processing. We propose three different techniques to accelerate the execution model of ICM: Local Warp Unrolling (LU), Deferred Message Scatter (DS) and Windowed ICM (WICM). LU unrolls the messages processed in the TimeWarp phase to reduce the time complexity of the warp operator. DS results in lazy scatter operations that reduce redundant calls to messaging. WICM partitions the temporal graph along the temporal dimension and processes the sub-interval graphs in parts, ensuring proper carryover of vertex states. While LU and DS apply locally to each vertex, WICM applies at the global interval graph level and can be coupled with the other two techniques. While developing these techniques, we identify the constraints that determine which algorithms can be modelled using the optimisations. Further, we also prove the equivalence of the new execution model to ICM's execution for a large class of temporal traversal algorithms. For WICM, not all temporal partitioning strategies give the same execution performance. Hence, we also develop heuristics that use statistics on the global graph topology, together with an analytical model of TimeWarp, to determine the interval partitioning used with WICM. We extensively evaluate these optimisations for six large temporal graphs with up to 133M vertices, 5.5B edges and 365 snapshots, and six graph algorithms, on a 10-node commodity cluster. LU+DS reduce the runtime of ICM by an average of 56%; WICM reduces the runtime by 48% on average over native ICM; and combining these techniques offers an average reduction of 61%. We also conduct experiments to confirm the effectiveness of the heuristic partitioning technique. Finally, we present preliminary results on extending the WICM model to operate over a graph that arrives incrementally, by batching the incoming updates and forming a window out of them to be executed using WICM. This also reduces the memory footprint, since the entire historic graph does not need to be retained in memory.
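The TimeWarp idea of temporally aligning incoming messages against an interval-vertex's state can be sketched with a small interval-overlap routine: split time at every message boundary inside the state's lifespan and group, for each elementary sub-interval, the messages whose lifespans cover it. This is a simplified illustration of the alignment concept (and of why its cost grows quickly with the number of messages), not the actual ICM/WICM implementation or its optimisations.

```python
# Simplified sketch of TimeWarp-style alignment: group messages against the
# elementary sub-intervals of a vertex state interval. Illustrative only; the
# real ICM/WICM operator and the LU/DS/WICM optimisations are more involved.

def warp(state_interval, messages):
    """state_interval: (start, end); messages: list of (start, end, payload).
    Returns [((sub_start, sub_end), [payloads overlapping it]), ...]."""
    s0, s1 = state_interval
    # Elementary boundaries: state endpoints plus message endpoints inside them.
    cuts = sorted({s0, s1} | {t for m in messages for t in m[:2] if s0 < t < s1})
    groups = []
    for lo, hi in zip(cuts, cuts[1:]):
        overlapping = [p for (ms, me, p) in messages if ms < hi and me > lo]
        if overlapping:
            groups.append(((lo, hi), overlapping))
    return groups

# A vertex state valid over [0, 10) receiving three messages with lifespans.
messages = [(0, 4, "a"), (2, 7, "b"), (6, 10, "c")]
for (lo, hi), payloads in warp((0, 10), messages):
    print(f"[{lo}, {hi}) <- {payloads}")
```

Each message's payload is replicated into every sub-interval it overlaps, which is the replication overhead the thesis's optimisations target.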