Dissertations / Theses on the topic 'Buffer storage (Computer science)'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 50 dissertations / theses for your research on the topic 'Buffer storage (Computer science).'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

McNamee, Dylan James. "Virtual memory alternatives for transaction buffer management in a single-level store /." Thesis, Connect to this title online; UW restricted, 1996. http://hdl.handle.net/1773/6961.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Connor, John. "The RIT IEEE-488 Buffer design /." Online version of thesis, 1992. http://hdl.handle.net/1850/11259.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Ghemawat, Sanjay. "The modified object buffer : a storage management technique for object-oriented databases." Thesis, Massachusetts Institute of Technology, 1995. http://hdl.handle.net/1721.1/37012.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1995.
Includes bibliographical references (p. 113-117).
by Sanjay Ghemawat.
Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
4

Romer, Theodore H. "Using virtual memory to improve cache and TLB performance /." Thesis, Connect to this title online; UW restricted, 1998. http://hdl.handle.net/1773/6913.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Dong, Xiaomin. "D-Buffer : a new hidden-line algorithm in image-space /." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape7/PQDD_0035/MQ47446.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Rydberg, Ray Robert. "Gasping for harmony communication between arbitrary clock domains with multiple voltage domains using a locally-clocked, linear dual-clock FIFO scheme /." Pullman, Wash. : Washington State University, 2009. http://www.dissertations.wsu.edu/Dissertations/Spring2009/r_rydberg_042209.pdf.

Full text
Abstract:
Thesis (Ph. D.)--Washington State University, May 2009.
Title from PDF title page (viewed on June 19, 2009). "School of Electrical Engineering and Computer Science." Includes bibliographical references (p. 117-125).
APA, Harvard, Vancouver, ISO, and other styles
7

Rydberg, Ray Robert. "A gasp of fresh air a high speed distributed FIFO scheme for managing interconnect parasitics /." Online access for everyone, 2005. http://www.dissertations.wsu.edu/Thesis/Spring2005/R%5FRydberg%5F050505.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Cheng, Kin On. "A multi-stage optical switch with output buffer using WDM for delay lines sharing /." View Abstract or Full-Text, 2003. http://library.ust.hk/cgi/db/thesis.pl?ELEC%202003%20CHENG.

Full text
Abstract:
Thesis (M. Phil.)--Hong Kong University of Science and Technology, 2003.
Includes bibliographical references (leaves 77-79). Also available in electronic version. Access restricted to campus users.
APA, Harvard, Vancouver, ISO, and other styles
9

Torabkhani, Nima. "Modeling and analysis of the performance of networks in finite-buffer regime." Diss., Georgia Institute of Technology, 2014. http://hdl.handle.net/1853/51810.

Full text
Abstract:
In networks, using large buffers tends to increase end-to-end packet delay and its deviations, conflicting with real-time applications such as online gaming, audio-video services, IPTV, and VoIP. Further, large buffers complicate the design of high-speed routers, leading to more power consumption and board space. According to Moore's law, switching speeds double every 18 months while memory access speeds double only every 10 years. Hence, as memory requirements increasingly become a limiting aspect of router design, studying networks in the finite-buffer regime seems necessary for network engineers. This work focuses on both practical and theoretical aspects of finite-buffer networks. In Chapters 1-7, we investigate the effects of finite buffer sizes on the throughput and packet delay in different networks. These performance measures are shown to be linked to the stationary distribution of an underlying irreducible Markov chain that exactly models the changes in the network. An iterative scheme is proposed to approximate the steady-state distribution of buffer occupancies by decoupling the exact chain into smaller chains. These approximate solutions are used to analytically characterize network throughput and packet delay, and are also applied to some network performance optimization problems. Further, using simulations, it is confirmed that the proposed framework yields accurate estimates of the throughput and delay performance measures and captures the vital trends and tradeoffs in these networks. In Chapters 8-10, we address the problem of modeling and analysis of the performance of finite-memory random linear network coding in erasure networks. When using random linear network coding, the content of buffers creates dependencies which cannot be captured directly using classical queueing-theoretic models. A careful derivation of the buffer occupancy states and their transition rules is presented, as well as decodability conditions when random linear network coding is performed on a stream of arriving packets.
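The link drawn above between buffer occupancy, throughput and delay can be illustrated with a far simpler model than the exact network chain studied in the thesis: the classical M/M/1/K queue, whose occupancy forms a birth-death Markov chain with a closed-form stationary distribution. The sketch below is a minimal illustration under that single-queue assumption, not the thesis's decoupled chains.

```python
# Illustrative sketch: steady-state analysis of a single finite-buffer
# M/M/1/K queue (a birth-death Markov chain over buffer occupancy),
# used here as a simplified stand-in for the exact network chain.

def mm1k_performance(arrival_rate, service_rate, buffer_size):
    rho = arrival_rate / service_rate
    # Unnormalized stationary probabilities pi_n ~ rho**n for n = 0..K.
    weights = [rho ** n for n in range(buffer_size + 1)]
    total = sum(weights)
    pi = [w / total for w in weights]

    blocking = pi[-1]                           # arrivals lost when the buffer is full
    throughput = arrival_rate * (1 - blocking)  # carried load
    mean_occupancy = sum(n * p for n, p in enumerate(pi))
    mean_delay = mean_occupancy / throughput    # Little's law
    return blocking, throughput, mean_delay

for K in (2, 8, 32):
    loss, thr, delay = mm1k_performance(arrival_rate=0.9, service_rate=1.0, buffer_size=K)
    print(f"K={K:2d}  loss={loss:.3f}  throughput={thr:.3f}  delay={delay:.2f}")
```

Running it reproduces the tradeoff the abstract describes: a larger buffer lowers loss and raises throughput, but the mean delay grows.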
APA, Harvard, Vancouver, ISO, and other styles
10

Ye, Lei. "Energy Management for Virtual Machines." Diss., The University of Arizona, 2013. http://hdl.handle.net/10150/283603.

Full text
Abstract:
Current computing infrastructures use virtualization to increase resource utilization by deploying multiple virtual machines on the same hardware. Virtualization is particularly attractive for data centers, cloud computing, and hosting services; in these environments computer systems are typically configured to have fast processors, large physical memory, and huge storage capable of supporting concurrent execution of virtual machines. Consequently, this high demand for resources translates directly into higher energy consumption and monetary costs. Managing the energy consumption of virtual machines is becoming increasingly critical. However, virtual machines make energy management more challenging because a layer of virtualization separates the hardware from the guest operating system executing inside a virtual machine. This dissertation addresses the challenge of designing energy-efficient storage, memory, and buffer cache for virtual machines by exploring innovative mechanisms as well as existing approaches. We analyze the architecture of the open-source virtual machine platform Xen and address energy management in each subsystem. For the storage system, we study the I/O behavior of virtual machine systems. We address the isolation between the virtual machine monitor and virtual machines, and increase the burstiness of disk accesses to improve energy efficiency. In addition, we propose transparent energy management of main memory for any type of guest operating system running inside virtual machines. Furthermore, we design a dedicated mechanism for the buffer cache based on the fact that data-intensive applications rely heavily on a large buffer cache that occupies a majority of physical memory. We also propose a novel hybrid mechanism that improves energy efficiency for any memory access. All the mechanisms achieve significant energy savings while lowering the impact on performance for virtual machines.
APA, Harvard, Vancouver, ISO, and other styles
11

Prasad, Ravi S. "An evolutionary approach to improve end-to-end performance in TCP/IP networks." Diss., Georgia Institute of Technology, 2008. http://hdl.handle.net/1853/22543.

Full text
Abstract:
Despite the persistent change and growth that characterizes the Internet, the Transmission Control Protocol (TCP) still dominates at the transport layer, carrying more than 90% of the global traffic. Despite its astonishing success, it has been observed that TCP can cause poor end-to-end performance, especially for large transfers and in network paths with high bandwidth-delay product. In this thesis, we focus on mechanisms that can address key problems in TCP performance, without any modification to the protocol itself. This evolutionary approach is important in practice, as the deployment of clean-slate transport protocols in the Internet has proved to be extremely difficult. Specifically, we identify a number of TCP-related problems that can cause poor end-to-end performance. These problems include poorly dimensioned socket buffer sizes at the end-hosts, suboptimal buffer sizing at routers and switches, and congestion-unresponsive TCP traffic aggregates. We propose solutions that can address these issues, without any modification to TCP.

In network paths with significant available bandwidth, increasing the TCP window until loss is observed can result in much lower throughput than the path's available bandwidth. We show that changes to TCP are not required to utilize all the available bandwidth, and propose the application-layer SOcket Buffer Auto-Sizing (SOBAS) mechanism to achieve this goal. SOBAS relies on run-time estimation of the round trip time (RTT) and receive rate, and limits its socket buffer size when the receive rate approaches the path's available bandwidth. In a congested network, SOBAS does not limit its socket buffer size. Our experimental results show that SOBAS improves TCP throughput in uncongested networks without hurting TCP performance in congested networks.

Improper router buffer sizing can also result in poor TCP throughput. Previous research in router buffer sizing focused on network performance metrics such as link utilization or loss rate. Instead, we focus on the impact of buffer sizing on end-to-end TCP performance. We find that the router buffer size that optimizes TCP throughput is largely determined by the link's output to input capacity ratio. If that ratio is larger than one, the loss rate drops exponentially with the buffer size and the optimal buffer size is close to zero. Otherwise, if the output to input capacity ratio is lower than one, the loss rate follows a power-law reduction with the buffer size and significant buffering is needed. The amount of buffering required in this case depends on whether most flows end in the slow-start phase or in the congestion avoidance phase.

TCP throughput also depends on whether the cross-traffic reduces its send rate upon congestion. We define this cross-traffic property as congestion responsiveness. Since the majority of Internet traffic uses TCP, which reduces its send rate upon congestion, an aggregate of many TCP flows is believed to be congestion responsive. Here, we show that the congestion responsiveness of aggregate traffic also depends on the flow arrival process. If the flow arrival process follows an open-loop model, then even if the traffic consists exclusively of TCP transfers, the aggregate traffic can still be unresponsive to congestion. On the other hand, TCP flows that arrive in the network in a closed-loop manner are always congestion responsive. We also propose a scheme to estimate the fraction of traffic that follows the closed-loop model on a given link, and give practical guidelines to increase that fraction with simple application-layer modifications.
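As a rough illustration of the SOBAS idea in the second paragraph, the sketch below clamps a receiver's socket buffer near the bandwidth-delay product once the measured receive rate approaches the estimated available bandwidth. The function name, the 90% saturation threshold, and the measurement inputs are illustrative assumptions, not the actual SOBAS code, which adjusts the buffer continuously at run time.

```python
import socket

def sobas_style_clamp(sock, rtt_s, recv_rate_bps, avail_bw_bps, saturation=0.9):
    """Cap the receive buffer near the bandwidth-delay product when the
    receive rate approaches the path's available bandwidth, so the TCP
    window stops growing and self-induced losses are avoided.
    Returns the applied buffer size in bytes, or None if no clamp was applied."""
    if recv_rate_bps < saturation * avail_bw_bps:
        return None        # congested or still ramping up: leave autotuning alone
    bdp_bytes = int(recv_rate_bps / 8 * rtt_s)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, bdp_bytes)
    return bdp_bytes
```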
APA, Harvard, Vancouver, ISO, and other styles
12

Uppal, Amit. "Increasing the efficiency of network interface card." Master's thesis, Mississippi State : Mississippi State University, 2007. http://library.msstate.edu/etd/show.asp?etd=etd-10282007-162402.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

Grigorescu, Eduard. "Reducing internet latency for thin-stream applications over reliable transport with active queue management." Thesis, University of Aberdeen, 2018. http://digitool.abdn.ac.uk:80/webclient/DeliveryManager?pid=236098.

Full text
Abstract:
An increasing number of network applications use reliable transport protocols. Applications with constant data transmission recover from loss without major performance disruption; however, applications that send data sporadically, in small packets (also called thin streams), frequently experience high latencies due to 'bufferbloat', which reduce application performance. Active Queue Management (AQM) mechanisms were proposed to dynamically manage router queues by dropping packets early, hence reducing latency. While their deployment in the Internet remains an open issue, the main focus of this work is a proper investigation into how their mechanisms affect latency, and research questions have been devised to investigate the impact of AQM on latency. A range of AQM mechanisms has been evaluated, exploring their performance for latency-sensitive network applications. This includes newer single-queue AQM mechanisms such as Controlled Delay (CoDel) and Proportional Integral controller Enhanced (PIE), as well as Adaptive RED (ARED). The evaluation has shown great improvements in queuing latency when AQM is used over a range of network scenarios. Scheduling AQM algorithms such as FlowQueue-CoDel (FQ-CoDel) isolate traffic and minimise the impact of bufferbloat on flows. The core components of FQ-CoDel, still widely misunderstood at the time of its inception, are explained in depth by this study and their contribution to reducing latency is evaluated. The results show significant reductions in queuing latency for thin streams using FQ-CoDel. When TCP is used for thin streams, high application latencies can arise when there are retransmissions, for example after packets are dropped by an AQM mechanism. This delay is a result of TCP's loss-based congestion control, which reduces the sender's transmission rate after packet loss. ECN, a marking-based sender-side improvement to TCP, reduces application-layer latency without disrupting overall network performance. The thesis evaluated the benefit of using ECN in a wide range of experiments. The findings show that FQ-CoDel with ECN provides a substantial reduction of application latency compared to a drop-based AQM. Moreover, this study recommends combining FQ-CoDel with other mechanisms to reduce application latency. Mechanisms such as ABE have been shown to increase aggregate throughput and reduce application latency for thin-stream applications.
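To make the single-queue AQM behaviour discussed above concrete, here is a heavily simplified sketch of a CoDel-style drop decision: act on each packet's queuing delay (sojourn time) rather than on queue length, and once the delay has stayed above a target for a full interval, drop at a rate that grows as the interval divided by the square root of the drop count. Real CoDel (RFC 8289) keeps more state than this; the structure and constants here are illustrative only.

```python
import math

class SimplifiedCoDel:
    """Heavily simplified sketch of the CoDel control law (not RFC-complete)."""

    def __init__(self, target=0.005, interval=0.100):
        self.target = target        # acceptable standing queue delay (s)
        self.interval = interval    # time the delay must persist before dropping (s)
        self.first_above = None     # when the sojourn time first exceeded the target
        self.dropping = False
        self.drop_count = 0
        self.next_drop = 0.0

    def on_dequeue(self, now, sojourn_time):
        """Return True if this packet should be dropped (or ECN-marked)."""
        if sojourn_time < self.target:
            self.first_above = None
            self.dropping = False
            return False
        if self.first_above is None:
            self.first_above = now
        if not self.dropping and now - self.first_above >= self.interval:
            self.dropping = True
            self.drop_count = 1
            self.next_drop = now + self.interval / math.sqrt(self.drop_count)
            return True
        if self.dropping and now >= self.next_drop:
            self.drop_count += 1
            self.next_drop = now + self.interval / math.sqrt(self.drop_count)
            return True
        return False
```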
APA, Harvard, Vancouver, ISO, and other styles
14

Thuppal, Rajagopalan. "On pipelined multistage interconnection networks." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk2/tape17/PQDD_0002/MQ36185.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Yeo, Yong-Kee. "Dynamically Reconfigurable Optical Buffer and Multicast-Enabled Switch Fabric for Optical Packet Switching." Diss., Georgia Institute of Technology, 2006. http://hdl.handle.net/1853/14615.

Full text
Abstract:
Optical packet switching (OPS) is one of the more promising solutions for meeting the diverse needs of broadband networking applications of the future. By virtue of its small data traffic granularity as well as its nanosecond switching speed, OPS can be used to provide connection-oriented or connectionless services for different groups of users with very different networking requirements. The optical buffer and the switch fabric are two of the most important components in an OPS router. In this research, novel designs for the optical buffer and switch fabric are proposed and experimentally demonstrated. In particular, an optical buffer based on a folded-path delay-line tree architecture is discussed. This buffer is the most compact non-recirculating optical delay-line buffer to date, and it uses an array of high-speed ON-OFF optical reflectors to dynamically reconfigure its delay within several nanoseconds. A major part of this research is devoted to the design and performance optimization of these high-speed reflectors. Simulations and measurements are used to compare different reflector designs as well as to determine their optimal operating conditions. Another important component in the OPS router is the switch fabric, which is used to perform space switching for the optical packets. Optical switch fabrics are used to overcome the limitations imposed by conventional electronic switch fabrics: high power consumption and dependency on the modulation format and bit-rate of the signals. Currently, only those fabrics that are based on the broadcast-and-select architecture can provide truly non-blocking multicast services to all input ports. However, a major drawback of these fabrics is that they are implemented using a large number of optical gates based on semiconductor optical amplifiers (SOA). This results in a large component count and high energy consumption. In this research, a new multicast-capable switch fabric which does not require any SOA gates is proposed. This fabric relies on a passive all-optical gate based on the four-wave mixing (FWM) wavelength conversion process in a highly nonlinear fiber. By using this new switch architecture, a significant reduction in component count can be expected.
APA, Harvard, Vancouver, ISO, and other styles
16

Zhang, Peng Frank. "Jitter buffer management algorithms for voice communication." Thesis, University of Ottawa (Canada), 2002. http://hdl.handle.net/10393/6345.

Full text
Abstract:
This thesis studies jitter management algorithms for real-time applications. These algorithms are executed at a destination node, and assume no knowledge of the source characteristics or the impact of the network path characteristics. The work mainly focuses on prediction algorithms that make use of information about packets received in the past, and adjust buffer parameters in order to maintain a certain level of quality of service (QoS). Two algorithms are proposed: first, to apply the least mean square method to predict future packet interarrival times so that the buffer parameters can be changed dynamically in order to adapt to bursty network traffic; second, to apply a fuzzy logic method to buffer management to keep the gap probability within an acceptable level while keeping the latency as low as possible. These two new algorithms have been evaluated using OPNET simulation and compared with other algorithms such as the I-policy and the E-policy. We studied and discussed the tradeoff among gap probability, average display latency, and packet loss probability. Towards the end, we also make some design recommendations.
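The first of the two proposed algorithms, least-mean-square prediction of packet interarrival times, can be sketched as follows. The filter length, step size, and the way the prediction would feed the playout buffer are illustrative assumptions rather than the parameter choices made in the thesis.

```python
class LMSInterarrivalPredictor:
    """Sketch of an LMS filter that predicts the next packet interarrival
    time from the last few observations, so the jitter (playout) buffer
    can be resized adaptively."""

    def __init__(self, taps=4, step=0.01):
        self.weights = [1.0 / taps] * taps
        self.history = [0.0] * taps     # most recent interarrival first
        self.step = step

    def predict(self):
        return sum(w * x for w, x in zip(self.weights, self.history))

    def update(self, observed):
        error = observed - self.predict()
        # LMS weight update: w <- w + mu * error * x
        self.weights = [w + self.step * error * x
                        for w, x in zip(self.weights, self.history)]
        self.history = [observed] + self.history[:-1]
        return error
```

The playout delay would then be set from the predicted interarrival time plus a safety margin derived from recent prediction errors, trading gap probability against display latency as the abstract describes.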
APA, Harvard, Vancouver, ISO, and other styles
17

Schor, James E. (James Edward). "Efficient algorithms for buffer allocation." Thesis, Massachusetts Institute of Technology, 1995. http://hdl.handle.net/1721.1/11458.

Full text
Abstract:
Thesis (M.S.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1995.
Includes bibliographical references (leaves 127-131).
by James E. Schor.
M.S.
APA, Harvard, Vancouver, ISO, and other styles
18

Nagpal, Radhika. "Store Buffers : implementing single cycle store instructions in write-through, write-back and set associative caches." Thesis, Massachusetts Institute of Technology, 1994. http://hdl.handle.net/1721.1/36678.

Full text
Abstract:
Thesis (B.S. and M.S.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1994.
Includes bibliographical references (p. 87).
This thesis proposes a new mechanism, called Store Buffers, for implementing single cycle store instructions in a pipelined processor. Single cycle store instructions are difficult to implement because in most cases the tag check must be performed before the data can be written into the data cache. Store buffers allow a store instruction to read the cache tag as it passes through the pipe while keeping the store instruction data buffered in a backup register until the data cache is free. This strategy guarantees single cycle store execution without increasing the hit access time or degrading the performance of the data cache for simple direct-mapped caches, as well as for more complex set associative and write-back caches. As larger caches are incorporated on-chip, the speed of store instructions becomes an increasingly important part of the overall performance. The first part of the thesis describes the design and implementation of store buffers in write-through, write-back, direct-mapped and set associative caches. The second part describes the implementation and simulation of store buffers in a 6-stage pipeline with a direct-mapped write-through pipelined cache. The performance of this method is compared to other cache write techniques. Preliminary results show that store buffers perform better than other store strategies under high I/O latencies and cache thrashing. With as few as three buffers, they significantly reduce the number of cycles per instruction.
by Radhika Nagpal.
B.S.and M.S.
APA, Harvard, Vancouver, ISO, and other styles
19

Dong, Chen. "Buffer-aided multihop wireless communications." Thesis, University of Southampton, 2014. https://eprints.soton.ac.uk/364737/.

Full text
Abstract:
In this thesis, we propose a suite of buffer-aided transmission schemes designed for a multihop link or for a three-node network by exploiting the characteristics of buffer-aided transmissions. Our objective is to improve the end-to-end BER, outage probability, throughput and energy dissipation. Specifically, we first proposed and studied a buffer-aided multihop link (MHL), where all the relay nodes (RNs) are assumed to have buffers for temporarily storing their received packets. Hence, the RNs are operated under the so-called store-and-forward (SF) relaying scheme. As a benefit of storing packets at the RNs, during each time-slot (TS) the best hop, namely the one having the highest signal-to-noise ratio (SNR), can be activated from the set of those hops that have packets awaiting transmission in the buffer. A packet is then transmitted over the best hop. This hop-selection procedure is reminiscent of selection combining (SC) diversity and is referred to here as multi-hop diversity (MHD), when assuming that each hop experiences both propagation pathloss and independent identically distributed (i.i.d.) flat Rayleigh fading. In order to make the channel activation practical, a MAC-layer implementation is proposed and several closed-form formulas are derived for its characterization. We then studied the buffer-aided multihop link when assuming that each hop experiences both propagation pathloss and independent non-identically distributed (i.n.i.d.) flat Nakagami-m fading. Both BPSK and M-ary quadrature amplitude modulation (MQAM) are employed. During each TS, the MHD scheme activates the specific hop whose signal-to-noise ratio (SNR) cumulative distribution function (CDF) gives the highest ordinate value amongst all the available hops. The next packet is then transmitted over the selected hop. This CDF-aware MHD scheme is suitable for operation in scenarios where the different hops may have different lengths, hence resulting in different average SNRs, and/or experience different types of fading. This MHD scheme is also capable of achieving the maximum attainable diversity gain provided by the independent fading experienced by the different hops. The benefits of adaptive modulation are then exploited, where the number of bits transmitted in each TS is affected both by the channel quality and by the buffer fullness. During each TS, the criterion used for activating a specific hop is that of transmitting the highest number of bits (packets). When more than one hop is capable of transmitting the same number of bits, the particular hop having the highest channel quality (reliability) is activated. Hence we refer to this regime as the Maximum Throughput Adaptive Rate Transmission (MTART) scheme. Additionally, a new MAC-layer protocol is proposed for implementing our MTART management. Finally, we propose and study a routing scheme, namely the Buffer-aided Opportunistic Routing (BOR) scheme, which combines the benefits of both opportunistic routing and MHD transmission. It is conceived for the transmission of information in a Buffer-aided Three-node Network (B3NN) composed of a Source Node (SN), a buffer-aided Relay Node (RN) and a Destination Node (DN). When applying opportunistic routing, each packet is transmitted from the SN to the DN either directly or indirectly via the RN based on the instantaneous channel quality. When applying MHD transmission, the RN is capable of temporarily storing the received packets, which facilitates transmission over three links, namely the SR, RD and SD links.
In this network, the three channels define a 3D channel probability space (CPS), which is divided into four regions representing the activation regions of the three channels and an outage region. The instantaneous channel quality values then map to a specific point in this 3D channel space. The BOR scheme relies on the position of this point to select the most appropriate channel in the 3D CPS for its transmission. In comparison to the benchmark schemes considered in the literature, the BER, outage probability (OP), throughput and/or energy dissipation of our proposed systems are significantly improved.
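A minimal sketch of the hop-activation rule underlying the MHD scheme described above; the data layout is an illustrative assumption, not the proposed MAC-layer protocol.

```python
def select_hop(hops):
    """Among hops whose transmit buffer holds a packet and whose receive
    buffer has room, activate the one with the highest instantaneous SNR.
    Each hop is assumed to be a dict like
    {'snr': 12.3, 'tx_queue': 3, 'rx_space': 5} (illustrative layout)."""
    eligible = [h for h in hops if h['tx_queue'] > 0 and h['rx_space'] > 0]
    if not eligible:
        return None          # outage: no hop can be activated this time-slot
    return max(eligible, key=lambda h: h['snr'])
```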
APA, Harvard, Vancouver, ISO, and other styles
20

Chen, Minjie Ph D. Massachusetts Institute of Technology. "Stacked switched capacitor energy buffer architecture." Thesis, Massachusetts Institute of Technology, 2012. http://hdl.handle.net/1721.1/73699.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2012.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (p. 133-134).
Electrolytic capacitors are often used for energy buffering applications, including buffering between single-phase ac and dc. While these capacitors have high energy density compared to film and ceramic capacitors, their life is limited and their reliability is a major concern. This thesis presents a series of stacked switched capacitor (SSC) energy buffer architectures which overcome this limitation while achieving comparable effective energy density without electrolytic capacitors. The architectural approach is introduced along with design and control techniques which enable this energy buffer to interface with other circuits. A prototype SSC energy buffer using film capacitors, designed for a 320 V dc bus and able to support a 135 W load, has been built and tested with a power factor correction circuit. This thesis starts with a detailed comparative study of electrolytic, film, and ceramic capacitors, then introduces the principles of SSC energy buffer architectures, and finally presents the design and design methodology of a prototype circuit. The experimental results successfully demonstrate the effectiveness of the approach.
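The buffering requirement that motivates the architecture can be made concrete with standard single-phase buffering arithmetic; apart from the 135 W load and 320 V bus quoted in the abstract, the numbers below are illustrative assumptions, not figures from the thesis. A dc load of power P fed from single-phase ac must absorb an energy ripple of roughly P divided by the line angular frequency each line cycle, and a capacitor swung between V_max and V_min supplies 0.5 * C * (V_max^2 - V_min^2).

```python
import math

def required_capacitance(load_power_w, line_freq_hz, v_max, v_min):
    """Generic twice-line-frequency buffering arithmetic (not thesis data)."""
    energy_ripple = load_power_w / (2 * math.pi * line_freq_hz)  # joules per cycle swing
    return 2 * energy_ripple / (v_max ** 2 - v_min ** 2)

# 135 W load on a 320 V bus, with an assumed +/-10 % allowed voltage swing.
c = required_capacitance(load_power_w=135, line_freq_hz=60, v_max=352, v_min=288)
print(f"required buffer capacitance ~ {c * 1e6:.0f} uF")
```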
by Minjie Chen.
S.M.
APA, Harvard, Vancouver, ISO, and other styles
21

Liu, Yang. "Simulating depth of field using per-pixel linked list buffer." Thesis, Purdue University, 2015. http://pqdtopen.proquest.com/#viewpdf?dispub=1598036.

Full text
Abstract:

In this thesis, I present a method for simulating three characteristics of depth-of-field images: partial occlusion, bokeh, and blur. Retrieving color from occluded surfaces is achieved by constructing a per-pixel linked list buffer, which only requires two render passes. Additionally, the per-pixel linked list buffer eliminates the memory overhead of empty pixels in depth layers. The bokeh and blur effects are accomplished by image-space point splatting (Lee 2008). I demonstrate how point splatting can be used to account for the effect of aperture shape and intensity distribution on bokeh. Spherical aberration and chromatic aberration can be approximated using a custom pre-built sprite. Together as a package, this method is capable of matching the realism of multi-perspective methods and layered methods.
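A CPU-side sketch of the per-pixel linked list buffer named above, written in plain Python rather than as the two GPU render passes the thesis uses: one head pointer per pixel plus a shared node pool, so fragments occluded by the front surface are retained for later compositing instead of being discarded by the depth test.

```python
class PerPixelLinkedList:
    """Illustrative CPU model of a per-pixel linked list fragment buffer."""

    def __init__(self, width, height):
        self.head = [[-1] * width for _ in range(height)]   # -1 means empty list
        self.nodes = []                                     # shared node pool

    def insert(self, x, y, depth, color):
        """Prepend a fragment to pixel (x, y), as an atomic head-pointer
        exchange would on the GPU."""
        self.nodes.append({'depth': depth, 'color': color, 'next': self.head[y][x]})
        self.head[y][x] = len(self.nodes) - 1

    def fragments(self, x, y):
        """Return the fragments of one pixel sorted front to back."""
        out, i = [], self.head[y][x]
        while i != -1:
            out.append(self.nodes[i])
            i = self.nodes[i]['next']
        return sorted(out, key=lambda f: f['depth'])
```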

APA, Harvard, Vancouver, ISO, and other styles
22

Kim, Byungsub 1978. "An efficient buffer management methods for Cachet." Thesis, Massachusetts Institute of Technology, 2004. http://hdl.handle.net/1721.1/28720.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2004.
Includes bibliographical references (leaves 83-85).
We propose an efficient buffer management method for Cachet [7], called BCachet. Cachet is an adaptive cache coherence protocol based on a mechanism-oriented memory model called Commit-Reconcile & Fences (CRF) [1]. Although Cachet is theoretically proved to be sound and live, a direct implementation of Cachet is not feasible because it requires excessively expensive hardware. We greatly reduced the hardware cost for buffer management in BCachet without changing the memory model or the adaptive nature of Cachet. The hardware cost for the incoming message buffer of the memory site is greatly reduced from P × N FIFOs to two FIFOs in BCachet, where P is the number of sites and N is the number of address lines in a memory unit. We also reduced the minimum size of the suspended message buffer per memory site from (log₂ P + V) × P × rq_max to log₂ P, where V is the size of a memory block in bits and rq_max is the maximum number of request messages per cache. BCachet has three architectural merits. First, BCachet separates the buffer management units for deadlock avoidance from those for livelock avoidance, so that a designer has an option in the liveness level and the corresponding hardware cost: (1) allow a livelock with an extremely low probability and save the hardware cost of fairness control, or (2) do not allow a livelock at all and accept the hardware cost of fairness control. Second, a designer can easily parameterize the sizes of the buffer units to explore the cost-performance curves without affecting soundness and liveness. Because the usual sizes of buffer management units are much larger than the minimum sizes that guarantee the liveness and soundness of the system, a designer can easily find the optimum trade-off point for those units by changing size parameters and running simulations. Third, since BCachet is almost linear under the assumption of a reasonable number of sites, BCachet is very scalable and can therefore be used for very large-scale multiprocessor systems.
by Byungsub Kim.
S.M.
APA, Harvard, Vancouver, ISO, and other styles
23

Gutierrez, Manuel S. M. Massachusetts Institute of Technology. "An energy buffer for constant power loads." Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/111914.

Full text
Abstract:
Thesis: S.M., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2017.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 111-113).
Constant power loads (CPLs) are a class of loads steadily increasing in use. They are present whenever a load is regulated to maintain constant output power, such as with LED drivers in high quality lighting that is impervious to input fluctuations. Because CPLs exhibit a negative incremental input impedance, they pose stability concerns in DC and AC systems. This thesis presents a power converter for a constant power LED bulb that presents a favorable input impedance to the grid. The use of an energy buffer allows the converter to draw variable power in order to resemble a resistive load, while the output consumes constant power. A switched-mode power supply consisting of a cascaded boost and buck converter accomplishes this by storing energy in the boost stage output capacitor. Experimental results demonstrate that the converter exhibits a resistive input impedance at frequencies over 0.5 Hz while maintaining constant power to the LED load.
by Manuel Gutierrez.
S.M.
APA, Harvard, Vancouver, ISO, and other styles
24

Nelson, Jude Christopher. "Wide-Area Software-Defined Storage." Thesis, Princeton University, 2018. http://pqdtopen.proquest.com/#viewpdf?dispub=10824272.

Full text
Abstract:

The proliferation of commodity cloud services helps developers build wide-area "system-of-systems" applications by harnessing cloud storage, CDNs, and public datasets as reusable building blocks. But to do so, developers must contend with two long-term challenges. First, whenever developers change storage providers, they must work to preserve the application's expected storage semantics, i.e. the rules governing how the application expects the storage provider to handle its reads and writes. Today, changing storage providers is costly, because developers need to patch the application to make it compatible with the new provider's data consistency model, access controls, replica placement strategies, and so on. At the same time, users have certain expectations about how their data will be used, which the application must meet. For example, depending on the application, users may expect that their data will be kept private from other users, that their data will be exportable to other applications, that accesses to their data will be logged in an auditable way, and so on. In the limit, each user's expectations represent an implicit policy constraining how their data can be stored. Honoring these policies is difficult for developers who rely on third-party storage providers because the storage provider is often unaware of them. This thesis addresses these challenges with a new wide-area storage paradigm, called "software-defined storage" (SDS), that runs in-between applications and cloud services. SDS-enabled applications do not host data, but instead let users bring their preferred cloud services to the application. By taking a user-centric approach to hosting data, users are empowered to programmatically specify their policies independent of their applications and select services that will honor them. To support this approach and to tolerate service provider changes, SDS empowers developers to programmatically specify their application's storage semantics independent of storage providers. This thesis presents the design principles for SDS, and validates their real-world applicability with two SDS implementations and several non-trivial applications built on top of them. Most of these applications are used in production today. This thesis presents microbenchmarks of the SDS implementations and uses real-world experiences to show how to make the most of SDS.

APA, Harvard, Vancouver, ISO, and other styles
25

Chung, Chanwoo S. M. Massachusetts Institute of Technology. "NOHOST : a new storage architecture for distributed storage systems." Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/107295.

Full text
Abstract:
Thesis: S.M., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2016.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 53-55).
This thesis introduces a new NAND flash-based storage architecture, NOHOST, for distributed storage systems. A conventional flash-based storage system is composed of a number of high-performance x86 Xeon servers, and each server hosts 10 to 30 solid state drives (SSDs) that use NAND flash memory. This setup not only consumes considerable power due to the nature of Xeon processors, but it also occupies a huge physical space compared to small flash drives. By eliminating costly host servers, the suggested architecture uses NOHOST nodes instead, each of which is a low-power embedded system; together these nodes form a distributed key-value store cluster. This is done by refactoring the deep I/O layers in the current design so that the refactored layers are lightweight enough to run seamlessly in resource-constrained environments. The NOHOST node is a full-fledged storage node, composed of a distributed service frontend, key-value store engine, device driver, hardware flash translation layer, flash controller and NAND flash chips. To prove the concept of this idea, a prototype of two NOHOST nodes has been implemented on Xilinx Zynq ZC706 boards and custom flash boards in this work. NOHOST is expected to use half the power and one-third the physical space of a Xeon-based system. NOHOST is also expected to support a throughput of 2.8 GB/s, which is comparable to contemporary storage architectures.
by Chanwoo Chung.
S.M.
APA, Harvard, Vancouver, ISO, and other styles
26

Ku, Fei Yen. "Towards Automatic Initial Buffer Configuration." Thesis, University of Waterloo, 2003. http://hdl.handle.net/10012/1078.

Full text
Abstract:
Buffer pools are blocks of memory used in database systems to retain frequently referenced pages. Configuring the buffer pools is a difficult and manual task that involves determining the amount of memory to devote to the buffer pools, the number of buffer pools to use, their sizes, and the database objects assigned to each buffer pool. A good buffer configuration improves query response times and system throughput by reducing the number of disk accesses. Determining a good buffer configuration requires knowledge of the database workload. Empirical studies have shown that optimizing the initial buffer configuration (determined at database design time) can improve system throughput. A good initial configuration can also provide a faster convergence towards a favorable dynamic buffer allocation. Previous studies have not considered automating the buffer pool configuration process. This thesis presents two techniques that facilitate the initial buffer configuration task. First, we develop an analytic model of the GCLOCK buffer replacement policy that can be used to evaluate the effectiveness of a particular buffer configuration for a given workload. Second, to obtain the necessary model parameters, we propose a workload characterization scheme that extracts workload parameters, describing the query reference patterns, from the query access plans. In addition, we extend an existing multifractal model and present a multifractal skew model to represent query access skew. Our buffer model has been validated against measurements of the buffer manager of a commercial database system. The model has also been compared to an alternative GCLOCK buffer model. Our results show that our proposed model closely predicts the actual physical read rates and recognizes favourable buffer configurations. This work provides a foundation for the development of an automated buffer configuration tool.
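The GCLOCK policy that the analytic model targets can be summarised in a few lines. The sketch below is a generic single-pool GCLOCK with a uniform recharge weight, an illustrative simplification rather than the commercial buffer manager modelled in the thesis.

```python
class GClockBuffer:
    """Minimal sketch of GCLOCK page replacement for a single buffer pool."""

    def __init__(self, n_frames, weight=1):
        self.frames = [None] * n_frames   # page ids held by each frame
        self.counts = [0] * n_frames      # per-frame reference counters
        self.pos = {}                     # page id -> frame index
        self.hand = 0
        self.weight = weight

    def access(self, page):
        """Return True on a buffer hit, False on a physical read."""
        if page in self.pos:
            self.counts[self.pos[page]] = self.weight   # hit: recharge the counter
            return True
        # Miss: sweep the clock hand, decrementing counters, until a frame
        # with a zero counter is found; that frame receives the new page.
        while True:
            if self.counts[self.hand] == 0:
                victim = self.frames[self.hand]
                if victim is not None:
                    del self.pos[victim]
                self.frames[self.hand] = page
                self.counts[self.hand] = self.weight
                self.pos[page] = self.hand
                self.hand = (self.hand + 1) % len(self.frames)
                return False
            self.counts[self.hand] -= 1
            self.hand = (self.hand + 1) % len(self.frames)
```

Feeding such a simulator (or the analytic model developed in the thesis) with the reference pattern extracted from query plans yields the physical read rate of a candidate buffer configuration.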
APA, Harvard, Vancouver, ISO, and other styles
27

Viking, Pontus. "Comparison of Dynamic Buffer Overflow Protection Tools." Thesis, Linköping University, Department of Computer and Information Science, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-6008.

Full text
Abstract:

As intrusion attacks on systems become more and more complex, the tools trying to stop these attacks must keep pace. This thesis develops a testbed and uses it to test and evaluate three freely available protection tools for the GNU/Linux platform, to see how they fare against attacks.

APA, Harvard, Vancouver, ISO, and other styles
28

Miao, Yi. "M-Buffer, a practice of object-oriented computer graphics with UML." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape3/PQDD_0017/MQ55525.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Tarlescu, Maria-Dana. "The Elastic History Buffer, a multi-hybrid branch prediction scheme using static classification." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape7/PQDD_0025/MQ50893.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Zhivich, Michael A. "Detecting buffer overflows using testcase synthesis and code instrumentation." Thesis, Massachusetts Institute of Technology, 2005. http://hdl.handle.net/1721.1/32112.

Full text
Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2005.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Includes bibliographical references (p. 143-146).
The research presented in this thesis aims to improve existing approaches to dynamic buffer overflow detection by developing a system that utilizes code instrumentation and adaptive test case synthesis to find buffer overflows and corresponding failure-inducing inputs automatically. An evaluation of seven modern dynamic buffer overflow detection tools determined that C Range Error Detector (CRED) is capable of providing fine-grained buffer access information necessary for the creation of this system. CRED was also selected because of its ability to provide comprehensive error reports and compile complex programs with reasonable performance overhead. CRED was extended to provide appropriate code instrumentation for the adaptive testing system, which also includes a test case synthesizer that uses data perturbation techniques on legal inputs to produce new test cases, and an analytical module that evaluates the effectiveness of these test cases. Using information provided by code instrumentation in further test case generation creates a feedback loop that enables a focused exploration of the input space and faster buffer overflow detection. Applying the adaptive testing system to jabberd, a Jabber Instant Messaging server, demonstrates its effectiveness in finding buffer overflows and its advantages over existing dynamic testing systems.
Adaptive test case synthesis using CRED to provide buffer access information for feedback discovered 6 buffer overflows in jabberd using only 53 messages, while dynamic testing using random messages generated from a protocol description found only 4 overflows after sending 10,000 messages.
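The feedback loop described in the abstract, synthesise inputs, run the instrumented program, and steer further mutation using the reported buffer accesses, can be sketched roughly as follows. The interface of run_instrumented and the fill-ratio threshold are illustrative assumptions, not CRED's actual report format.

```python
import random

def adaptive_fuzz(seed_inputs, run_instrumented, mutate, rounds=1000):
    """Sketch of feedback-driven test case synthesis.  `run_instrumented`
    is assumed to return (crashed, max_fill_ratio) for one input, where
    max_fill_ratio is how close any buffer access came to its bound."""
    corpus = list(seed_inputs)
    overflows = []
    for _ in range(rounds):
        candidate = mutate(random.choice(corpus))
        crashed, fill_ratio = run_instrumented(candidate)
        if crashed:
            overflows.append(candidate)    # failure-inducing input found
        elif fill_ratio > 0.9:
            corpus.append(candidate)       # promising: keep exploring near a bound
    return overflows
```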
by Michael A. Zhivich.
M.Eng.
APA, Harvard, Vancouver, ISO, and other styles
31

Boo, Hyun Ho. "Virtual ground reference buffer technique in switched-capacitor circuits." Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/99812.

Full text
Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2015.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 121-125).
The performance of switched-capacitor circuits depends highly on the op-amp specifications. In conventional designs, trade-offs in speed, noise, and settling accuracy make it difficult to implement power-efficient switched-capacitor circuits. The problem originates from the inverse relationship between the feedback factor and the signal gain. This thesis proposes the virtual ground reference buffer technique that enhances performance by improving the feedback factor of the op-amp without affecting signal gain. A key concept in the technique is the bootstrapping action of level-shifting buffers. It exploits op-amp-based circuits whose principles are very well understood and the design techniques are mature. The solution ultimately relaxes the required op-amp requirements including unity-gain bandwidth, noise, offset voltage and open-loop gain that would otherwise result in complex design and high power consumption. The concept is demonstrated in a 12-b 250MS/s pipelined ADC.
by Hyun Ho Boo.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
32

Koletka, Robert. "An architecture for secure searchable cloud storage." Master's thesis, University of Cape Town, 2012. http://hdl.handle.net/11427/12099.

Full text
Abstract:
Includes abstract.
Includes bibliographical references.
Cloud Computing is a relatively new and appealing concept; however, users may not fully trust Cloud Providers with their data and can be reluctant to store their files on Cloud Storage Services. The problem is that Cloud Providers allow users to store their information on the provider's infrastructure in compliance with their terms and conditions; however, all security is handled by the provider and the details of how this is done are generally not disclosed. This thesis describes a solution that allows users to securely store data on a public cloud, while also providing a mechanism that allows searching through their encrypted data. Users are able to submit encrypted keyword queries and, through a symmetric searchable encryption scheme, the system retrieves from the cloud storage medium a list of files containing those keywords.
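A toy version of the symmetric searchable-encryption idea sketched in the abstract: the client derives deterministic keyword tokens with a secret key, and the provider indexes file identifiers under those tokens, so keyword queries can be answered without revealing the keywords. This HMAC-over-an-inverted-index construction is an illustrative stand-in, not the scheme implemented in the thesis, and the file contents themselves would be encrypted separately.

```python
import hashlib
import hmac
from collections import defaultdict

def keyword_token(secret_key: bytes, keyword: str) -> bytes:
    # Deterministic, key-dependent token; the server never sees the keyword.
    return hmac.new(secret_key, keyword.lower().encode(), hashlib.sha256).digest()

class EncryptedIndex:
    """Server-side inverted index keyed by opaque keyword tokens."""

    def __init__(self):
        self.index = defaultdict(set)          # token -> set of file ids

    def add(self, token: bytes, file_id: str):
        self.index[token].add(file_id)

    def search(self, token: bytes):
        return self.index.get(token, set())

key = b"client-side secret, never sent to the cloud"
server = EncryptedIndex()
server.add(keyword_token(key, "buffer"), "report.pdf")
print(server.search(keyword_token(key, "buffer")))    # {'report.pdf'}
```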
APA, Harvard, Vancouver, ISO, and other styles
33

Perumal, Sameshan. "Simulating raid storage systems for performance analysis." Master's thesis, University of Cape Town, 2007. http://hdl.handle.net/11427/6435.

Full text
Abstract:
Includes bibliographical references (p. 126-131).
Redundant Array of Independent Disks (RAID) provides an inexpensive, fault-tolerant storage solution using commodity hard-drives. RAID storage systems have recently surged in popularity amongst enterprise clients, as they provide economical, scalable, high-performance solutions for their storage requirements. The performance of RAID systems is negatively affected by the overhead required to manage and access multiple drives, and multiple disk failures can result in data loss. As RAID has developed, various improvements have been devised by both academia and business to address these shortcomings. These improvements have suggested improved architectures to increase performance and new coding techniques to protect against data loss in the case of drive failure. Evaluating the effect on performance of these improvements is greatly simplified by the use of discrete-event, software simulations. The RAID Operations Simulator for Testing Implementations (RÖSTI) was developed to support such simulation experiments.
APA, Harvard, Vancouver, ISO, and other styles
34

Strauss, Jacob A. (Jacob Alo) 1979. "Device-transparent personal storage." Thesis, Massachusetts Institute of Technology, 2010. http://hdl.handle.net/1721.1/62459.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2010.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 83-87).
Users increasingly store data collections such as digital photographs on multiple personal devices, each of which typically presents the user with a storage management interface isolated from the contents of all other devices. The result is that collections easily become disorganized and drift out of sync. This thesis presents Eyo, a novel personal storage system that provides device transparency: a user can think in terms of "file X", rather than "file X on device Y", and will see the same set of files on all personal devices. Eyo allows a user to view and manage the entire collection of objects from any of their devices, even from disconnected devices and devices with too little storage to hold all the object content. Eyo separates metadata (application-specific attributes of objects) from the content of objects, allowing even storage-limited devices to store all metadata and thus provide device transparency. Fully replicated metadata allows any set of Eyo devices to efficiently synchronize updates. Applications can specify flexible placement rules to guide Eyo's partial replication of object contents across devices. Eyo's application interface provides first-class access to object version history. If multiple disconnected devices update an object concurrently, Eyo preserves each resulting divergent version of that object. Applications can then examine the history and either coalesce the conflicting versions without user direction, or incorporate these versions naturally into their existing user interfaces. Experiments using Eyo for storage in several example applications-media players, a photo editor, podcast manager, and an email interface-show that device transparency can be had with minor application changes, and within the storage and bandwidth capabilities of typical portable devices.
by Jacob Alo Strauss.
Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
35

Falk, Matthew D. "Cryptographic cloud storage framework." Thesis, Massachusetts Institute of Technology, 2013. http://hdl.handle.net/1721.1/85417.

Full text
Abstract:
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2013.
Cataloged from PDF version of thesis.
Includes bibliographical references (page 59).
The cloud presents cheap and convenient ways to create shared remote repositories. One concern when creating systems that provide security is whether the system will be able to remain secure when new attacks are developed. As tools and techniques for breaking security systems advance, new ideas are required to provide the security guarantees that may have been exploited. This project presents a framework which can handle the ever-growing need for new security defenses. This thesis describes the Key Derivation Module that I have constructed for use in our system, including many new Key Derivation Functions.
by Matthew D. Falk.
M. Eng.
APA, Harvard, Vancouver, ISO, and other styles
36

Chen, An. "Buffer-efficient RTA algorithms in optical TDM networks /." View abstract or full-text, 2007. http://library.ust.hk/cgi/db/thesis.pl?ECED%202007%20CHENA.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Baroody, Ramzi. "Nested relations and object-orientation on secondary storage." Thesis, McGill University, 1993. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=69697.

Full text
Abstract:
This thesis discusses the implementation of inheritance and nesting in a relational database. The purpose of this is to integrate object-oriented concepts into the relational model.
RELIX is a database programming system based on the relational model. Inheritance and nesting are two important features that are desirable to have in a database, and our aim was therefore to incorporate these features into RELIX. The implementation was done using RELIX's existing relational functionality, without any modification. Inheritance and nesting were implemented using the natural join, showing that they can be implemented using relational operations. New syntax was added to RELIX to enable the user to take advantage of inheritance and nesting, thus giving the programmer an object-oriented view of a relational database. We also took advantage of the dependence of inheritance and nesting on the natural join, and implemented an alternative algorithm for it based on the concept of a join index. This algorithm improves the performance of natural joins for low-activity operations such as those associated with inheritance and nesting.
APA, Harvard, Vancouver, ISO, and other styles
38

Zhao, Xiaoyan 1966. "Trie methods for structured data on secondary storage." Thesis, McGill University, 2000. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=36855.

Full text
Abstract:
This thesis presents trie organizations for one-dimensional and multidimensional structured data on secondary storage. The new trie structures have several distinctive features: (1) they provide significant storage compression by sharing common paths near the root; (2) they are partitioned into pages and are suitable for secondary storage; (3) they are capable of dynamic insertions and deletions of records; (4) they support efficient multidimensional variable-resolution queries by storing the most significant bits near the root.
We apply the trie structures to indexing, storing and querying structured data on secondary storage. We are interested in the storage compactness, the I/O efficiency, the order-preserving properties, the general orthogonal range queries and the exact match queries for very large files and databases. We also apply the trie structures to relational joins (set operations).
We compare trie structures to various data structures on secondary storage: multipaging and grid files in the direct access method category, R-trees/R*-trees and X-trees in the logarithmic access cost category, as well as some representative join algorithms for performing join operations. Our results show that range queries by trie method are superior to these competitors in search cost when queries return more than a few records and are competitive to direct access methods for exact match queries. Furthermore, as the trie structure compresses data, it is the winner in terms of storage compared to all other methods mentioned above.
We also present a new tidy function for order-preserving key-to-address transformation. Our tidy function is easy to construct and cheaper in access time and storage cost compared to its closest competitor.
APA, Harvard, Vancouver, ISO, and other styles
39

Johnston, Reece G. "Secure storage via information dispersal across network overlays." Thesis, The University of Alabama in Huntsville, 2016. http://pqdtopen.proquest.com/#viewpdf?dispub=10157562.

Full text
Abstract:

In this paper, we describe a secure distributed storage model to be used especially with untrusted devices, most notably cloud storage devices. The model achieves this through a peer-to-peer overlay and storage protocol designed to run on existing networked systems. We utilize a structured overlay that is organized in a layered, hierarchical manner based on the underlying network structure. These layers are used as storage sites for pieces of data near the layer at which that data is needed. The data pieces are generated and distributed via a technique called an information dispersal algorithm (IDA), which utilizes an erasure code such as Cauchy Reed-Solomon (RS). Through the use of this IDA, the data pieces are organized across neighboring layers to maximize locality and to prevent a compromise within one layer from compromising the data of that layer. Specifically, for a single datum to become compromised, a minimum of two layers would have to be compromised. As a result, the security, survivability, and availability of the data are improved compared to other distributed storage systems. We present significant background in this area, followed by an analysis of similar distributed storage systems. Then, an overview of our proposed model is given along with an in-depth analysis, including both experimental results and theoretical analysis. The recorded overhead (encoding/decoding times and associated data sizes) shows that such a scheme can be utilized with little increase in overall latency, making the proposed model an ideal choice for any distributed storage needs.
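As a minimal illustration of dispersing data so that losing one layer's piece is not fatal, the sketch below uses a single-erasure XOR code, a much weaker stand-in for the Cauchy Reed-Solomon erasure code named in the abstract: a block becomes two data pieces plus one parity piece, and any two of the three recover it.

```python
# Illustrative (3, 2) XOR dispersal: tolerates the loss of any one piece.

def disperse(block: bytes):
    half = (len(block) + 1) // 2
    d0, d1 = block[:half], block[half:].ljust(half, b"\0")   # pad to equal length
    parity = bytes(a ^ b for a, b in zip(d0, d1))
    return d0, d1, parity, len(block)

def recover(d0, d1, parity, length):
    if d0 is None:
        d0 = bytes(a ^ b for a, b in zip(d1, parity))
    if d1 is None:
        d1 = bytes(a ^ b for a, b in zip(d0, parity))
    return (d0 + d1)[:length]

d0, d1, parity, n = disperse(b"secret survivable payload")
print(recover(None, d1, parity, n))   # reconstructed despite losing piece d0
```

A real IDA with an (n, k) Reed-Solomon code generalises this to any k-of-n reconstruction, which is what lets the pieces be spread across overlay layers.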

APA, Harvard, Vancouver, ISO, and other styles
40

Funck, Johan. "Distributed Database Storage Solution in Java." Thesis, Umeå University, Department of Computing Science, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-34185.

Full text
Abstract:

Car sales companies have in the last couple of years discovered that there is a big market in storing their customers' summer and winter tires for a small fee. For the customers it is very convenient to get rid of the well-known storage problem with seasonal tires. Burlin Motor Umeå is one of these companies: it offers seasonal storage and changing of tires in autumn and spring, as well as washing of tires. The main problem with this kind of storage is how to make the storage easy to overview and how to keep track of all tires. This paper reports on a distributed storage solution in Java for summer and winter tires, based on criteria from Burlin Motor Umeå.

APA, Harvard, Vancouver, ISO, and other styles
41

McDonald, Eric Lawrence. "A video controller and distributed frame buffer for the J-machine." Thesis, Massachusetts Institute of Technology, 1995. http://hdl.handle.net/1721.1/35021.

Full text
Abstract:
Thesis (M.S.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1995.
Includes bibliographical references (p. 201-202).
by Eric Lawrence McDonald.
M.S.
APA, Harvard, Vancouver, ISO, and other styles
42

McNamara, Thomas William. "Nonvolatile hologram storage in BaTiO₃." Thesis, Massachusetts Institute of Technology, 1996. http://hdl.handle.net/1721.1/40993.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1996.
Includes bibliographical references (p. 145-152).
by Thomas William McNamara.
Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
43

Seering, Adam B. "Efficient storage of versioned matrices." Thesis, Massachusetts Institute of Technology, 2011. http://hdl.handle.net/1721.1/66705.

Full text
Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2011.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student submitted PDF version of thesis.
Includes bibliographical references (p. 95-96).
Versioned-matrix storage is increasingly important in scientific applications. Computer-based scientific research, from astronomy observations to weather predictions to mechanical finite-element analyses, generates large matrices that must be stored and retrieved. Such matrices are often versioned: an initial matrix is stored, then a subsequent matrix based on the first is produced, then another subsequent matrix after that. For large databases of matrices, available disk storage can be a substantial constraint. I propose a framework and programming interface for storing such versioned matrices, and consider a variety of intra-matrix and inter-matrix approaches to data storage and compression, taking into account disk-space usage, performance for inserting data, and performance for retrieving data from the database. For inter-matrix "delta" compression, I explore and compare several differencing algorithms, and several means of selecting which arrays are differenced against each other, with the aim of optimizing both disk-space usage and insert and retrieve performance. This work shows that substantial disk-space savings and performance improvements can be achieved by judicious use of these techniques. In particular, with a combination of Lempel-Ziv compression and a proposed form of delta compression, it is possible both to decrease disk usage by a factor of 10 and to increase query performance by a factor of two or more, for particular data sets and query workloads. Various other strategies can dramatically improve query performance in particular edge cases; for example, a technique called "chunking", where a matrix is broken up and saved as several files on disk, can make query runtime approximately linear in the amount of data requested rather than in the size of the raw matrix on disk.
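As a rough, hedged sketch of the delta-plus-Lempel-Ziv idea described above (not the author's actual framework or API), the snippet below stores a child matrix version as a zlib-compressed dense difference from its parent, using NumPy arrays of identical shape and dtype.

```python
# Minimal sketch of inter-matrix delta compression for versioned matrices.
# Illustration of the general idea only, not the thesis's framework.
import zlib
import numpy as np

def store_version(child: np.ndarray, parent: np.ndarray) -> bytes:
    """Encode `child` as a compressed delta against `parent`."""
    delta = child - parent                        # dense element-wise difference
    return zlib.compress(delta.tobytes())         # Lempel-Ziv-family compression

def load_version(blob: bytes, parent: np.ndarray) -> np.ndarray:
    """Reconstruct the child matrix from its compressed delta."""
    delta = np.frombuffer(zlib.decompress(blob), dtype=parent.dtype)
    return parent + delta.reshape(parent.shape)

parent = np.zeros((1000, 1000))
child = parent.copy()
child[10, 20] = 3.5                               # sparse change compresses well
blob = store_version(child, parent)
assert np.array_equal(load_version(blob, parent), child)
```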
by Adam B. Seering.
M.Eng.
APA, Harvard, Vancouver, ISO, and other styles
44

Chan, Wilson John. "The Alewife secondary storage subsystem." Thesis, Massachusetts Institute of Technology, 1994. http://hdl.handle.net/1721.1/34095.

Full text
Abstract:
Thesis (M.S.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1994.
Includes bibliographical references (leaves 41-42).
by Wilson John Chan.
M.S.
APA, Harvard, Vancouver, ISO, and other styles
45

Greco, Nicola S. M. Massachusetts Institute of Technology. "Decentralized infrastructure for file storage." Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/113976.

Full text
Abstract:
Thesis: S.M., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2017.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 99-103).
How might we incentivize a peer-to-peer network to store users' files? The purpose of this research is to combine ideas from existing peer-to-peer file sharing systems, blockchain technology and Proofs-of-Storage to create an incentivized decentralized storage network, where every participant can earn a reward for storing and serving files or pay the network to store or retrieve their own. More broadly, in this thesis we present the elementary components for building decentralized infrastructure, culminating in a protocol for incentivizing file storage.
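Proofs-of-Storage come in several forms; as a very rough illustration of their challenge-response flavor (and not the protocol developed in this thesis), the sketch below has a verifier challenge a storage provider to hash a randomly chosen file chunk together with a fresh nonce. A real proof of storage would not require the verifier to keep the whole file.

```python
# Toy challenge-response storage check, assuming the verifier kept the file.
# Real Proofs-of-Storage let the verifier check with far less state; this
# sketch only shows the basic challenge/response flow.
import hashlib
import os

CHUNK = 4096

def respond(stored_file: bytes, chunk_index: int, nonce: bytes) -> bytes:
    """Prover: hash the challenged chunk together with the nonce."""
    chunk = stored_file[chunk_index * CHUNK:(chunk_index + 1) * CHUNK]
    return hashlib.sha256(nonce + chunk).digest()

def verify(original_file: bytes, chunk_index: int, nonce: bytes,
           response: bytes) -> bool:
    """Verifier: recompute the expected answer and compare."""
    return respond(original_file, chunk_index, nonce) == response

data = os.urandom(10 * CHUNK)
nonce = os.urandom(16)
assert verify(data, 3, nonce, respond(data, 3, nonce))
```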
by Nicola Greco.
S.M.
APA, Harvard, Vancouver, ISO, and other styles
46

Marmol, Leonardo. "Customized Interfaces for Modern Storage Devices." FIU Digital Commons, 2017. http://digitalcommons.fiu.edu/etd/3165.

Full text
Abstract:
In the past decade, we have seen two major evolutions in storage technology: flash storage and non-volatile memory. Both technologies are vastly different, in their properties and implementations, from the disk-based storage devices that current software stacks and applications have been built for and optimized over several decades. The second major trend the industry has been witnessing is new classes of applications that are moving away from conventional ACID (SQL) database access to storage. The resulting new class of NoSQL and in-memory storage applications consumes storage through application programming interfaces entirely different from those of its predecessors. The most significant outcome of these trends is a great mismatch, in terms of both application access interfaces and storage-stack implementations, when consuming these new technologies. In this work, we study the unique, intrinsic properties of current and next-generation storage technologies and propose new interfaces that allow application developers to get the most out of these technologies without having to become storage experts themselves. We first build a new type of NoSQL key-value (KV) store that is FTL-aware rather than merely flash-optimized. Our novel FTL-cooperative KV store design proved to simplify development and outperformed state-of-the-art KV stores while reducing write amplification. Next, to address the growing relevance of byte-addressable persistent memory, we build a new type of KV store that is customized and optimized for persistent memory. The resulting KV store illustrates how to program persistent memory effectively while exposing a simpler interface and performing better than more general solutions. As the final component of the thesis, we build a generic, native storage solution for byte-addressable persistent memory. This solution provides the most generic interface to applications, allowing them to store and manipulate arbitrarily structured data with strong durability and consistency properties. With it, existing applications as well as new "green field" applications will experience native performance through interfaces that are customized for the next storage technology evolution.
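The abstract gives no implementation details; the sketch below is only a loose illustration of why log-structured, append-only writes are attractive on flash (no in-place updates), and it is not the FTL-cooperative or persistent-memory design the thesis builds. The file path and record format are arbitrary choices.

```python
# Minimal append-only key-value store sketch: writes only ever append to a
# log, avoiding in-place updates (the rough intuition behind reducing write
# amplification on flash). Not the FTL-aware design from the thesis.
import json

class LogKV:
    def __init__(self, path: str):
        self.path = path
        self.index = {}        # key -> byte offset of its latest record
        self.end = 0           # current end-of-log offset
        self._replay()

    def _replay(self):
        """Rebuild the in-memory index by scanning the log once."""
        try:
            with open(self.path, "rb") as f:
                for line in f:
                    self.index[json.loads(line)["k"]] = self.end
                    self.end += len(line)
        except FileNotFoundError:
            pass

    def put(self, key: str, value: str):
        record = json.dumps({"k": key, "v": value}).encode() + b"\n"
        with open(self.path, "ab") as f:
            f.write(record)                       # append-only write
        self.index[key] = self.end
        self.end += len(record)

    def get(self, key: str) -> str:
        with open(self.path, "rb") as f:
            f.seek(self.index[key])
            return json.loads(f.readline())["v"]

kv = LogKV("/tmp/toy_kv.log")
kv.put("sensor:1", "42")
assert kv.get("sensor:1") == "42"
```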
APA, Harvard, Vancouver, ISO, and other styles
47

Lu, Kan. "Lattice-matched (In,Ga)P buffer layers for ZnSe based visible emitters." Thesis, Massachusetts Institute of Technology, 1994. http://hdl.handle.net/1721.1/37509.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Brasca, Claudio M. E. "5GHz CMOS resonant clock buffer with quadrature generation for fiber optic applications." Thesis, Massachusetts Institute of Technology, 2004. http://hdl.handle.net/1721.1/30370.

Full text
Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2004.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Includes bibliographical references (leaves 102-104).
Clock buffers constitute a major source of power dissipation in VLSI circuits. In CMOS the load is primarily capacitive, so an inductive shunt can reduce the real power required. This almost-adiabatic topology is referred to as a resonant buffer. Two resonant buffers can be actively controlled via additional variable capacitance to deliver quadrature signals from a single incoming clock. The cost of this quadrature generation is the added complexity of the control algorithm; the advantage is 85% less power than alternative methods. The topology is used to create quadrature signals and drive the clock inputs of a bang-bang half-rate phase detector in a 10 Gbit/s clock and data recovery (CDR) circuit. The 0.13 um CMOS implementation shows significant power savings. A useful closed-form expression for the jitter transfer characteristic of generic linear-time-invariant filters is derived and applied to the proposed buffer to show that it can be transparently integrated into existing CDR architectures. The work for this thesis was conducted in part at Analog Devices Inc.
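The abstract's premise, that a mostly capacitive clock load can be resonated out with an inductive shunt, follows the standard LC resonance relation; the sizing below is purely illustrative (the 1 pF load capacitance is an assumed value, not one reported in the thesis).

```latex
% Standard LC resonance relation used to size a shunt inductor L for a clock
% load capacitance C at frequency f_0 (values are illustrative only):
\[
  f_0 = \frac{1}{2\pi\sqrt{LC}}
  \;\;\Longrightarrow\;\;
  L = \frac{1}{(2\pi f_0)^2\,C}
  \approx \frac{1}{\big(2\pi \cdot 5\,\mathrm{GHz}\big)^{2} \cdot 1\,\mathrm{pF}}
  \approx 1.0\,\mathrm{nH}.
\]
```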
by Claudio M.E. Brasca.
M.Eng.
APA, Harvard, Vancouver, ISO, and other styles
49

Ankireddipally, L. R. "Formalization of storage considerations in software design." Morgantown, W. Va. : [West Virginia University Libraries], 1999. http://etd.wvu.edu/templates/showETD.cfm?recnum=548.

Full text
Abstract:
Thesis (Ph. D.)--West Virginia University, 1999.
Title from document title page. Document formatted into pages; contains vii, 110 p. : ill. Includes abstract. Includes bibliographical references (p. 95-97).
APA, Harvard, Vancouver, ISO, and other styles
50

Sikalinda, Paul. "Analyzing Storage System Workloads." Thesis, University of Cape Town, 2006. http://pubs.cs.uct.ac.za/archive/00000410/.

Full text
Abstract:
Analysis of storage system workloads is important for a number of reasons. The analysis might be performed to understand the usage patterns of existing storage systems. It is very important for architects to understand these usage patterns when designing a new storage system or improving the design of an existing one. It is also important for a system administrator to understand them when configuring and tuning a storage system. The analysis might also be performed to determine the relationship between any two given workloads: before a decision is taken to pool storage resources to increase throughput, there is a need to establish whether the workloads involved are correlated or not. Furthermore, analysis of storage system workloads can be done to monitor usage and to understand the storage requirements and behavior of system and application software. Another very important reason for analyzing storage system workloads is the need to come up with correct workload models for storage system evaluation. For the evaluation, based on simulations or otherwise, to be reliable, one has to analyze, understand and correctly model the workloads. In our work we have developed a general tool, called ESSWA (Enterprize Storage System Workload Analyzer), for analyzing storage system workloads, which has a number of advantages over other storage system workload analyzers described in the literature. Given a storage system workload in the form of an I/O trace file containing data for the workload parameters, ESSWA produces statistics of the data. From the statistics one can derive mathematical models, in the form of probability distribution functions, for the workload parameters. The statistics and mathematical models describe only the particular workload for which they are produced, because storage system workload characteristics are sensitive to the file system and buffer pool design and implementation, so the results of any analysis are less broadly applicable. We experimented with ESSWA by analyzing storage system workloads represented by three sets of I/O traces at our disposal. Our results, among other things, show that: I/O request sizes are influenced by the operating system in use; the start addresses of I/O requests are somewhat influenced by the application; and the exponential probability density function, which is often used in simulation of storage systems to generate inter-arrival times of I/O requests, is not the best model for that purpose in the workloads that we analyzed. We found the Weibull, lognormal and beta probability density functions to be better models.
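As a hedged sketch of the kind of distribution fitting the abstract describes (not ESSWA itself), the snippet below fits exponential, Weibull, and lognormal models to a set of I/O inter-arrival times with SciPy and compares them by log-likelihood; the inter-arrival data here is synthetic.

```python
# Sketch of fitting candidate distributions to I/O inter-arrival times and
# comparing them by log-likelihood. ESSWA's actual methodology is not
# reproduced here; the data below is synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
inter_arrivals = rng.weibull(0.7, size=5000) * 2.0e-3   # synthetic, in seconds

candidates = {
    "exponential": stats.expon,
    "weibull":     stats.weibull_min,
    "lognormal":   stats.lognorm,
}

for name, dist in candidates.items():
    params = dist.fit(inter_arrivals, floc=0)            # pin the location at zero
    loglik = np.sum(dist.logpdf(inter_arrivals, *params))
    print(f"{name:11s} log-likelihood = {loglik:.1f}")   # higher is a better fit
```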
APA, Harvard, Vancouver, ISO, and other styles