To see the other types of publications on this topic, follow the link: Distributed storage networks.

Dissertations / Theses on the topic 'Distributed storage networks'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 41 dissertations / theses for your research on the topic 'Distributed storage networks.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Silva, Tarciana Dias da. "DDAN: A distributed directory for ambient networks." Universidade Federal de Pernambuco, 2008. https://repositorio.ufpe.br/handle/123456789/2130.

Full text
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior
Dias da Silva, Tarciana; Fawzi Hadj Sadok, Djamel. DDAN: A distributed directory for ambient networks. 2008. Dissertação (Mestrado). Programa de Pós-Graduação em Ciência da Computação, Universidade Federal de Pernambuco, Recife, 2008.
APA, Harvard, Vancouver, ISO, and other styles
2

Li, Xiaodong. "RDSS: a reliable and efficient distributed storage system." Ohio University / OhioLINK, 2004. http://www.ohiolink.edu/etd/view.cgi?ohiou1103127547.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Tauber, Markus. "Autonomic management in a distributed storage system." Thesis, St Andrews, 2010. http://hdl.handle.net/10023/926.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Kanjani, Khushboo. "Supporting fault-tolerant communication in networks." College Station, Tex.: Texas A&M University, 2008. http://hdl.handle.net/1969.1/ETD-TAMU-3118.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Ten, Chui Fen. "Loss of mains detection and amelioration on electrical distribution networks." Thesis, University of Manchester, 2011. https://www.research.manchester.ac.uk/portal/en/theses/loss-of-mains-detection-and-amelioration-on-electrical-distribution-networks(b7680a62-7caa-4fd3-89d4-d45e649f8ef9).html.

Full text
Abstract:
Power system islanding is gaining increasing interest as a way to maintain power supply continuity. However, before this operation becomes viable, the technical challenges associated with its operation must first be addressed. A possible solution to one of these challenges, out-of-synchronism reclosure, is to run the islanded system in synchronism with the mains whilst not being electrically connected. This concept, known as 'synchronous islanded operation', avoids the danger of out-of-synchronism reclosure of the islanded system onto the mains. The research in this thesis was based on the concepts presented in [1-3] and specifically applied to multiple-DG island scenarios. The additional control challenges associated with this scenario are identified and an appropriate control scheme, more suited to the operation of multiple-DG synchronous islands, is proposed. The results suggest that multiple-DG synchronous islanded operation is feasible, but a supervisory controller is necessary to facilitate the information exchange within the islanded system and enable stable operation. For maximum flexibility, the synchronous island must be capable of operating with a diversity of generation. The difficulties become further complicated when some or all of the generation consists of intermittent sources. The performance of the proposed control scheme in the presence of a significant contribution of renewable sources within the island is investigated. Two types of wind technologies were developed in PSCAD/EMTDC for this purpose: a fixed-speed induction generator (FSIG) based wind farm and a doubly-fed induction generator (DFIG) based wind farm. The results show that although synchronous islanded operation is still achievable, the intermittent output has an adverse effect on the control performance, and in particular limits the magnitude of disturbances that can occur in the island without going beyond the relaxed synchronisation limits of ±60°. Energy storage is proposed as a way to reduce the wind farm power variation and improve phase controller response. A supplementary control is also proposed such that the DFIG contributes to the inertial response. The potential of the proposed scheme (energy storage + supplementary control) is evaluated using case studies. The results show a substantial improvement to the load acceptance limits, even beyond the case where no wind farm is connected. The benefit of the proposed scheme is even more apparent as the share of wind-generated energy in the island grows.
APA, Harvard, Vancouver, ISO, and other styles
6

Pagonis, Meletios. "Electrical power aspects of distributed propulsion systems in turbo-electric powered aircraft." Thesis, Cranfield University, 2015. http://dspace.lib.cranfield.ac.uk/handle/1826/9873.

Full text
Abstract:
The aerospace industry is currently looking at options for fulfilling the technological development targets set for the next aircraft generations. Conventional engines and aircraft architectures are now at a maturity level which makes the realisation of these targets extremely problematic. Radical solutions seem to be necessary, and Electric Distributed Propulsion is the most promising concept for future aviation. Several studies showed that the viability of this novel concept depends on the implementation of a superconducting power network. The particularities of a superconducting power network are described in this study, where novel components and new design conditions of these networks are highlighted. Simulink models to estimate the weight of fully superconducting machines have been developed in this research work, producing a relatively conservative prediction model compared to the NASA figures, which are the only reference available in the literature. A conceptual aircraft design architecture implementing a superconducting secondary electrical power system is also proposed. Depending on the size of the aircraft, and hence the electric load demand, the proposed superconducting architecture proved to be up to three times lighter than the current more electric configurations. The selection of such a configuration will also align with the general tendency towards a superconducting network for the proposed electric distributed propulsion concept. In addition, the hybrid nature of these configurations has also been explored and the potential enhanced role of energy storage mechanisms has been further investigated, leading to almost weight-neutral but far more flexible aircraft solutions. For the forecast timeframe, battery technology seems to be the only viable choice in terms of energy storage options. The anticipated weight of lithium-sulphur technology is the most promising for the proposed architectures and for the timeframe under investigation. The whole study is based on products and technologies which are expected to be available in the 2035 timeframe. Future radical changes in energy storage technologies may be possible, but the approach used in this study can be readily adapted to meet such changes.
APA, Harvard, Vancouver, ISO, and other styles
7

Sun, Wei. "Maximising renewable hosting capacity in electricity networks." Thesis, University of Edinburgh, 2015. http://hdl.handle.net/1842/10483.

Full text
Abstract:
The electricity network is undergoing significant changes in the transition to a low carbon system. The growth of renewable distributed generation (DG) creates a number of technical and economic challenges in the electricity network. While the development of the smart grid promises alternative ways to manage network constraints, their impact on the ability of the network to accommodate DG – the ‘hosting capacity’ – is not fully understood. It is of significance for both DNOs and DG developers to quantify the hosting capacity according to given technical or commercial objectives while subject to a set of predefined limits. The combinatorial nature of the hosting capacity problem, together with the intermittent nature of renewable generation and the complex actions of smart control systems, means that evaluation of hosting capacity requires appropriate optimisation techniques. This thesis extends the knowledge of hosting capacity. Three specific but related areas are examined to fill the gaps identified in existing knowledge. New evaluation methods are developed that allow the study of hosting capacity (1) under different curtailment priority rules, (2) with harmonic distortion limits, and (3) alongside energy storage systems. Together, these works improve DG planning in two directions: demonstrating the benefit provided by a range of smart grid solutions, and evaluating extensive impacts to ensure compliance with all relevant planning standards and grid codes. As an outcome, the methods developed can help both DNOs and DG developers make sound and practical decisions, facilitating the integration of renewable DG in a more cost-effective way.
APA, Harvard, Vancouver, ISO, and other styles
8

Zhang, Gong. "Data and application migration in cloud based data centers --architectures and techniques." Diss., Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/41078.

Full text
Abstract:
Computing and communication have continued to impact the way we run business, the way we learn, and the way we live. The rapid evolution of computing technology has also expedited the growth of digital data, the workload of services, and the complexity of applications. Today, the cost of managing storage hardware ranges from two to ten times the acquisition cost of the storage hardware. We see an increasing demand for technologies that transfer the management burden from humans to software. Data migration and application migration are among the popular technologies that enable computing and data storage management to be autonomic and self-managing. In this dissertation, we examine important issues in designing and developing scalable architectures and techniques for efficient and effective data migration and application migration. The first contribution we have made is to investigate the opportunity of automated data migration across multi-tier storage systems. The significant I/O improvement of Solid State Disks (SSD) over traditional rotational hard disks (HDD) motivates the integration of SSD into the existing storage hierarchy for enhanced performance. We developed an adaptive look-ahead data migration approach to effectively integrate SSD into the multi-tiered storage architecture. When using the fast and expensive SSD tier to store high-temperature data (hot data) while placing relatively low-temperature data (cold data) in the HDD tier, one of the important functionalities is to manage the migration of data as their access patterns change from hot to cold and vice versa. For example, workloads during day time in typical banking applications can be dramatically different from those during night time. We designed and implemented an adaptive look-ahead data migration model. A unique feature of our automated migration approach is its ability to dynamically adapt the data migration schedule to achieve the optimal migration effectiveness by taking into account application-specific characteristics and I/O profiles as well as workload deadlines. Our experiments running over a real system trace show that the basic look-ahead data migration model is effective in improving system resource utilization and that the adaptive look-ahead migration model is more efficient for continuously improving and tuning the performance and scalability of multi-tier storage systems. The second main contribution we have made in this dissertation research is to address the challenge of ensuring reliability and balancing loads across a network of computing nodes managed in a decentralized service computing system. Considering the provision of location-based services for geographically distributed mobile users, the continuous and massive service request workloads pose significant technical challenges for the system to guarantee scalable and reliable service provision. We design and develop a decentralized service computing architecture, called Reliable GeoGrid, with two unique features. First, we develop a distributed workload migration scheme with controlled replication, which utilizes a shortcut-based optimization to increase the resilience of the system against various node failures and network partition failures. Second, we devise a dynamic load balancing technique to scale the system in anticipation of unexpected workload changes.
Our experimental results show that the Reliable GeoGrid architecture is highly scalable under changing service workloads with moving hotspots and highly reliable in the presence of massive node failures. The third research thrust in this dissertation is focused on studying the process of migrating applications from local physical data centers to the Cloud. We design migration experiments, study the error types, and build an error model. Based on the analysis and observations from the migration experiments, we propose the CloudMig system, which provides both configuration validation and installation automation, effectively reducing configuration errors and installation complexity. In this dissertation, I provide an in-depth discussion of the principles of migration and its applications in improving data storage performance, balancing service workloads and adapting to the cloud platform.
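The hot/cold tiering idea in this abstract can be sketched in a few lines. The following is a minimal illustration, not the dissertation's model: the window-based `access_count` bookkeeping, the promotion threshold, and the tier structures are all assumptions here, and the actual look-ahead scheduler additionally uses I/O profiles and workload deadlines.

```python
# Minimal sketch of temperature-based tiering between an SSD and an HDD tier.
# The threshold policy and data structures are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class TieredStore:
    hot_threshold: int = 100                     # accesses per window before promotion
    ssd: set = field(default_factory=set)        # fast, expensive tier (hot data)
    hdd: set = field(default_factory=set)        # slow, cheap tier (cold data)
    access_count: dict = field(default_factory=dict)

    def read(self, block: str) -> None:
        self.access_count[block] = self.access_count.get(block, 0) + 1

    def migrate(self) -> None:
        """End-of-window pass: promote hot blocks to SSD, demote cooled ones to HDD."""
        for block, count in self.access_count.items():
            if count >= self.hot_threshold and block in self.hdd:
                self.hdd.discard(block)
                self.ssd.add(block)
            elif count < self.hot_threshold and block in self.ssd:
                self.ssd.discard(block)
                self.hdd.add(block)
        self.access_count.clear()                # start a fresh observation window
```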
APA, Harvard, Vancouver, ISO, and other styles
9

Nae, Yakov. "Modelo distribuído para agregação de armazenamento em redes de sensores sem fio = Distributed model for storage aggregation in wireless sensor networks." [s.n.], 2011. http://repositorio.unicamp.br/jspui/handle/REPOSIP/260109.

Full text
Abstract:
Advisor: Lee Luan Ling
Master's dissertation - Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação
Abstract: Storage management in Wireless Sensor Networks (WSN) is a very critical issue in terms of the system's lifetime. While WSNs host a vast storage capacity in the aggregate, that capacity cannot be used entirely. Eventually, the entire network may fail when the first sensor has its own storage capacity depleted, leaving behind a large amount of unutilized storage capacity. We suggest that sensors should be able to detect unutilized storage capacity in order to prolong their functionality. However, for large-scale WSNs this can be a difficult task, since sensors may not be aware of the existence of others. This work has two main contributions: an optimization of the overall storage capacity for large-scale WSNs and a novel routing approach of deterministic "random" walks. We present a new storage model based on building "on-demand" Distributed Storage Chains (DSC). These chains represent partnerships between sensors that share their storage capacity. As a result, sensors are no longer subject to their own storage limitations but to the total amount of available storage in the WSN. We construct these chains via deterministic walks over our suggested topology. However, we show that these walks resemble the behavior of random walks and are therefore highly efficient in terms of locating available storage.
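As a toy illustration of locating spare storage by walking a fixed topology, the sketch below hops node to node until it finds enough free space. The ring-shaped neighbor map and the hop budget are assumptions for illustration only; the dissertation builds its chains over its own suggested topology, where the deterministic walk is shown to behave like an efficient random walk.

```python
# Toy deterministic walk: follow fixed next-hop pointers until a node with
# enough free capacity is found, then extend the storage chain to it.
def find_storage(neighbors, free_bytes, start, needed, max_hops=64):
    node, visited = start, []
    for _ in range(max_hops):
        visited.append(node)
        if free_bytes[node] >= needed:
            return node, visited          # partner found: extend the chain here
        node = neighbors[node]            # deterministic next hop
    return None, visited

# Example: four sensors on a ring, only node 'C' has capacity left.
ring = {"A": "B", "B": "C", "C": "D", "D": "A"}
free = {"A": 0, "B": 0, "C": 512, "D": 0}
print(find_storage(ring, free, "A", 256))  # ('C', ['A', 'B', 'C'])
```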
Master's
Telecommunications and Telematics
Master in Electrical Engineering
APA, Harvard, Vancouver, ISO, and other styles
10

Conte, Simone Ivan. "The Sea of Stuff : a model to manage shared mutable data in a distributed environment." Thesis, University of St Andrews, 2019. http://hdl.handle.net/10023/16827.

Full text
Abstract:
Managing data is one of the main challenges in distributed systems and computer science in general. Data is created, shared, and managed across heterogeneous distributed systems of users, services, applications, and devices without a clear and comprehensive data model. This technological fragmentation and lack of a common data model result in a poor understanding of what data is, how it evolves over time, how it should be managed in a distributed system, and how it should be protected and shared. From a user perspective, for example, backing up data over multiple devices is a hard and error-prone process, or synchronising data with a cloud storage service can result in conflicts and unpredictable behaviours. This thesis identifies three challenges in data management: (1) how to extend the current data abstractions so that content, for example, is accessible irrespective of its location, versionable, and easy to distribute; (2) how to enable transparent data storage relative to locations, users, applications, and services; and (3) how to allow data owners to protect data against malicious users and automatically control content over a distributed system. These challenges are studied in detail in relation to the current state of the art and addressed throughout the rest of the thesis. The artefact of this work is the Sea of Stuff (SOS), a generic data model of immutable self-describing location-independent entities that allow the construction of a distributed system where data is accessible and organised irrespective of its location, easy to protect, and can be automatically managed according to a set of user-defined rules. The evaluation of this thesis demonstrates the viability of the SOS model for managing data in a distributed system and using user-defined rules to automatically manage data across multiple nodes.
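The immutable, self-describing, location-independent entities the SOS model is built on are naturally illustrated with content addressing, where an entity's identity is the hash of its bytes. This is a generic sketch under that assumption; the field names are hypothetical and not the actual SOS schema.

```python
# Content-addressed entity: identity derives from the data itself, so the
# entity is immutable and can be fetched from any node that holds the bytes.
import hashlib

def make_entity(content: bytes, content_type: str) -> dict:
    guid = hashlib.sha256(content).hexdigest()   # location-independent identity
    return {"guid": guid, "type": content_type, "size": len(content)}

store = {}                                       # any node could hold this map
data = b"hello, sea of stuff"
entity = make_entity(data, "text/plain")
store[entity["guid"]] = data                     # retrievable from any replica
assert hashlib.sha256(store[entity["guid"]]).hexdigest() == entity["guid"]
```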
APA, Harvard, Vancouver, ISO, and other styles
11

Koch, Douglas J. "Positioning the Reserve Headquarters Support (RHS) system for multi-layered enterprise use." Thesis, Monterey, California : Naval Postgraduate School, 2009. http://edocs.nps.edu/npspubs/scholarly/theses/2009/Sep/09Sep%5FKoch.pdf.

Full text
Abstract:
Thesis (M.S. in Information Technology Management)--Naval Postgraduate School, September 2009.
Thesis Advisor(s): Cook, Glenn. "September 2009." Description based on title screen as viewed on 6 November 2009. Author(s) subject terms: Enterprise architecture, project management, business process transformation, operating model, IT governance, IT systems, data quality, data migration, business operating model, personnel IT systems, HRM, ERP. Includes bibliographical references (p. 89-92). Also available in print.
APA, Harvard, Vancouver, ISO, and other styles
12

Kumar, Akshay. "Efficient Resource Allocation Schemes for Wireless Networks with Diverse Quality-of-Service Requirements." Diss., Virginia Tech, 2016. http://hdl.handle.net/10919/87529.

Full text
Abstract:
Quality-of-Service (QoS) to users is a critical requirement of resource allocation in wireless networks and has drawn significant research attention over a long time. However, the QoS requirements differ vastly based on the wireless network paradigm. At one extreme, we have a millimeter wave small-cell network for streaming data that requires very high throughput and low latency. At the other end, we have Machine-to-Machine (M2M) uplink traffic with low throughput and low latency. In this dissertation, we investigate and solve QoS-aware resource allocation problems for diverse wireless paradigms. We first study cross-layer dynamic spectrum allocation in an LTE macro-cellular network with fractional frequency reuse to improve the spectral efficiency for cell-edge users. We show that the resultant optimization problem is NP-hard and propose a low-complexity layered spectrum allocation heuristic that strikes a balance between rate maximization and fairness of allocation. Next, we develop an energy efficient downlink power control scheme in an energy harvesting small-cell base station equipped with local cache and wireless backhaul. We also study the tradeoff between the cache size and the energy harvesting capabilities. We next analyze the file read latency in Distributed Storage Systems (DSS). We propose a heterogeneous DSS model wherein the stored data is categorized into multiple classes based on the arrival rate of read requests, fault tolerance for storage, etc. Using a queuing theoretic approach, we establish bounds on the average read latency for different scheduling policies. We also show that erasure coding in DSS serves the dual purpose of reducing read latency and increasing the energy efficiency. Lastly, we investigate the problem of delay-efficient packet scheduling in the M2M uplink with heterogeneous traffic characteristics. We classify the uplink traffic into multiple classes and propose a proportionally-fair delay-efficient heuristic packet scheduler. Using a queuing theoretic approach, we next develop a delay optimal multiclass packet scheduler and later extend it to joint medium access control and packet scheduling for the M2M uplink. Using extensive simulations, we show that the proposed schedulers perform better than state-of-the-art schedulers in terms of average delay and packet delay jitter.
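The latency benefit of erasure coding mentioned here comes from parallelism: with an (n, k) code, a read completes once the fastest k of n parallel chunk fetches return, so the latency is the k-th order statistic of the per-node service times. A small Monte Carlo sketch of that effect follows; the exponential service times and the (n, k) values are illustrative assumptions, not the dissertation's queuing model.

```python
# Monte Carlo estimate of mean read latency for an (n, k) erasure-coded read:
# the read finishes when the k-th fastest of n parallel fetches completes.
import random

def mean_read_latency(n: int, k: int, rate: float = 1.0, trials: int = 100_000) -> float:
    total = 0.0
    for _ in range(trials):
        times = sorted(random.expovariate(rate) for _ in range(n))
        total += times[k - 1]          # k-th fastest response completes the read
    return total / trials

print(mean_read_latency(1, 1))   # single replica: ~1.0
print(mean_read_latency(4, 2))   # (4, 2) code: ~0.58, despite needing 2 chunks
```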
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
13

Valvo, Daniel William. "Repairing Cartesian Codes with Linear Exact Repair Schemes." Thesis, Virginia Tech, 2020. http://hdl.handle.net/10919/98818.

Full text
Abstract:
In this paper, we develop a scheme to recover a single erasure when using a Cartesian code, in the context of a distributed storage system. Particularly, we develop a scheme with considerations to minimize the associated bandwidth and maximize the associated dimension. The problem of recovering a missing node's data exactly in a distributed storage system is known as the exact repair problem. Previous research has studied the exact repair problem for Reed-Solomon codes. We focus on Cartesian codes, and show we can enact the recovery using a linear exact repair scheme framework, similar to the one outlined by Guruswami and Wootters in 2017.
Master of Science
Distributed storage systems are systems which store a single data file over multiple storage nodes. Each storage node has a certain storage efficiency, the "space" required to store the information on that node. The value of these systems is their ability to safely store data for extended periods of time. We want to design distributed storage systems such that if one storage node fails, we can recover it from the data in the remaining nodes. Recovering a node from the data stored in the other nodes requires the nodes to communicate data with each other. Ideally, these systems are designed to minimize the bandwidth, the inter-nodal communication required to recover a lost node, as well as maximize the storage efficiency of each node. A great mathematical framework to build these distributed storage systems on is erasure codes. In this paper, we will specifically develop distributed storage systems that use Cartesian codes. We will show that in the right setting, these systems can have a very similar bandwidth to systems built from Reed-Solomon codes, without much loss in storage efficiency.
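The recovery idea in this summary can be seen in miniature with a single XOR parity node. This is a generic erasure-coding sketch, not the thesis's Cartesian-code scheme (which achieves a far better bandwidth/storage tradeoff); it only illustrates rebuilding a lost node from data the survivors communicate.

```python
# Smallest possible single-erasure code: k data nodes plus one XOR parity node.
def encode(data_blocks):
    parity = 0
    for b in data_blocks:
        parity ^= b
    return data_blocks + [parity]      # n = k + 1 storage nodes

def repair(nodes, lost_index):
    """Recover the erased node by XOR-ing everything the survivors send."""
    rebuilt = 0
    for i, b in enumerate(nodes):
        if i != lost_index:
            rebuilt ^= b
    return rebuilt

nodes = encode([0b1010, 0b0110, 0b1111])
assert repair(nodes, 1) == 0b0110      # node 1 failed; survivors rebuild it
```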
APA, Harvard, Vancouver, ISO, and other styles
14

Homem, Irvin. "LEIA: The Live Evidence Information Aggregator : A Scalable Distributed Hypervisor‐based Peer‐2‐Peer Aggregator of Information for Cyber‐Law Enforcement I." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-177902.

Full text
Abstract:
The Internet in its most basic form is a complex information sharing organism. There are billions of interconnected elements with varying capabilities that work together supporting numerous activities (services) through this information sharing. In recent times, these elements have become portable, mobile, highly computationally capable and more than ever intertwined with human controllers and their activities. They are also rapidly being embedded into other everyday objects and sharing more and more information in order to facilitate automation, signaling that the rise of the Internet of Things is imminent. In every human society there are always miscreants who prefer to drive against the common good and engage in illicit activity. It is no different within the society interconnected by the Internet (The Internet Society). Law enforcement in every society attempts to curb perpetrators of such activities. However, it is immensely difficult when the Internet is the playing field. The amount of information that investigators must sift through is incredibly massive and prosecution timelines stated by law are prohibitively narrow. The main solution towards this Big Data problem is seen to be the automation of the Digital Investigation process. This encompasses the entire process: from the detection of malevolent activity, seizure/collection of evidence, analysis of the evidentiary data collected and finally to the presentation of valid postulates. This paper focuses mainly on the automation of the evidence capture process in an Internet of Things environment. However, in order to comprehensively achieve this, the subsequent and consequent procedures of detection of malevolent activity and analysis of the evidentiary data collected, respectively, are also touched upon. To this effect, we propose the Live Evidence Information Aggregator (LEIA) architecture that aims to be a comprehensive automated digital investigation tool. LEIA is in essence a collaborative framework that hinges upon interactivity and sharing of resources and information among participating devices in order to achieve the necessary efficiency in data collection in the event of a security incident. Its ingenuity makes use of a variety of technologies to achieve its goals. This is seen in the use of crowdsourcing among devices in order to achieve more accurate malicious event detection; Hypervisors with inbuilt intrusion detection capabilities to facilitate efficient data capture; Peer to Peer networks to facilitate rapid transfer of evidentiary data to a centralized data store; Cloud Storage to facilitate storage of massive amounts of data; and the Resource Description Framework from Semantic Web Technologies to facilitate the interoperability of data storage formats among the heterogeneous devices. Within the description of the LEIA architecture, a peer to peer protocol based on the BitTorrent protocol is proposed, corresponding data storage and transfer formats are developed, and network security protocols are also taken into consideration. In order to demonstrate the LEIA architecture developed in this study, a small-scale prototype with limited capabilities has been built and tested. The prototype functionality focuses only on the secure, remote acquisition of the hard disk of an embedded Linux device over the Internet and its subsequent storage on a cloud infrastructure.
The successful implementation of this prototype goes to show that the architecture is feasible and that the automation of the evidence seizure process makes the otherwise arduous process easy and quick to perform.
APA, Harvard, Vancouver, ISO, and other styles
15

Lo, Sai-Lai. "A modular and extensible network storage architecture." Thesis, University of Cambridge, 1993. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.318516.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Chou, Tahsin. "Storage Reduction for Distributed Disk-Based Backup in Storage Area Network." NSUWorks, 2006. http://nsuworks.nova.edu/gscis_etd/452.

Full text
Abstract:
For many organizations today, data is their most important asset. How to safeguard the data in this dynamic environment is an important issue in any organization. The backup process is cumbersome in large organizations. Typically, the backup saves files from a network client to a remote storage location on a daily basis. This means that the same file, often in multiple versions, is saved and stored many times, resulting in excessive storage. A distributed disk-based backup system collects the data to be backed up from network clients and stores files remotely on multiple storage locations in the network. In recent years, Storage Area Network (SAN) has become a popular solution to effectively store or access huge amounts of enterprise information. A SAN is a dedicated storage network designed specifically to connect storage, backup devices, and servers. By consolidating storage in one location, customers benefit from efficiencies of management, utilization, and reliability. Since there is no end in sight to the exponential growth of enterprise data, storage reduction technology plays an important role in enterprise backup. The goal of this study is to investigate how to effectively reduce storage usage through a distributed disk-based backup system in a SAN. The working name for the distributed backup system used for this study is SAN-Backup system. In SAN-Backup system, the backup storage reduction can be made at file level, block level, and backup catalog level. The design and development of this distributed disk-based backup system utilized a phased approach. The prototype of SAN-Backup system was validated through enterprise backup environments. The validation process included backup storage usage and backup performance comparisons between a backup application with an embedded storage reduction technology and a backup application without an embedded storage reduction technology both running in a SAN. The test results showed that SAN-Backup system reduced storage usage by 55.9% when compared to Backup Express Enterprise backup system, and also improved the overall backup performance by 43.74%. The test results indicated that SAN-Backup system did significantly reduce the backup storage usage and also improved the backup performance.
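Block-level storage reduction of the kind evaluated here is commonly implemented by hashing fixed-size chunks and storing each unique chunk only once. The sketch below is a generic illustration under that assumption; the chunk size, hash choice and class names are hypothetical, not details of SAN-Backup.

```python
# Generic block-level deduplication: repeated backups of unchanged data add
# no new chunks, only a new recipe (the catalog entry) referencing old ones.
import hashlib

class DedupStore:
    def __init__(self, chunk_size: int = 4096):
        self.chunk_size = chunk_size
        self.chunks = {}                              # digest -> chunk bytes

    def backup(self, data: bytes) -> list:
        """Split a stream into chunks; store each unique chunk only once."""
        recipe = []
        for i in range(0, len(data), self.chunk_size):
            chunk = data[i:i + self.chunk_size]
            digest = hashlib.sha256(chunk).hexdigest()
            self.chunks.setdefault(digest, chunk)     # duplicates cost nothing
            recipe.append(digest)
        return recipe

    def restore(self, recipe: list) -> bytes:
        return b"".join(self.chunks[d] for d in recipe)

store = DedupStore()
night1 = store.backup(b"A" * 4096 + b"B" * 4096)      # first backup: 2 new chunks
night2 = store.backup(b"A" * 4096 + b"B" * 4096)      # unchanged: 0 new chunks
assert night1 == night2 and len(store.chunks) == 2
assert store.restore(night2) == b"A" * 4096 + b"B" * 4096
```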
APA, Harvard, Vancouver, ISO, and other styles
17

Kim, Eunsam. "Enhanced distributed multimedia services using networked storage systems." [Gainesville, Fla.] : University of Florida, 2006. http://purl.fcla.edu/fcla/etd/UFE0015690.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

PHUTATHUM, AWASSADA. "Implementing Distributed Storage Systems by Network Coding and Considering Complexity of Decoding." Thesis, KTH, Kommunikationsteori, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-103607.

Full text
Abstract:
Recently, network coding for distributed storage systems has become a popular field due to increasing applications such as video, VoIP and mail. There is a lot of theoretical work in this field, yet not enough practical study. In this thesis we implement a distributed storage system using network coding. In our implementation, three coding strategies are applied to this system: replication, regenerating codes and regenerating codes with repair by transfer. To study the advantages and disadvantages of these strategies, we measure the probability of successful download, repair time and processing time after implementation. We further study regenerating codes with different finite fields. Moreover, we propose a method to lower the complexity of the decoding algorithm: assigning different numbers of connected storage nodes that a receiver uses to reconstruct an original file. Our results show that the regenerating code with repair by transfer is an optimal network code for the distributed storage system compared to the other strategies when working with a small finite field size. In particular, in GF(2), the code only uses exclusive-OR to encode and decode data. In addition, when the finite field is large, the probability of successful download increases at the cost of higher complexity compared to network codes with a small finite field size. To work in a small finite field, and consequently reduce decoding complexity, we show that increasing the number of connected nodes improves the probability of successful download. Thus we conclude that the regenerating code with repair by transfer is the optimal implementation within the system. However, if we only consider regenerating codes with different numbers of connected storage nodes retrieving the original file, a higher number of connected storage nodes is better than a lower number.
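The abstract's point that codes over GF(2) encode and decode with exclusive-OR alone can be shown with the smallest possible example: storing x1, x2 and x1 XOR x2 on three nodes lets a receiver rebuild the file from any two of them, unlike two-way replication of x1 and x2. The file split and node layout below are illustrative assumptions, not the thesis's test-bed configuration.

```python
# GF(2) toy code: three nodes hold (x1, x2, x1 ^ x2); any two suffice to decode.
def decode(available):               # available: dict node_id -> stored block
    if 0 in available and 1 in available:
        return available[0], available[1]
    if 0 in available:               # x2 = x1 XOR (x1 ^ x2)
        return available[0], available[0] ^ available[2]
    return available[1] ^ available[2], available[1]   # x1 = x2 XOR (x1 ^ x2)

x1, x2 = 0b1100, 0b1010
nodes = {0: x1, 1: x2, 2: x1 ^ x2}
for lost in nodes:                   # any single node failure is tolerated
    survivors = {i: b for i, b in nodes.items() if i != lost}
    assert decode(survivors) == (x1, x2)
```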
APA, Harvard, Vancouver, ISO, and other styles
19

Broman, Rickard. "A Practical Study of Network Coding in Distributed Storage Systems." Thesis, KTH, Kommunikationsteori, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-136360.

Full text
Abstract:
Highly increased data traffic over the last few years has led to a need to improve network efficiency. One way to achieve this is by network coding. In this thesis two codes, namely replication codes and regenerating codes, have been examined. Most other work in this area has been theoretical, so we created a testbed to perform practical tests. These practical results are then compared to the theoretical results with varying finite field size. It will be shown that the practical studies verify the theoretical work. Furthermore, we observe the probability of successful repair after several stages of repair. Moreover, the achievability of exact repair of a failed node in a tandem network has been examined. This has been proven possible, and the required finite field size is presented. Another issue in focus is the number of transfers required to achieve exact repair in such a network. The results show that 2k transfers are required, which is comparable to functional repair.
APA, Harvard, Vancouver, ISO, and other styles
20

Chareonvisal, Tanakorn. "Implementing Distributed Storage System by Network Coding in Presence of Link Failure." Thesis, KTH, Kommunikationsteori, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-103606.

Full text
Abstract:
Nowadays, increasing multimedia applications, e.g., video and voice over IP, social networks and email, pose higher demands for server storage and bandwidth in the networks. There is a concern that existing resources may not be able to support higher demands and reliability. Network coding was introduced to improve distributed storage systems. This thesis proposes ways to improve a distributed storage system, such as increasing the chance of recovering data in case a storage node or a link in the network fails. In this thesis, we study the concept of network coding in distributed storage systems. We start our description with a simple code, replication coding, and then follow with more complex codes such as erasure coding. After that, we implement these concepts in our testbed and measure performance by the probability of success in the download and repair criteria. Moreover, we compare the success probability for reconstruction of the original data between the minimum storage regenerating (MSR) and minimum bandwidth regenerating (MBR) methods. We also increase the field size to increase the probability of success. Finally, link failure was added to the testbed to measure reliability in a network. The results are analyzed and show that using maximum distance separable codes and increasing the field size can improve the performance of a network. Moreover, they also show improved reliability of the network in case there is a link failure in the repair process.
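For reference, the MSR and MBR points compared in this abstract sit at the two extremes of the storage-bandwidth tradeoff for regenerating codes. For a file of size M on an (n, k, d) code, with per-node storage α and repair bandwidth γ, the standard operating points from the regenerating-codes literature (quoted here for context, not derived in this thesis) are:

```latex
% Standard MSR and MBR points for an (n, k, d) regenerating code storing a
% file of size M; \alpha is per-node storage, \gamma is total repair bandwidth.
\alpha_{\mathrm{MSR}} = \frac{M}{k}, \qquad
\gamma_{\mathrm{MSR}} = \frac{M d}{k\,(d - k + 1)},
\qquad\text{and}\qquad
\alpha_{\mathrm{MBR}} = \gamma_{\mathrm{MBR}} = \frac{2 M d}{k\,(2d - k + 1)}.
```

MSR stores the information-theoretic minimum per node, while MBR minimises repair traffic at the cost of extra per-node storage; this is the tradeoff probed by the MSR/MBR comparison above.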
APA, Harvard, Vancouver, ISO, and other styles
21

Alnaser, Sahban Wa'el Saeed. "Control of distributed generation and storage : operation and planning perspectives." Thesis, University of Manchester, 2015. https://www.research.manchester.ac.uk/portal/en/theses/control-of-distributed-generation-and-storage-operation-and-planning-perspectives(a937e071-4e6b-4a07-a196-031c3b23655f).html.

Full text
Abstract:
Transition towards low-carbon energy systems requires an increase in the volume of renewable Distributed Generation (DG), particularly wind and photovoltaic, connected to distribution networks. To facilitate the connection of renewable DG without the need for expensive and time-consuming network reinforcements, distribution networks should move from passive to active methods of operation, whereby technical network constraints are actively managed in real time. This requires the deployment of control solutions that manage network constraints and, crucially, ensure adequate levels of energy curtailment from DG plants by using other controllable elements to solve network issues rather than resorting to generation curtailment only. This thesis proposes a deterministic distribution Network Management System (NMS) to facilitate the connection of renewable DG plants (specifically wind) by actively managing network voltages and congestion in real time through the optimal control of on-load tap changers (OLTCs), DG power factor and, as a last resort, generation curtailment. The set points for the controllable elements are found using an AC Optimal Power Flow (OPF). The proposed NMS considers realistic modelling of control by adopting one-minute resolution time-series data. To decrease the volume of control actions from DG plants and OLTCs, the proposed approach departs from multi-second control cycles to multi-minute control cycles. To achieve this, the decision-making algorithm is further improved into a risk-based one to handle the uncertainties in wind power throughout the multi-minute control cycles. The performance of the deterministic and the risk-based NMS is compared using a 33 kV UK distribution network for different control cycles. The results show that the risk-based approach can manage network constraints more effectively than the deterministic approach, particularly for multi-minute control cycles, also reducing the number of control actions but at the expense of higher levels of curtailment. This thesis also proposes an energy storage sizing framework to find the minimum power rating and energy capacity of multiple storage facilities to reduce curtailment from DG plants. A two-stage iterative process is adopted in this framework. The first stage uses a multi-period AC OPF across the studied horizon to obtain initial storage sizes considering hourly wind and load profiles. The second stage adopts a high-granularity minute-by-minute control driven by a mono-period bi-level AC OPF to tune the first-stage storage sizes according to the actual curtailment. The application of the proposed planning framework to a 33 kV UK distribution network demonstrates the importance of embedding real-time control aspects into the planning framework so as to accurately size storage facilities. By using the reactive power capabilities of storage facilities it is possible to reduce storage sizes. The combined active management of OLTCs and the power factor of DG plants resulted in the most significant benefits in terms of the required storage sizes.
APA, Harvard, Vancouver, ISO, and other styles
22

Bilek, Zinar. "Performance assessment in district cooling networks using distributed cold storages : A case study." Thesis, KTH, Energiteknik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-281239.

Full text
Abstract:
District cooling is a technology that has been gaining traction lately due to increased demand from the commercial and industrial sectors, especially in dense urban areas such as Stockholm. A literature study found that customers such as hospitals, offices, malls and data centres all depend on both comfort cooling and process cooling. District cooling networks operate by producing cooling energy centrally and distributing it to consumers through underground pipes. The produced cold is transferred via the district cooling network as chilled water, which is pumped through heat exchangers located in the consumer facilities; this maintains the desired temperatures in the consumers' facilities by removing the excess heat. The literature review showed that cold storages, which are thermal energy storages, are used for peak shaving and to help reduce the output of expensive chillers and heat pumps during peak demand hours. The aim of this project is to evaluate the possibilities of using distributed cold storages in a district cooling network as a means to reduce the effects of distribution limitations (bottlenecks) and increase distribution capacity. This project defines distribution limitations as areas with especially low differential pressures. Additionally, the objective is to compare the costs between the scenarios. In this project, Norrenergi AB's district cooling network is used as a case study. Norrenergi AB is an energy company located in Solna that supplies district heating and cooling to customers, mainly in the Solna and Sundbyberg region. The company delivers roughly 1 000 GWh of district heating and 70 GWh of district cooling annually. Three scenarios with various configurations of storage size and location are developed and calculated in the network simulation software NetSim, which allows complex, dynamic simulations of energy networks. According to Norrenergi AB, the criterion for acceptable network operation is that the differential pressure stays between 100 and 800 kPa. In Scenarios 1 & 2, a 15 MW cold storage is implemented in Sundbyberg and Frösunda, respectively. In Scenario 3, two smaller storages with a capacity of 3 MW each are installed in both Sundbyberg and Frösunda. For all scenarios, the energy needed to fully charge the storages is calculated along with the charging/discharging profiles of the storages, which are later used as input in NetSim. In all scenarios, the storages charge during the night and discharge during peak hours. The main conclusion of this thesis is that all scenarios led to cost savings in terms of daily production cost. The daily cost savings for the scenarios were 2.7%, 4.8% and 4.3%, respectively. In addition to this, the effects of distribution limitations in the network are analysed with regard to the differential pressures. The results indicate that although Scenario 3 displayed only the second lowest production cost, it greatly reduced the effects of distribution limitations in key areas compared to Scenario 2, which showed abnormally low differential pressures during peak hours, leading to cooling not being delivered. With these aspects in mind, the deduction is that a combination of a capacity size similar to those of Scenarios 1 & 2, combined with the capacity distribution of Scenario 3, should be the optimal setup in the future.
Furthermore, cold storages can help reduce the use of chillers and thus help reduce the use of harmful refrigerants in the system. Future iterations of this model should consider the possibilities of including new consumers and optimized charging/discharging profiles of the storages. Variations of the temperature difference should be included as well since an increase/reduction of the temperature difference can directly affect storage capacities.
APA, Harvard, Vancouver, ISO, and other styles
23

Ruty, Guillaume. "Towards more scalability and flexibility for distributed storage systems." Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLT006/document.

Full text
Abstract:
The exponentially growing demand for storage puts a huge stress on traditional distributed storage systems. While storage devices' performance has caught up with network devices in the last decade, their capacity does not grow as fast as the rate of data growth, especially with the rise of cloud big data applications. Furthermore, the performance balance between storage, network and compute devices has shifted, and the assumptions that are the foundation of most distributed storage systems are not true anymore. This dissertation explains how several aspects of such storage systems can be modified and rethought to make a more efficient use of the resources at their disposal. It presents an original architecture that uses a distributed layer of metadata to provide flexible and scalable object-level storage, then proposes a scheduling algorithm improving how a generic storage system handles concurrent requests. Finally, it describes how to improve legacy filesystem-level caching for erasure-code-based distributed storage systems, before presenting a few other contributions made in the context of short research projects.
APA, Harvard, Vancouver, ISO, and other styles
24

Arghandeh, Jouneghani Reza. "Distributed Energy Storage Systems: Microgrid Application, Market-Based Optimal Operation and Harmonic Analysis." Diss., Virginia Tech, 2013. http://hdl.handle.net/10919/50603.

Full text
Abstract:
The need for modern electricity infrastructures and more capable grid components brings attention to distributed energy storage systems because of their bidirectional power flow capability. This dissertation focuses on three different aspects of distributed energy storage system applications in distribution networks. It starts with flywheel energy storage system modeling and analysis for application in microgrid facilities. Then, a market-based optimal controller is proposed to enhance the operational profit of distributed energy storage devices in distribution networks. Finally, impact of multiple distributed energy storage devices on harmonic propagation in distribution networks is investigated.
This dissertation provides a comparison between batteries and flywheels for the ride-through application in critical microgrid facilities like data centers. In comparison with batteries, the application of FES for power security is new. This limits the availability of experimental data. The software tool developed in this dissertation enables analysis of short-term, ride-through applications of FES during an islanded operation of a facility microgrid. As a result, it can provide a guideline for facility engineers in data centers or other types of facility microgrids to design backup power systems based on FES technology.
This dissertation also presents a real-time control scheme that maximizes the revenue attainable by distributed energy storage systems without sacrificing the benefits related to improvements in reliability and reduction in peak feeder loading. This optimal control algorithm provides a means for realizing additional benefits by utilities by taking advantage of the fluctuating cost of energy in competitive energy markets. The key drivers of the economic optimization problem for distributed energy storage systems are discussed.
In this dissertation, the impact of distribution network topology on harmonic propagation due to the interaction of multiple harmonic sources is investigated. Understanding how multiple harmonic sources interact to increase or decrease the harmonic distortion is crucial in distribution networks with a large number of Distributed Energy Resources. A new index, Index of Phasor Harmonics (IPH), is proposed for harmonic quantization in multiple harmonic source cases. The proposed IPH index presents more information than commonly used indices. With the help of the detailed distribution network model, topological impacts of harmonic propagation are investigated. In particular, effects of mutual coupling, phase balance, three phase harmonic sources, and single phase harmonic sources are considered.

Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
25

Sultan, Alexis. "Méthodes et outils d'analyse de données de signalisation mobile pour l'étude de la mobilité humaine." Thesis, Evry, Institut national des télécommunications, 2016. http://www.theses.fr/2016TELE0018/document.

Full text
Abstract:
The aim of this thesis is to study human activities through the analysis of the signaling flow in cellular data network (GTP). In order to achieve this goal, we implemented a set of tools allowing us to collect, store and analyze this signaling data. We created an architecture independent at most of hardware manufacturers and network operators. Using data extracted by this platform we made three main contributions. In our first contribution, we present the GTP capture and analysis platform in a mobile operator network. This work intends to list the different elements triggering updates and to estimate the temporal and spatial accuracy of the data collected. Next, we present a set of measures that represent the main characteristics of human mobility observed through the mobile signaling data (the inter-arrival time of update messages, the observed distances of hops from cell to cell made by moving users). Finally, we present the analysis of the compromise that was made between the writing/reading performances and the ease of use of the file format for the data storage. In our second contribution, we propose CT-Mapper, an unsupervised algorithm that enables the mapping of mobile phone traces over a multimodal transport network. One of the main strengths of CT-Mapper is its capability to map noisy sparse cellular multimodal trajectories over a multilayer transportation network where the layers have different physical properties and not only to map trajectories associated with a single layer. Such a network is modeled by a large multilayer graph in which the nodes correspond to metro/train stations or road intersections and edges correspond to connections between them. The mapping problem is modeled by an unsupervised HMM where the observations correspond to sparse user mobile trajectories and the hidden states to the multilayer graph nodes. The HMM is unsupervised as the transition and emission probabilities are inferred using respectively the physical transportation properties and the information on the spatial coverage of antenna base stations. Finally, in our last contribution we propose a method for cellular resource planning taking into account user mobility. Since users move, the bandwidth resource should move accordingly. We design a score based method using TV Whitespace, and user experience, to determine from which cell resource should be removed and to which one it should be added. Combined with traffic history it calculates scores for each cell. Bandwidth is reallocated on a half-day basis. Before that, real traces of cellular networks in urban districts are presented which confirm that static network planning is no longer optimal. A dynamic femtocell architecture is then presented. It is based on mesh interconnected elements and designed to serve the score based bandwidth allocation algorithm. The score method along with the architecture are simulated and results are presented. They confirm the expected improvement in bandwidth and delay per user while maintaining a low operation cost at the operator side. In conclusion, this thesis provides an overview of the potential of analyzing the signaling metadata of a network in a broader context that supervision of an operator network
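As a rough illustration of the mapping step, the sketch below runs a generic Viterbi decoder over a transport graph. The neighbor-uniform transition model and the emit(o, s) signature are illustrative assumptions standing in for CT-Mapper's topology-derived transition and coverage-derived emission probabilities.

```python
import math

def viterbi(obs, nodes, adj, emit):
    """obs: observation sequence; nodes: graph nodes (hidden states);
    adj[s]: neighbors of s, including s itself; emit(o, s): log P(o | s)."""
    V = [{s: emit(obs[0], s) for s in nodes}]      # uniform prior assumed
    back = []
    for o in obs[1:]:
        row, ptr = {}, {}
        for s in nodes:
            # Transitions only along graph edges, uniform over neighbors.
            cands = [(V[-1][p] - math.log(len(adj[p])), p)
                     for p in nodes if s in adj[p]]
            best, prev = max(cands) if cands else (float("-inf"), None)
            row[s], ptr[s] = best + emit(o, s), prev
        V.append(row)
        back.append(ptr)
    state = max(V[-1], key=V[-1].get)              # most likely final node
    path = [state]
    for ptr in reversed(back):                     # backtrack the best path
        state = ptr[state]
        path.append(state)
    return list(reversed(path))
```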
APA, Harvard, Vancouver, ISO, and other styles
26

Gensollen, Nicolas. "Modeling and optimizing a distributed power network : a complex system approach of the "prosumer" management in the smart grid." Thesis, Evry, Institut national des télécommunications, 2016. http://www.theses.fr/2016TELE0019/document.

Full text
Abstract:
This thesis is devoted to the study of agents called prosumers because they can, from renewable sources, both produce and consume electricity. When their production exceeds their own needs, they look to sell their surplus on electricity markets. We propose to model these prosumers from meteorological data, which allowed us to highlight non-trivial spatio-temporal correlations. These are of great importance for aggregators, which form portfolios of equipment in order to sell services to the network operator. As an aggregator is bound by a contract with the operator, it can be subject to penalties if it does not fulfill its role. We show that these correlations impact the stability of aggregates, and therefore the risk taken by aggregators, and we propose an algorithm that minimizes the risk of a set of aggregations while maximizing the expected gain. The placement of storage devices in a network where generators and loads are dynamic and stochastic is complex. We address this question with control theory, modeling the electrical system as a network of coupled oscillators whose phase-angle dynamics approximate the actual dynamics of the system. The goal is to find the subset of nodes in the graph that, during a disturbance of the system, allows a return to equilibrium with minimum energy if the right signals are injected. We propose an algorithm that finds a near-optimal placement minimizing the average control energy.
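A minimal sketch of the coupled-oscillator approximation mentioned above, using a standard swing-equation form; the toy network, coupling matrix K, injections P, and damping D are assumptions, not the thesis's model parameters.

```python
import numpy as np

# Swing-equation form: theta_i'' = P_i - D*theta_i' + sum_j K_ij*sin(theta_j - theta_i)
def simulate(theta0, omega0, P, K, D=0.5, dt=0.01, steps=2000):
    theta, omega = theta0.copy(), omega0.copy()
    for _ in range(steps):
        # Coupling term for node i sums K_ij * sin(theta_j - theta_i) over j.
        coupling = (K * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
        omega += dt * (P - D * omega + coupling)
        theta += dt * omega
    return theta, omega

# Three-bus toy system: one generator (P > 0) and two loads (P < 0).
K = np.array([[0, 2, 2], [2, 0, 1], [2, 1, 0]], dtype=float)
P = np.array([0.6, -0.3, -0.3])   # injections sum to zero
theta, omega = simulate(np.zeros(3), np.zeros(3), P, K)
print("frequency deviations:", omega)  # near zero once synchronized
```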
APA, Harvard, Vancouver, ISO, and other styles
27

Al-Awami, Louai. "Distributed Data Storage System for Data Survivability in Wireless Sensor Networks." Thesis, 2013. http://hdl.handle.net/1974/8403.

Full text
Abstract:
Wireless Sensor Networks (WSNs), built from tiny wireless devices capable of communicating, processing, and sensing, promise to have applications in virtually all fields; smart homes and smart cities are just a few of the examples they can enable. Despite their potential, WSNs suffer from reliability and energy limitations. In this study, we address the problem of designing Distributed Data Storage Systems (DDSSs) for WSNs using decentralized erasure codes. A unique aspect of WSNs is that their data is inherently decentralized, which calls for a decentralized mechanism for encoding and decoding. We propose a distributed data storage framework to increase data survivability in WSNs. The framework utilizes Decentralized Erasure Codes for Data Survivability (DEC-DS), which determine the amount of redundancy required in both hardware and data for sensed data to survive failures in the network. To address the energy limitations, we show two approaches to implement the proposed solution in an energy-efficient manner; both employ Random Linear Network Coding (RLNC) to exploit coding opportunities in order to save energy and in turn prolong network lifetime. A routing-based scheme, DEC Encode-and-Forward (DEC-EaF), applies to networks with routing capability, while the second, DEC Encode-and-Disseminate (DEC-EaD), uses a variation of random walk to build the target code in a decentralized fashion. We also introduce a new decentralized approach to implement Luby Transform (LT)-code-based DDSSs, called Decentralized Robust Soliton Storage (DRSS), which operates in a decentralized fashion and requires no coordination between sensor nodes. The schemes are tested through extensive simulations and compared to similar schemes in the literature, considering energy efficiency as well as coding-related aspects. Using the proposed schemes can greatly improve the reliability of WSNs, especially under harsh working conditions.
Thesis (Ph.D, Electrical & Computer Engineering) -- Queen's University, 2013.
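The decentralized-erasure-code idea can be sketched in a few lines of Python: every storage node keeps a random GF(2) combination of the source packets, and any subset of surviving nodes whose coefficient vectors reach full rank recovers the data. This is a generic illustration, not the DEC-DS, DEC-EaF, or DRSS constructions themselves.

```python
import random

def encode(packets, n, rng=random):
    """Each of n storage nodes keeps a random GF(2) combination of the packets."""
    k = len(packets)
    stored = []
    for _ in range(n):
        coeffs = [rng.randint(0, 1) for _ in range(k)]
        symbol = 0
        for c, p in zip(coeffs, packets):
            if c:
                symbol ^= p          # XOR is addition over GF(2)
        stored.append((coeffs, symbol))
    return stored

def decode(stored, k):
    """Gaussian elimination over GF(2); returns the k packets or None."""
    rows = [c[:] + [s] for c, s in stored]   # augmented coefficient rows
    rank = 0
    for col in range(k):
        pivot = next((i for i in range(rank, len(rows)) if rows[i][col]), None)
        if pivot is None:
            return None                      # survivors not full rank: data lost
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        for i in range(len(rows)):
            if i != rank and rows[i][col]:
                rows[i] = [a ^ b for a, b in zip(rows[i], rows[rank])]
        rank += 1
    return [rows[i][k] for i in range(k)]

packets = [0x11, 0x22, 0x33]                 # k = 3 source packets
stored = encode(packets, n=6)                # spread over 6 nodes
survivors = random.sample(stored, 4)         # two nodes fail
print(decode(survivors, k=3))                # [17, 34, 51] if rank is full
```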
APA, Harvard, Vancouver, ISO, and other styles
28

Castro, Daniel Burnier de. "Simulation of intelligent active distributed networks implementation of storage voltage control." Dissertation, 2008. http://hdl.handle.net/10216/11903.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Castro, Daniel Burnier de. "Simulation of intelligent active distributed networks implementation of storage voltage control." Master's thesis, 2008. http://hdl.handle.net/10216/11903.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Dharmadeep, M. C. "Optimizations In Storage Area Networks And Direct Attached Storage." Thesis, 2007. http://hdl.handle.net/2005/574.

Full text
Abstract:
The thesis consists of three parts.

In the first part, we introduce the notion of device-cache-aware schedulers. Modern disk subsystems have many megabytes of memory for purposes such as prefetching and caching, yet current disk scheduling algorithms make decisions oblivious of the underlying device cache algorithms. We propose a scheduler architecture that is aware of the underlying device cache, and we describe how the device cache parameters can be automatically deduced and incorporated into the scheduling algorithm. We consider only adaptive caching algorithms, as modern high-end disk subsystems are configured to use such algorithms by default. We implemented a prototype based on the Linux anticipatory scheduler; compared with the stock anticipatory scheduler, we observed up to a 3x improvement in query execution times with the Benchw benchmark and up to a 10 percent improvement with the Postmark benchmark.

The second part deals with implementing cooperative caching for the Redhat Global File System (GFS), a clustered shared-disk file system. Coordination between multiple accesses goes through a lock manager: on a read, a lock on the inode is acquired in shared mode and the data is read from disk; for a write, an exclusive lock on the inode is acquired and data is written to disk, which requires all nodes holding the lock to write their dirty buffers/pages to disk and invalidate all related buffers/pages. A DLM (Distributed Lock Manager) is a module that implements the functions of a lock manager; GFS's DLM has some support for range locks, although GFS does not use it. While data sourced from a memory copy is likely to have lower latency, GFS currently reads from the shared disk after acquiring a lock (as in other designs such as IBM's GPFS) rather than from remote memory that recently held the correct contents. The difficulties are mainly due to the circular relationships that can arise between GFS and the generic DLM architecture when integrating the DLM locking framework with cooperative caching: for example, the page/buffer cache should be accessible from the DLM, yet the DLM's generality has to be preserved. The symmetric nature of the DLM (including its SMP concurrency model) makes it even more difficult to understand and to integrate cooperative caching into it (note that GPFS has an asymmetrical design). We describe the design of a cooperative caching scheme in GFS and, to make it more effective, introduce changes to the locking protocol and the DLM to handle range locks more efficiently. Experiments with micro-benchmarks on our prototype implementation reveal that reading from a remote node over gigabit Ethernet can be up to 8 times faster than random reads from an enterprise-class SCSI disk. Our contributions are an integrated design for cooperative caching and the lock manager in GFS, a novel method for interval searches, and a determination of when sequential reads from remote memory perform better than sequential reads from disk.

The third part deals with selecting a primary network partition in a clustered shared-disk system when node or network failures occur. Clustered shared-disk file systems like GFS and GPFS use methods that can fail in the case of multiple network partitions and also in the case of a two-node cluster. In this thesis, we give an algorithm for fault-tolerant proactive leader election in asynchronous shared-memory systems, together with its formal verification. Roughly speaking, a leader election algorithm is proactive if it can tolerate failure of nodes even after a leader is elected, and (stable) leader election happens periodically. This is needed in systems where a leader is required after every failure to ensure the availability of the system and where there may be no explicit events such as messages in the (shared memory) system; previous algorithms like DiskPaxos are not proactive. In our model, individual nodes can fail and reincarnate at any point in time. Each node has a counter which is incremented every period, the period being the same across all nodes (modulo a maximum drift), so different nodes can be in different epochs at the same time. Our algorithm ensures that per epoch there can be at most one leader: if the counter values of some set of nodes match, there can be at most one leader among them. If the nodes satisfy certain timeliness constraints, then the leader for the epoch with the highest counter also becomes the leader for the next epoch (the stability property). Our algorithm uses shared memory proportional to the number of processes, the best possible. We also show how the protocol can be used in clustered shared-disk systems to select a primary network partition. We represented the protocol with the state machine approach in the Isabelle HOL logic system and proved the safety property of the protocol.
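The per-epoch uniqueness guarantee in the third part can be illustrated with a drastically simplified Python sketch, in which a lock stands in for the atomic shared-memory register and all timing, drift, and reincarnation machinery is omitted.

```python
import threading

class EpochRegistry:
    """At most one node can claim leadership of any given epoch."""
    def __init__(self):
        self._lock = threading.Lock()
        self._leader = {}          # epoch -> winning node id

    def try_claim(self, epoch, node_id):
        """Atomically claim `epoch`; returns the id of whoever won it."""
        with self._lock:
            self._leader.setdefault(epoch, node_id)
            return self._leader[epoch]

registry = EpochRegistry()

def node(node_id, epoch):
    if registry.try_claim(epoch, node_id) == node_id:
        print(f"node {node_id} leads epoch {epoch}")

threads = [threading.Thread(target=node, args=(i, 7)) for i in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# Exactly one "leads epoch 7" line prints, whichever node wins the race.
```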
APA, Harvard, Vancouver, ISO, and other styles
31

Owuor, James Odhiambo. "Integration of small hydro distributed generation into distribution networks : a pumped hydro-storage topology." 2014. http://encore.tut.ac.za/iii/cpro/DigitalItemViewPage.external?sp=1001032.

Full text
Abstract:
D. Tech. Electrical Engineering.
The objective of this study is to develop an embedded generator-pump set topology using a wound rotor induction machine based on the doubly fed induction generator concept, together with a synchronous machine electrically and mechanically coupled to it that powers its magnetisation circuit. An adjustable-pitch pump is also coupled to the generating set on the same shaft, providing an embedded generating-pumping solution capable of co-incident generating and pumping functions. The research objectives are as follows: to develop an overall plant topology; to identify plant attributes necessary for proper functionality of the proposed plant; to identify a pumping/generation topology that meets the required electro-mechanical and overall topological layout attribute requirements; to develop a primitive mathematical model of the plant that provides insight into its fundamental physical behaviour; to investigate the stability issues arising from the electromechanical coupling of the two machines used; to establish the controllability of the proposed configuration; to identify factors influencing the stable operation of the proposed plant; and to develop an overall system model for simulation. This also entails developing a suitable mathematical model for the variable-pitch pump and simulating the system's steady-state and dynamic behaviour.
APA, Harvard, Vancouver, ISO, and other styles
32

Chaudhry, Mohammad. "Network Coding in Distributed, Dynamic, and Wireless Environments: Algorithms and Applications." Thesis, 2011. http://hdl.handle.net/1969.1/ETD-TAMU-2011-12-10529.

Full text
Abstract:
Network coding is a new paradigm that has been shown to improve throughput, fault tolerance, and other quality-of-service parameters in communication networks. The basic idea of network coding techniques is to exploit the "mixing" nature of information flows: algebraic operations (e.g., addition, subtraction) can be performed over the data packets, whereas traditionally information flows are treated as physical commodities (e.g., cars) over which algebraic operations cannot be performed. In this dissertation we answer some of the important open questions related to network coding. Our work can be divided into four major parts.

Firstly, we focus on network code design for dynamic networks, i.e., networks with frequently changing topologies and frequently changing sets of users. Examples of such dynamic networks are content distribution networks, peer-to-peer networks, and mobile wireless networks. A change in the network might render a previously feasible network code infeasible, i.e., not all users may still be able to receive their demands. The central problem in the design of a feasible network code is to assign local encoding coefficients for each pair of links in a way that allows every user to decode the required packets. We analyze the problem of maintaining the feasibility of a network code and provide bounds on the number of modifications required under dynamic settings. We also present distributed algorithms for network code design and propose a new path-based assignment of encoding coefficients to construct a feasible network code.

Secondly, we investigate network coding problems in wireless networks. It has been shown that network coding techniques can significantly increase the overall throughput of wireless networks by taking advantage of their broadcast nature: each packet transmitted by a device is broadcast within a certain area and can be overheard by neighboring devices. When a device needs to transmit packets, it can employ Index Coding, which uses the knowledge of what the device's neighbors have overheard in order to reduce the number of transmissions; with Index Coding, each transmitted packet can be a linear combination of the original packets. The Index Coding problem has been proven to be NP-hard, and NP-hard to approximate. We propose an efficient exact solution and several heuristic solutions for the Index Coding problem. Noting that the problem is NP-hard to approximate, we look at it from a novel perspective and define the Complementary Index Coding problem, where the objective is to maximize the number of transmissions that are saved by employing coding compared to the solution that does not involve coding. We prove that the Complementary Index Coding problem can be approximated in several cases of practical importance, investigate the computational complexity of both the multiple unicast and multiple multicast scenarios, and provide polynomial-time approximation algorithms.

Thirdly, we consider the problem of accessing large data files stored at multiple locations across a content distribution, peer-to-peer, or massive storage network. Parts of the data can be stored in either original or encoded form at multiple network locations, and clients access the data through simultaneous downloads from several servers, paying a cost for each link used. A client might not be able to access a subset of servers simultaneously due to network restrictions (e.g., congestion). Furthermore, a subset of the servers might contain correlated data, and accessing such a subset might not increase the amount of information at the client. We present a novel efficient polynomial-time solution for this problem that leverages matroid theory.

Fourthly, we explore applications of network coding for congestion mitigation and overflow avoidance in the global routing stage of Very Large Scale Integration (VLSI) physical design. Smaller and smarter devices have significantly increased the density of on-chip components, which has made congestion and overflow critical issues in on-chip networks. We present novel techniques and algorithms for reducing congestion and minimizing overflows.
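The Index Coding idea admits a compact illustration: the sender may XOR several packets into one transmission whenever each targeted client already holds, as side information, all the other packets in the XOR. The greedy grouping below is a generic heuristic sketch, not one of the thesis's algorithms.

```python
def greedy_index_code(wants, has):
    """wants[c]: packet client c needs; has[c]: set of packets c already holds."""
    pending = dict(wants)
    transmissions = []
    while pending:
        # Start a new coded transmission with an arbitrary pending client.
        group = [next(iter(pending))]
        for c in list(pending):
            if c in group:
                continue
            # c may join if every member can cancel c's packet and vice versa.
            if all(pending[c] in has[m] and pending[m] in has[c] for m in group):
                group.append(c)
        transmissions.append({pending[c] for c in group})  # one XOR-ed send
        for c in group:
            del pending[c]
    return transmissions

wants = {"A": "p1", "B": "p2", "C": "p3"}
has = {"A": {"p2"}, "B": {"p1"}, "C": set()}
print(greedy_index_code(wants, has))  # [{'p1', 'p2'}, {'p3'}]: 2 sends, not 3
```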
APA, Harvard, Vancouver, ISO, and other styles
33

Gonçalves, José António Ribeiro. "Methodology for real impact assessment of the best location of distributed electric energy storage systems." Doctoral thesis, 2016. http://hdl.handle.net/10316/29810.

Full text
Abstract:
Doctoral thesis in Sustainable Energy Systems, presented to the Faculty of Sciences and Technology of the University of Coimbra
This thesis presents a methodology to assist decision makers in assessing feasible solutions for integrating a distributed electric energy storage system (DEESS) in an urban environment, as a tool to provide power and energy services to the electric network. Requiring only data readily available in the Portuguese energy sector, the methodology uses prototype diagrams of electricity demand, electricity prices, and renewable electricity generation to optimize the location of the storage units. The prototype profiles are based on real data and obtained through clustering techniques, and an improved genetic algorithm based on the Non-dominated Sorting Genetic Algorithm-II (NSGA-II) is used for the optimization, allowing the most suitable locations for the DEESS units to be found. The work considers the expected attitudes of the main stakeholders towards DEESS implementation and discusses possible regulatory framework options, such as a feed-in-tariff scheme, to define a DEESS business model that stimulates the appearance of market players willing to invest in energy storage systems. The methodology was applied to a case study using nanophosphate lithium-ion (LiFePO4) battery technology, owing to its increasing use in electricity networks and its advantages over other commercially available technologies. The choice of location relies on a definition of the best schedule of operation while optimizing four objective functions: the minimization of losses, voltage deviations, and investment cost, and the maximization of the net gains from exploiting time-varying energy prices. This last objective includes an externality assessment based on the European emissions trading system, to account for the main associated benefits of DEESS. Results showed that the best DEESS location depends on the energy service to be provided, namely on the goal that defines the management scheme of the storage system. This feature suggests the need to incorporate this level of decision in the multi-objective formulation and makes the developed methodology appropriate for different types of stakeholders, such as a private investor, the DSO, or a public authority.
Fundação para a Ciência e Tecnologia
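The core of the NSGA-II machinery used here is non-dominated sorting, which can be sketched compactly. The candidate objective vectors below are toy values, and all objectives are assumed to be minimized.

```python
def dominates(a, b):
    """True if a is no worse than b in every objective and better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_fronts(points):
    """Peel off successive Pareto fronts, as NSGA-II's sorting step does."""
    fronts, remaining = [], list(points)
    while remaining:
        front = [p for p in remaining
                 if not any(dominates(q, p) for q in remaining if q != p)]
        fronts.append(front)
        remaining = [p for p in remaining if p not in front]
    return fronts

# (losses, voltage deviation) pairs for five candidate storage placements:
candidates = [(0.9, 0.2), (0.5, 0.5), (0.3, 0.9), (0.6, 0.6), (1.0, 1.0)]
print(non_dominated_fronts(candidates))
# Front 1 (the Pareto set): [(0.9, 0.2), (0.5, 0.5), (0.3, 0.9)],
# then [(0.6, 0.6)], then [(1.0, 1.0)].
```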
APA, Harvard, Vancouver, ISO, and other styles
34

Liao, Chen-Hung, and 廖振宏. "A Link Eavesdropping Prevention Problem in Distributed Network Coded Data Storage Systems." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/93535746974018799557.

Full text
Abstract:
Master's thesis
National Chiao Tung University
Institute of Computer Science and Engineering
2012 (ROC year 101)
In recent years, network coding has played a key role in distributed storage systems because of its high reliability, security, and low storage cost. However, network coding-based distributed storage systems face an eavesdropping problem when transmitting repair data from remote datacenters. This problem is especially crucial in distributed network coded storage systems because, compared to conventional replication, more repair bandwidth and more repair links are required. In this thesis, we propose an optimization approach to compute the minimum storage for a required security level. Our numerical results demonstrate an optimal tradeoff between remote repair bandwidth and storage cost. Moreover, we analyze the relation between the security level requirement and the number of remote and local storage nodes, storage cost, data reliability, and secrecy capacity.
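For context, the storage/repair-bandwidth tradeoff underlying such systems is usually anchored by the two extreme points of the regenerating-code cut-set bound of Dimakis et al. The sketch below computes those two classic points; it is not the thesis's security-constrained optimization.

```python
from fractions import Fraction

def msr_point(M, k, d):
    """Minimum-storage extreme: least per-node storage, more repair traffic."""
    return Fraction(M, k), Fraction(M * d, k * (d - k + 1))

def mbr_point(M, k, d):
    """Minimum-bandwidth extreme: repair traffic equals per-node storage."""
    g = Fraction(2 * M * d, k * (2 * d - k + 1))
    return g, g

M, k, d = 12, 3, 5   # file of 12 units, any 3 nodes rebuild it, 5 repair helpers
print(msr_point(M, k, d))   # per-node storage alpha = 4, repair bandwidth gamma = 20/3
print(mbr_point(M, k, d))   # alpha = gamma = 5
```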
APA, Harvard, Vancouver, ISO, and other styles
35

Vadlamani, Lalitha. "Coding Schemes For Distributed Subspace Computation, Distributed Storage And Local Correctability." Thesis, 2015. http://hdl.handle.net/2005/2646.

Full text
Abstract:
In this thesis, three problems are considered and new coding schemes are devised for each: the first relates to distributed function computation, the second to coding for distributed storage, and the third to locally correctable codes. A common theme of the first two problems is distributed computation.

The first problem is motivated by the distributed function computation setting considered by Korner and Marton, where the goal is to compute the XOR of two binary sources at the receiver. It has been shown that linear encoders give better sum rates for some source distributions than the usual Slepian-Wolf scheme. We generalize this setting to more than two sources, with the receiver interested in computing multiple linear combinations of the sources. Consider m random variables, each taking values in a finite field and associated with a joint probability distribution; the receiver is interested in the lossless computation of s linear combinations of the m random variables. Viewing the set of all linear combinations of the m random variables as a vector space V, this can be interpreted as a subspace-computation problem. For this problem, we develop three increasingly refined approaches, all based on linear encoders. The first two, termed the common code approach and the selected subspace approach, use a common matrix to encode all the sources. In the common code approach, the desired subspace W is computed at the receiver, whereas in the selected subspace approach a possibly larger subspace U containing W is computed. The subspace U that minimizes the sum rate is based on a decomposition of V into a chain of subspaces, determined by the joint probability distribution of the m random variables and a notion of normalized measure of entropy. The third approach is a nested code approach, in which all the encoding matrices are nested and the same subspace U identified in the selected subspace approach is computed. We characterize the sum rates under all three approaches. The sum rate under the nested code approach is no larger than under either the selected subspace approach or the Slepian-Wolf approach, and for a large class of joint distributions and subspaces W the nested code scheme improves upon the Slepian-Wolf scheme. Additionally, we identify a class of source distributions and subspaces for which the nested code approach is sum-rate optimal.

In the second problem, we consider a distributed storage network where data is stored across failure-prone nodes, and the goal is to store data reliably and efficiently. For a required level of reliability, it is of interest to minimize the storage overhead and to perform node repair efficiently. Conventionally, replication and maximum distance separable (MDS) codes are employed in such systems. Replication is very efficient for node repair but has high storage overhead; MDS codes have low storage overhead, but even the repair of a single failed node requires contacting a large number of nodes and downloading all their data. We consider two recently proposed coding solutions that enable efficient node repair in case of single node failure: regenerating codes, which seek to minimize the amount of data downloaded for node repair, and codes with locality, which attempt to minimize the number of helper nodes accessed. We extend these results in two directions. In the first, we introduce codes with locality in which the local codes have minimum distance greater than 2 and can hence recover a code symbol locally even in the presence of multiple erasures; these are termed codes with local erasure correction. We say that a code has information locality if there exists a set of message symbols, each of which is covered by local codes, and all-symbol locality if all the code symbols are covered by local codes. An upper bound on the minimum distance of codes with information locality is presented, codes that are optimal with respect to this bound are constructed, and a connection is made between codes with local erasure correction and concatenated codes. The second direction seeks to build codes that combine the advantages of both codes with locality and regenerating codes. These codes, termed codes with local regeneration, are codes with locality over a vector alphabet in which the local codes are themselves regenerating codes. There are two well-known classes of regenerating codes: minimum storage regenerating (MSR) codes and minimum bandwidth regenerating (MBR) codes. We derive two upper bounds on the minimum distance of vector-alphabet codes with locality, one for the case when the local codes are MSR codes and one for the case when they are MBR codes, and we provide several optimal constructions of both classes that achieve their respective minimum distance bounds with equality.

The third problem deals with locally correctable codes. A block code of length n is said to be locally correctable if there exists a randomized algorithm such that any coordinate of the codeword can be recovered by querying at most r coordinates, even in the presence of some fraction of errors. We study the local correctability of linear codes whose duals contain 4-designs, and we derive a bound relating r to the fraction of errors that can be tolerated when each instance of the randomized algorithm is t-error correcting instead of a simple parity computation.
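One well-known bound in this line of work, the minimum-distance bound for (r, delta) information locality (each information symbol lies in a local code of length at most r + delta - 1 and minimum distance at least delta), is simple enough to evaluate directly; the parameters below are toy values.

```python
from math import ceil

def locality_distance_bound(n, k, r, delta):
    """d <= n - k + 1 - (ceil(k/r) - 1) * (delta - 1).

    With delta = 2 this reduces to the familiar d <= n - k - ceil(k/r) + 2
    bound for codes with locality r.
    """
    return n - k + 1 - (ceil(k / r) - 1) * (delta - 1)

print(locality_distance_bound(n=14, k=8, r=4, delta=2))  # 6
print(locality_distance_bound(n=14, k=8, r=4, delta=3))  # 5: stronger local codes cost distance
```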
APA, Harvard, Vancouver, ISO, and other styles
36

Chan, Yi-Sheng, and 詹義勝. "A Collusion Avoidance Node Selection Scheme for Social network-based distributed data storage." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/3ccsgd.

Full text
Abstract:
Master's thesis
National Chung Hsing University
Department of Electrical Engineering
2017 (ROC year 106)
In an era of information explosion, cloud storage and P2P (peer-to-peer) distributed storage techniques are used to solve the problems of data storage and sharing. However, distributed storage techniques still face privacy issues. Although cloud storage stores data in a distributed manner, its administration and management remain centralized, which raises questions of trustworthiness and of being monitored; whether the administrator has adequate data protection capabilities is another problem. On the other hand, P2P distributed storage systems are vulnerable to privacy leakage because data are stored on multiple independent nodes that are neither trustworthy nor stable. In this thesis, motivated by the privacy protection requirements of a traffic accident emergency rescue system, we propose a collusion avoidance node selection scheme for social network-based distributed data storage to protect privacy in a distributed and unstable environment. Specifically, we use a fountain code to encode the data, generate a sufficient number of coded symbols, and store them in a distributed manner; this approach addresses privacy preservation and node instability simultaneously. To improve reliability, we select nodes from the owner's social network as data keepers and design a trust score function to evaluate nodes in the selection algorithm. We further apply game-theoretic concepts to prevent collusion between data keepers. In the experiments, we use real datasets to study the performance of the proposed approach. The results show that our approach outperforms the Best Score Selection and Random Selection methods, avoids collusion effectively, and improves data privacy.
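The fountain-code layer can be sketched with a minimal LT encoder. The ideal soliton degree distribution below is a simplification of the robust soliton distribution commonly used in practice, and the packet values are toy data.

```python
import random

def soliton(k):
    """Ideal soliton: P(1) = 1/k, P(d) = 1/(d(d-1)) for 2 <= d <= k."""
    return [0.0, 1.0 / k] + [1.0 / (d * (d - 1)) for d in range(2, k + 1)]

def lt_symbol(packets, rng=random):
    """One encoded symbol: XOR of a soliton-sampled subset of source packets."""
    k = len(packets)
    degree = rng.choices(range(1, k + 1), weights=soliton(k)[1:])[0]
    chosen = rng.sample(range(k), degree)
    symbol = 0
    for i in chosen:
        symbol ^= packets[i]
    return sorted(chosen), symbol    # the neighbor list travels with the symbol

packets = [0x5A, 0x3C, 0x77, 0x01]
for _ in range(3):                   # coded symbols a data keeper might hold
    print(lt_symbol(packets))
```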
APA, Harvard, Vancouver, ISO, and other styles
37

Cruz, Marco Rafael Meneses. "Benefits of coordinating distribution network reconfiguration with distributed generation and energy storage systems." Dissertation, 2016. https://repositorio-aberto.up.pt/handle/10216/84368.

Full text
Abstract:
This work assesses the importance of reconfiguration in an electricity distribution network, as well as the integration (location and sizing) and the impacts that distributed generation can have on the network, and the location and sizing of energy storage systems to counter the growing penetration of distributed generation, which brings unpredictability to energy production. The main purpose of reconfiguration and energy storage systems is to help integrate ever more distributed generation of renewable origin into the network.
APA, Harvard, Vancouver, ISO, and other styles
38

Cruz, Marco Rafael Meneses. "Benefits of coordinating distribution network reconfiguration with distributed generation and energy storage systems." Master's thesis, 2016. https://repositorio-aberto.up.pt/handle/10216/84368.

Full text
Abstract:
This work assesses the importance of reconfiguration in an electricity distribution network, as well as the integration (location and sizing) and the impacts that distributed generation can have on the network, and the location and sizing of energy storage systems to counter the growing penetration of distributed generation, which brings unpredictability to energy production. The main purpose of reconfiguration and energy storage systems is to help integrate ever more distributed generation of renewable origin into the network.
APA, Harvard, Vancouver, ISO, and other styles
39

Flouris, Michail D. "Extensible Networked-storage Virtualization with Metadata Management at the Block Level." Thesis, 2009. http://hdl.handle.net/1807/17759.

Full text
Abstract:
Increased scaling costs and a lack of desired features are leading high-performance storage systems to evolve from centralized architectures and specialized hardware to decentralized commodity storage clusters. Existing systems try to address storage cost and management issues at the filesystem level. Besides dictating the use of a specific filesystem, however, this approach leads to increased complexity and load imbalance towards the file-server side, which in turn increases the cost to scale. In this thesis, we examine these problems at the block level. This approach has several advantages, such as transparency, cost-efficiency, better resource utilization, simplicity, and easier management. First, we explore the mechanisms, merits, and overheads associated with advanced metadata-intensive functionality at the block level by providing versioning at the block level. We find that block-level versioning has low overhead and offers transparency and simplicity advantages over filesystem-based approaches. Second, we study the problem of providing the extensibility required by the diverse and changing needs of applications that may share a single storage system. We provide support for (i) adding desired functions as block-level extensions, and (ii) flexibly combining them to create modular I/O hierarchies. In this direction, we design, implement, and evaluate an extensible block-level storage virtualization framework, Violin, with support for metadata-intensive functions. Extending Violin, we build Orchestra, an extensible framework for cluster storage virtualization and scalable storage sharing at the block level. We show that Orchestra's enhanced block interface can substantially simplify the design of higher-level storage services, such as cluster filesystems, while remaining scalable. Finally, we consider the problem of consistency and availability in decentralized commodity clusters. We propose RIBD, a novel storage system that supports handling both data and metadata consistency issues at the block layer. RIBD uses the notion of consistency intervals (CIs) to provide fine-grained consistency semantics over sequences of block-level operations by means of a lightweight transactional mechanism. RIBD relies on Orchestra's virtualization mechanisms and uses a roll-back recovery mechanism based on low-overhead block-level versioning. We evaluate RIBD on a cluster of 24 nodes and find that it performs comparably to two popular cluster filesystems, PVFS and GFS, while offering stronger consistency guarantees.
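As an illustration of why block-level versioning can stay simple and transparent, here is a minimal copy-on-write sketch; Violin's actual metadata layout is certainly more involved than this.

```python
class VersionedBlockDevice:
    """Copy-on-write block store: writes never overwrite old data in place;
    each version maps logical block numbers to physical positions."""

    def __init__(self):
        self._store = []          # append-only physical blocks
        self._versions = [{}]     # version -> {logical block: physical index}

    def write(self, lbn, data):
        self._store.append(data)
        self._versions[-1][lbn] = len(self._store) - 1

    def snapshot(self):
        """Freeze the current mapping; returns the frozen version's id."""
        self._versions.append(dict(self._versions[-1]))
        return len(self._versions) - 2

    def read(self, lbn, version=None):
        vmap = self._versions[-1 if version is None else version]
        return self._store[vmap[lbn]]

dev = VersionedBlockDevice()
dev.write(0, b"v1 of block 0")
snap = dev.snapshot()
dev.write(0, b"v2 of block 0")
print(dev.read(0))        # b'v2 of block 0' (current version)
print(dev.read(0, snap))  # b'v1 of block 0' (old version still intact)
```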
APA, Harvard, Vancouver, ISO, and other styles
40

Chang, Chun-Fu, and 張淳甫. "Design and Implementation of an Ontology-based Distributed RDF Store Based on Chord Network." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/09631080434308445619.

Full text
Abstract:
Master's thesis
Tatung University
Department of Computer Science and Engineering
2008 (ROC year 97)
RDF and OWL are used as the data model and schema, respectively, to build the distributed knowledge base of the Semantic Web. The components of RDF models (subjects, predicates, and objects) are identified universally on the web, which makes RDF suitable for distributed operations. In this paper, we employ distributed hash table (DHT) technology on a peer-to-peer network to develop a distributed RDF store. To take the ontology behind RDF into account, we extend the Chord ring to a two-level ring, where the first level is based on the ontology schema and the second on the RDF data itself. The extension retains the O(log N) complexity of maintenance and lookup messages in an N-node system. Simulation results show that adding the additional level reduces the path length of message lookups. We design a three-layered system architecture for the ontology-based distributed RDF store, and we are developing a prototype according to this design to show how the two-level ring works.
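For reference, a single Chord-style ring lookup with finger tables can be sketched as follows; the thesis stacks two such rings, one keyed by the ontology schema and one by the RDF data. The node ids and the 4-bit id space here are toy values.

```python
M = 4                                   # toy id space: 2**M = 16 identifiers
nodes = sorted([1, 4, 9, 11, 14])       # node ids placed on the ring

def successor(ident):
    for n in nodes:
        if n >= ident:
            return n
    return nodes[0]                     # wrap around the ring

def fingers(n):
    """Finger i of node n points at successor(n + 2**i)."""
    return [successor((n + 2 ** i) % 2 ** M) for i in range(M)]

def between(x, a, b):
    """x lies in the open ring interval (a, b)."""
    return a < x < b if a < b else (x > a or x < b)

def lookup(n, key, hops=0):
    succ_n = successor((n + 1) % 2 ** M)        # n's immediate successor
    if between(key, n, succ_n) or key == succ_n:
        return succ_n, hops + 1                 # succ_n stores the key
    preds = [f for f in fingers(n) if between(f, n, key)]
    nxt = preds[-1] if preds else succ_n        # closest preceding finger
    return lookup(nxt, key, hops + 1)

print(lookup(1, 10))   # (11, 2): node 11 stores key 10, reached in 2 hops
```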
APA, Harvard, Vancouver, ISO, and other styles
41

Ramaraj, Sugirdhalakshmi. "A HYBRID NETWORK FLOW ALGORITHM FOR THE OPTIMAL CONTROL OF LARGE-SCALE DISTRIBUTED ENERGY SYSTEMS." Thesis, 2020.

Find full text
Abstract:
This research focuses on developing strategies for the optimal control of large-scale Combined Cooling, Heating and Power (CCHP) systems to meet electricity, heating, and cooling demands, and on evaluating the associated cost-savings potential. Optimal control of CCHP systems involves determining the mode of operation and set points that satisfy the specific energy requirements of each time period. Designing effective optimal control strategies is complex because of the stochastic behavior of energy loads and fuel prices, varying component designs and operational limitations, startup and shutdown events, and other factors. For large-scale systems, the problem involves a large number of decision variables, both discrete and continuous, and numerous constraints, along with the nonlinear performance characteristic curves of equipment. In general, the CCHP energy dispatch problem is intrinsically difficult to solve because of the non-convex, non-differentiable, multimodal, and discontinuous nature of the optimization problem, along with strong coupling to multiple energy components.

This work presents a solution methodology for optimizing the operation of a campus CCHP system using a detailed network energy flow model solved by a hybrid approach combining mixed-integer linear programming (MILP) and nonlinear programming (NLP) optimization techniques. In the first step, MILP optimization is applied to a plant model that includes linear models for all components and a penalty for turning boilers and steam chillers on or off. The MILP step determines which components need to be turned on and their respective loads to meet the campus energy demand for the chosen time period (short, medium, or long term) at one-hour resolution. Using the MILP solution as a starting point, the NLP optimization determines the actual hourly operating state of the selected components based on their nonlinear performance characteristics. The optimal energy dispatch algorithm provides operational signals for resource allocation, ensuring that the systems meet campus electricity, heating, and cooling demands. The chief benefits of this formulation are its ability to determine the optimal mix of equipment with on/off capabilities and penalties for startup and shutdown, its consideration of the cost of all auxiliary equipment, and its applicability to large-scale energy systems with multiple heating, cooling, and power generation units, resulting in improved performance.

The case study considered in this research is the Wade Power Plant and the Northwest Chiller Plant (NWCP) located at the main campus of Purdue University in West Lafayette, Indiana, USA. Electricity, steam, and chilled water are produced through a CCHP system to meet the campus electricity, heating, and cooling demands. The hybrid approach is validated against plant measurements and then used, under the assumption of perfect load forecasts, to evaluate the economic benefits of optimal control subject to different operational conditions and fuel prices. Example cost optimizations were performed for a 24-hour period with known cooling, heating, and electricity demand of Purdue's main campus, based on actual real-time prices (RTP) for purchasing electricity from the utility. Three optimization cases were considered: MILP without an on/off switch penalty (SP), MILP including an on/off switch penalty, and NLP optimization. Around 3.5% cost savings is achievable with both MILP cases, while almost 10.7% cost savings is achieved using the hybrid MILP-NLP approach, compared to current plant operation. For the components selected by the MILP optimization, the NLP balances equipment performance so that each unit operates at the state point of maximum efficiency while still meeting the demand. Using this hybrid approach, a high-quality global solution is determined when the linear model is feasible, while still taking the nonlinear nature of the problem into account.

Simulations were extended to different seasons to examine the sensitivity of the optimization results to differences in electricity, heating, and cooling demand. All the optimization results suggest opportunities for cost savings across all seasons compared to current plant operation; for a large CCHP plant, this could mean significant savings over a year. The impact of choosing different time ranges is studied for the MILP optimization, because any change in the MILP outputs affects the solutions of the NLP optimization. Sensitivity analysis of the optimized results to the costs of purchased electricity and natural gas was performed to illustrate the operational switching between steam-driven and electric-driven components, between generating and purchasing electricity, and between coal and natural gas boilers that occurs under optimal operation. Finally, a modular, generalizable, easy-to-configure optimization framework for the cost-optimal control of large-scale combined cooling, heating and power systems is developed and evaluated.
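The two-step structure can be illustrated with a toy dispatch problem: an outer discrete search over unit commitments (the MILP's role, brute-forced here since there are only two units) followed by a continuous split of the load against nonlinear fuel curves (the NLP's role). The units, cost curves, and demand below are invented for illustration, and SciPy is assumed to be available.

```python
from itertools import product
from scipy.optimize import minimize_scalar

units = {
    "boiler_A": {"min": 10.0, "max": 60.0, "cost": lambda q: 2.0 * q + 0.020 * q**2},
    "boiler_B": {"min": 5.0, "max": 40.0, "cost": lambda q: 2.5 * q + 0.005 * q**2},
}
demand = 55.0

best = None
for on in product([False, True], repeat=len(units)):
    active = [u for u, flag in zip(units.values(), on) if flag]
    lo = sum(u["min"] for u in active)
    hi = sum(u["max"] for u in active)
    if not (lo <= demand <= hi):
        continue  # this commitment cannot serve the load
    if len(active) == 1:
        cost, split = active[0]["cost"](demand), [demand]
    else:
        a, b = active
        # One free variable q on unit a; unit b covers the remainder.
        qlo = max(a["min"], demand - b["max"])
        qhi = min(a["max"], demand - b["min"])
        res = minimize_scalar(lambda q: a["cost"](q) + b["cost"](demand - q),
                              bounds=(qlo, qhi), method="bounded")
        cost, split = res.fun, [res.x, demand - res.x]
    if best is None or cost < best[0]:
        best = (cost, on, split)

print(best)  # lowest-cost commitment and dispatch for the 55-unit load
```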
APA, Harvard, Vancouver, ISO, and other styles