Dissertations / Theses on the topic 'Document network'
Consult the top 50 dissertations / theses for your research on the topic 'Document network.'
Hasan, Mohammed Jaffer. "Document legalisation : a new approach to the document legalisation process using enterprise network technology." Thesis, Middlesex University, 2012. http://eprints.mdx.ac.uk/9875/.
De Bacco, Caterina. "Decentralized network control, optimization and random walks on networks." Thesis, Paris 11, 2015. http://www.theses.fr/2015PA112164/document.
In recent years, several problems have been studied at the interface between statistical physics and computer science, because they can often be reinterpreted in the language of the physics of disordered systems, where a large number of variables interacts through local fields that depend on the state of the surrounding neighbourhood. Among the numerous applications of combinatorial optimisation, optimal routing on communication networks is the subject of the first part of the thesis. We exploit the cavity method to formulate efficient message-passing algorithms and thus solve several variants of the problem through its numerical implementation. We then describe a model that approximates the dynamic version of the cavity method, reducing the complexity of the problem from exponential to polynomial in time; this is obtained by using the Matrix Product State formalism of quantum mechanics. Another topic that has attracted much interest in the statistical physics of dynamic processes is the random walk on networks. The theory has been developed over many years for the case where the underlying topology is a d-dimensional lattice; the case of random networks, by contrast, has been tackled only in the past decade, leaving many questions open. Unravelling several aspects of this topic is the subject of the second part of the thesis. In particular, we study the average number of distinct sites visited during a random walk and characterize its behaviour as a function of the graph topology. Finally, we address the rare-event statistics associated with random walks on networks using the large-deviations formalism; two types of dynamic phase transitions arise from numerical simulations, unveiling important aspects of these problems. We conclude by outlining the main results of an independent work developed in the context of out-of-equilibrium physics: a solvable system made of two Brownian particles surrounded by a thermal bath is studied, providing details about a bath-mediated interaction arising from the presence of the bath.
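The average number of distinct sites visited is easy to estimate numerically. As an aside, the sketch below (our own illustration, not code from the thesis; graph size and parameters are arbitrary) measures it on an Erdős–Rényi random graph using only the standard library:

```python
import random

def build_er_graph(n, p, seed=0):
    """Adjacency lists for an Erdos-Renyi G(n, p) random graph."""
    rng = random.Random(seed)
    adj = {v: [] for v in range(n)}
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:
                adj[u].append(v)
                adj[v].append(u)
    return adj

def mean_distinct_sites(adj, steps, walks=200, seed=1):
    """Average number of distinct nodes visited by a `steps`-step random walk,
    estimated over `walks` independent walks from random starting nodes."""
    rng = random.Random(seed)
    nodes = [v for v in adj if adj[v]]  # start only from non-isolated nodes
    total = 0
    for _ in range(walks):
        v = rng.choice(nodes)
        visited = {v}
        for _ in range(steps):
            v = rng.choice(adj[v])  # hop to a uniformly random neighbour
            visited.add(v)
        total += len(visited)
    return total / walks

adj = build_er_graph(200, 0.05)
print(mean_distinct_sites(adj, 50))
```

On such a well-connected graph the walk rarely revisits a node, so the count grows almost linearly in the number of steps; on sparser or more clustered topologies it grows much more slowly, which is the kind of topology dependence the thesis characterizes.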
Li, Yue. "Edge computing-based access network selection for heterogeneous wireless networks." Thesis, Rennes 1, 2017. http://www.theses.fr/2017REN1S042/document.
Telecommunication networks have evolved from 1G to 4G over the past decades. One typical characteristic of the 4G network is the coexistence of heterogeneous radio access technologies, which offers end-users the capability to connect to them, and to switch between them, with new-generation mobile devices. However, selecting the right network is not an easy task for mobile users, since access network conditions change rapidly. Moreover, video streaming is becoming the major data service over the mobile network, where content providers and network operators should cooperate to guarantee the quality of video delivery. In this context, the thesis concerns the design of a novel approach to making optimal network selection decisions, and of an architecture for improving the performance of adaptive streaming in a heterogeneous network. Firstly, we introduce an analytical model (a linear discrete-time system) to describe the network selection procedure for a single traffic class. We then design a selection strategy based on linear optimal control theory, with the objective of maximizing network resource utilization while meeting the constraints of the supported services. Computer simulations with MATLAB are carried out to validate the efficiency of the proposed mechanism. Based on the same principle, we extend this model to a general analytical model describing network selection procedures in heterogeneous network environments with multiple traffic classes. The proposed model is then used to derive a scalable control-theoretic mechanism, which not only helps steer traffic dynamically to the most appropriate network access but also blocks residual traffic when the network is congested, by dynamically adjusting the access probabilities. We discuss the advantages of a seamless integration with the ANDSF.
A prototype is also implemented in ns-3. Simulation results show that the proposed scheme prevents network congestion and demonstrate the effectiveness of the controller design, which maximizes network resource allocation by converging the network workload to the targeted occupancy. Thereafter, we focus on enhancing the performance of DASH in a mobile network environment for users with a single access network. We introduce a novel architecture based on MEC. The proposed adaptation mechanism, running as an MEC service, can modify manifest files in real time in response to network congestion and dynamic demand, thus driving clients towards selecting more appropriate quality/bitrate video representations. We have developed a virtualized testbed to run experiments with the proposed scheme; the simulation results demonstrate its QoE benefits compared to traditional, purely client-driven bitrate adaptation approaches, since our scheme notably improves both the achieved MOS and fairness in the face of congestion. Finally, we extend the proposed MEC-based architecture to support the DASH service in a multi-access heterogeneous network, in order to maximize the QoE and fairness of mobile users. In this scenario, the scheme should help users select both video quality and access network, which we formulate as an optimization problem. This problem can be solved with the IBM CPLEX tool; however, that approach is time-consuming and does not scale, so we introduce a heuristic algorithm that computes a sub-optimal solution with less complexity. We then implement a testbed to conduct experiments, and the results demonstrate that the proposed algorithm achieves similar overall QoE and fairness with far less computation time than the IBM CPLEX tool.
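The flavour of the control-theoretic admission mechanism described above can be conveyed with a toy proportional controller that drives occupancy to a target by adjusting the access probability (a drastic simplification for illustration only, not the thesis's model; all numbers are invented):

```python
def admission_control(occupancy, target, p, gain=0.5):
    """One proportional-control step: lower the admission probability when
    occupancy exceeds the target, raise it otherwise, clamped to [0, 1]."""
    p = p + gain * (target - occupancy)
    return min(1.0, max(0.0, p))

# toy closed loop: occupancy is the offered load admitted with probability p,
# capped at the network's capacity (normalized to 1.0)
p, load, target = 1.0, 1.6, 0.8
for _ in range(30):
    occupancy = min(1.0, load * p)
    p = admission_control(occupancy, target, p)
print(round(min(1.0, load * p), 2))  # settles at the 0.8 target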
Benfattoum, Youghourta. "Network coding for quality of service in wireless multi-hop networks." Thesis, Paris 11, 2012. http://www.theses.fr/2012PA112267/document.
In this thesis we deal with the application of Network Coding to guarantee Quality of Service (QoS) in wireless multi-hop networks. Since the medium is shared, wireless networks suffer from the negative impact of interference on bandwidth. It is thus interesting to propose a Network Coding based approach that takes this interference into account during the routing process. In this context, we first propose an algorithm that minimizes the interference impact for unicast flows while respecting their required bandwidth. We then combine it with Network Coding to increase the number of admitted flows, and with Topology Control to further improve interference management. We show by simulation the benefit of combining the three fields: Network Coding, interference awareness and Topology Control. We also deal with delay management for multicast flows and use Generation-Based Network Coding (GBNC), which combines packets in blocks. Most works on GBNC consider a fixed generation size; because of network state variations, the delay for decoding and recovering a block of packets can vary accordingly, degrading the QoS. To solve this problem, we propose a network- and content-aware method that adjusts the generation size dynamically to respect a given decoding delay, and we enhance it to overcome the issue of acknowledgement loss. We then apply our approach in a Home Area Network for live TV and video streaming; our solution provides QoS and Quality of Experience for the end user with no additional equipment. Finally, we focus on a more theoretical contribution in which we present a new Butterfly-based network for multi-source multi-destination flows. We characterize the source node buffer size using queuing theory and show that it matches the simulation results.
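The core idea of network coding — a relay forwarding combinations of packets rather than the packets themselves — reduces, in the textbook butterfly example, to a single XOR (a standard illustration, not the thesis's construction; packet contents are made up and assumed equal-length):

```python
def xor_bytes(a, b):
    """Bitwise XOR of two equal-length packets."""
    return bytes(x ^ y for x, y in zip(a, b))

p1, p2 = b"ALPHA", b"BRAVO"   # packets from two sources
coded = xor_bytes(p1, p2)     # the relay sends one coded packet instead of two
print(xor_bytes(coded, p2))   # a receiver that already holds p2 recovers p1
```

Each receiver combines the coded packet with the packet it already overheard, so the bottleneck link carries one transmission instead of two — the capacity gain that the thesis builds on.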
Varloot, Rémi. "Dynamic network formation." Thesis, Paris Sciences et Lettres (ComUE), 2018. http://www.theses.fr/2018PSLEE048/document.
This thesis focuses on the rapid mixing of graph-related Markov chains. The main contribution concerns graphs with local edge dynamics, in which the topology of a graph evolves as edges slide along one another. We propose a classification of existing models of dynamic graphs, and illustrate how evolving along a changing structure improves the convergence rate. This is complemented by a proof of the rapid mixing time for one such dynamic. As part of this proof, we introduce the partial expansion of a graph. This notion allows us to track the progression of the dynamic, from a state with poor expansion to good expansion at equilibrium. The end of the thesis proposes an improvement of the Propp and Wilson perfect sampling technique. We introduce oracle sampling, a method inspired by importance sampling that reduces the overall complexity of the Propp and Wilson algorithm. We provide a proof of correctness, and study the performance of this method when sampling independent sets from certain graphs.
Blein, Florent. "Automatic Document Classification Applied to Swedish News." Thesis, Linköping University, Department of Computer and Information Science, 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-3065.
The first part of this paper briefly presents the ELIN [1] system, an electronic newspaper project. ELIN is a framework that stores news items and displays them to the end-user. The news items are formatted in XML [2]. The project partner Corren [3] provided ELIN with XML articles, but in a different format; my first task was therefore to develop software that converts the news from one XML format (Corren's) to the other (ELIN's).
The second and main part addresses the problem of automatic document classification for a specific task: automatically classifying news articles from a Swedish newspaper company (Corren) into the IPTC [4] news categories.
This work was carried out by implementing several classification algorithms, testing them and comparing their accuracy with existing software. The training and test documents were three weeks of Corren newspaper articles that had to be classified into two categories.
The last tests were run with a single algorithm (Naïve Bayes) over a larger amount of data (7, then 10 weeks) and more categories (12) to simulate a more realistic environment.
The results show that the Naïve Bayes algorithm, although the oldest, was the most accurate in this particular case. An issue raised by the results is that feature selection improves speed but can reduce accuracy when too many features are removed.
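The Naïve Bayes classifier evaluated in this thesis is simple enough to sketch from scratch. The following is a generic multinomial implementation with add-one (Laplace) smoothing on invented toy data — our illustration, not the thesis's code or dataset:

```python
import math
from collections import Counter, defaultdict

def train_nb(docs):
    """docs: list of (tokens, label). Returns class priors' counts,
    per-class word counts, and the vocabulary."""
    class_docs = Counter()
    word_counts = defaultdict(Counter)
    vocab = set()
    for tokens, label in docs:
        class_docs[label] += 1
        word_counts[label].update(tokens)
        vocab.update(tokens)
    return class_docs, word_counts, vocab

def classify_nb(tokens, class_docs, word_counts, vocab):
    """Multinomial Naive Bayes with add-one smoothing, in log space."""
    n_docs = sum(class_docs.values())
    best, best_score = None, float("-inf")
    for label, n in class_docs.items():
        score = math.log(n / n_docs)  # log prior
        total = sum(word_counts[label].values())
        for w in tokens:
            score += math.log((word_counts[label][w] + 1) / (total + len(vocab)))
        if score > best_score:
            best, best_score = label, score
    return best

docs = [
    ("match goal team win".split(), "sports"),
    ("team season coach play".split(), "sports"),
    ("election vote minister law".split(), "politics"),
    ("parliament law vote debate".split(), "politics"),
]
model = train_nb(docs)
print(classify_nb("team win play".split(), *model))  # → sports
```

Smoothing is what lets the classifier handle words unseen in a given class, which matters at the scale of the 7-to-10-week corpora mentioned above.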
Tsai, Chun-I. "A Study on Neural Network Modeling Techniques for Automatic Document Summarization." Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-395940.
Full textLyazidi, Mohammed Yazid. "Dynamic resource allocation and network optimization in the Cloud Radio Access Network." Thesis, Paris 6, 2017. http://www.theses.fr/2017PA066549/document.
Cloud Radio Access Network (C-RAN) is a future direction in wireless communications for deploying cellular radio access subsystems in current 4G and next-generation 5G networks. In the C-RAN architecture, BaseBand Units (BBUs) are located in a pool of virtual base stations, which are connected via a high-bandwidth, low-latency fronthaul network to Remote Radio Heads (RRHs). In comparison to standalone clusters of distributed radio base stations, the C-RAN architecture provides significant benefits in terms of centralized resource pooling, network flexibility and cost savings. In this thesis, we address the problem of dynamic resource allocation and power minimization in downlink communications for C-RAN. Our research aims to allocate baseband resources to dynamic flows of mobile users, while properly assigning RRHs to BBUs to accommodate traffic and network demands. This is a non-linear NP-hard optimization problem, which encompasses many constraints such as mobile users' resource demands, interference management, BBU pool and fronthaul link capacities, as well as maximum transmission power limitations. To overcome the high complexity involved in this problem, we present several approaches for resource allocation strategies and tackle the issue in three stages. The results obtained prove the efficiency of the proposed strategies in terms of throughput satisfaction rate, number of active RRHs, BBU pool processing power, resiliency, and operational budget cost.
Mangili, Michele. "Efficient in-network content distribution : wireless resource sharing, network planning, and security." Thesis, Université Paris-Saclay (ComUE), 2015. http://www.theses.fr/2015SACLS182/document.
In recent years, the amount of traffic that Internet users generate on a daily basis has increased exponentially, mostly due to the worldwide success of video streaming services such as Netflix and YouTube. While Content-Delivery Networks (CDNs) are the de facto standard used nowadays to serve ever-increasing user demand, the scientific community has formulated proposals known under the name of Content-Centric Networks (CCN) to change the network protocol stack in order to turn the network itself into a content distribution infrastructure. In this context, this Ph.D. thesis studies efficient techniques to foster content distribution, taking into account three complementary problems:
1) We consider the scenario of a wireless heterogeneous network, and formulate a novel mechanism to motivate wireless access point owners to lease their unexploited bandwidth and cache storage in exchange for an economic incentive.
2) We study the centralized network planning problem: (I) we analyze the migration to CCN; (II) we compare the performance bounds for a CDN with those of a CCN; and (III) we consider a virtualized CDN and study the stochastic planning problem for such an architecture.
3) We investigate the security properties of access control and trackability, and formulate ConfTrack-CCN: a CCN extension to enforce confidentiality, trackability and access-policy evolution in the presence of distributed caches.
Macpherson, Janet Robertson. "Implications of the inclusion of document retrieval systems as actors in a social network." Thesis, University of North Texas, 2005. https://digital.library.unt.edu/ark:/67531/metadc4913/.
Full textMazel, Johan. "Unsupervised network anomaly detection." Thesis, Toulouse, INSA, 2011. http://www.theses.fr/2011ISAT0024/document.
Anomaly detection has become a vital component of any network in today's Internet. Ranging from non-malicious unexpected events such as flash crowds and failures, to network attacks such as denials-of-service and network scans, network traffic anomalies can have serious detrimental effects on the performance and integrity of the network. The continual emergence of new anomalies and attacks creates an ongoing challenge of coping with events that put network integrity at risk. Moreover, the polymorphic nature of traffic, caused among other things by a rapidly changing protocol landscape, complicates the task of anomaly detection systems. In fact, most network anomaly detection systems proposed so far employ knowledge-dependent techniques, using either signature-based misuse detection methods or anomaly detection relying on supervised learning techniques. However, both approaches present major limitations: the former fails to detect and characterize unknown anomalies (leaving the network unprotected for long periods), and the latter requires training over labelled normal traffic, a difficult and expensive stage that needs to be updated regularly to follow network traffic evolution. Such limitations impose a serious bottleneck on the problem presented above. We introduce an unsupervised approach to detect and characterize network anomalies, without relying on signatures, statistical training or labelled traffic, which represents a significant step towards the autonomy of networks. Unsupervised detection is accomplished by means of robust data-clustering techniques, combining sub-space clustering with evidence accumulation or inter-clustering-result association, to blindly identify anomalies in traffic flows. The results of several unsupervised detections are also correlated to improve detection robustness.
The correlation results are further used, along with other anomaly characteristics, to build an anomaly hierarchy in terms of dangerousness. Characterization is then achieved by building efficient filtering rules to describe a detected anomaly. The detection and characterization performance, and its sensitivity to parameters, are evaluated over a substantial subset of the MAWI repository, which contains real network traffic traces. Our work shows that unsupervised learning techniques allow anomaly detection systems to isolate anomalous traffic without any previous knowledge; we believe this contribution constitutes a great step towards autonomous network anomaly detection. This PhD thesis has been funded through the ECODE project by the European Commission under Framework Programme 7. The goal of this project is to develop, implement, and experimentally validate a cognitive routing system that meets the challenges experienced by the Internet in terms of manageability and security, availability and accountability, as well as routing system scalability and quality. The use case concerned within the ECODE project is network anomaly detection.
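The thesis relies on sub-space clustering with evidence accumulation; as a much simpler stand-in for the same blind-detection idea, a k-nearest-neighbour distance score flags flows that lie far from every dense cluster without any labels or signatures (illustrative only — the features and numbers below are invented):

```python
def knn_outlier_scores(points, k=3):
    """Distance to the k-th nearest neighbour as an anomaly score:
    points far from every dense cluster get high scores."""
    scores = []
    for i, p in enumerate(points):
        dists = sorted(
            sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
            for j, q in enumerate(points) if j != i
        )
        scores.append(dists[k - 1])
    return scores

# toy per-flow features (e.g. packets/s, mean packet size): one obvious outlier
flows = [(10, 500), (12, 480), (11, 510), (9, 495), (300, 40)]
scores = knn_outlier_scores(flows, k=3)
print(scores.index(max(scores)))  # flags flow 4
```

The real system clusters in multiple feature sub-spaces and accumulates evidence across them, which is far more robust than this single-view score, but the principle — anomalies are the flows no cluster claims — is the same.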
Pham, Van Dung. "Architectural exploration of network Interface for energy efficient 3D optical network-on-chip." Thesis, Rennes 1, 2018. http://www.theses.fr/2018REN1S076/document.
Electrical Networks-on-Chip (ENoCs) have long been considered the de facto interconnect technology for multiprocessor systems-on-chip (MPSoCs). However, as the number of cores integrated on a single chip grows, ENoCs are less and less able to meet the bandwidth and latency requirements of today's complex and highly parallel applications. In recent years, due to power consumption constraints and low-latency, high-bandwidth requirements, optical interconnects have become an interesting solution to overcome these limitations. Indeed, Optical Networks-on-Chip (ONoCs) are based on waveguides, which carry optical signals from source to destination with very low latency. Unfortunately, the optical devices used to build ONoCs suffer from imperfections which introduce losses during communications. These losses (crosstalk noise and optical losses) are very important factors impacting the energy efficiency and performance of the system. Furthermore, Wavelength Division Multiplexing (WDM) technology can help the designer improve ONoC performance, especially bandwidth and latency. However, using WDM introduces new losses and crosstalk noise which negatively impact the Signal-to-Noise Ratio (SNR) and Bit Error Rate (BER): the result is a higher BER and increased power consumption, which reduces the energy efficiency of the optical interconnect. The contributions presented in this manuscript address these issues. We first model and analyze the optical losses and crosstalk in a WDM-based ONoC. The model provides an analytical evaluation of the worst-case loss and crosstalk, under different parameters, for an optical ring network-on-chip. Based on this model, we propose a methodology to improve performance and reduce the power consumption of optical interconnects through the use of forward error correction (FEC).
We present two case studies of lightweight FECs with low implementation complexity and high error-correction performance in 28nm Fully-Depleted Silicon-On-Insulator (FDSOI) technology. The results demonstrate the advantages of using FEC on the optical interconnect in the context of the CHAMELEON ONoC. Secondly, we propose a complete design of an Optical Network Interface (ONI), composed of data-flow allocation, integrated FECs, data serialization/deserialization, and laser driver control; the details of these elements are presented in this manuscript. Relying on this network interface, allocation management that improves energy efficiency can be supported at runtime, depending on application demands. This runtime trade-off of energy versus performance can be integrated into the ONI through a configuration manager located in each ONI. Finally, we present the design of an ONoC configuration sequencer (OCS), located at the centre of the optical layer. Using the ONI managers, the OCS can configure the ONoC at runtime according to the application's performance and energy requirements.
Iova, Oana-Teodora. "Standards optimization and network lifetime maximization for wireless sensor networks in the Internet of things." Thesis, Strasbourg, 2014. http://www.theses.fr/2014STRAD022/document.
New protocols have been standardized to integrate Wireless Sensor Networks (WSNs) into the Internet, among them the IEEE 802.15.4 MAC layer protocol and RPL, the IPv6 Routing Protocol for Low-power and Lossy Networks. The goal of this thesis is to improve these protocols, considering the energy constraints of the devices that compose the WSN. First, we propose a new MAC-layer broadcast mechanism for IEEE 802.15.4 to ensure reliable delivery of control packets from the upper layers (especially from RPL). Then, we provide an exhaustive evaluation of RPL and highlight an instability problem; this instability generates a large overhead, consuming a lot of energy. Since the lifetime of a WSN is very limited, we propose a new routing metric that identifies energy bottlenecks and maximizes the lifetime of the network. Finally, by coupling this metric with a multipath version of RPL, we are able to solve the instability problem highlighted earlier.
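The bottleneck idea behind a lifetime-maximizing routing metric can be sketched as a max-min choice over candidate routes: prefer the path whose weakest node has the most residual energy. This is a generic illustration with invented numbers, not the thesis's actual metric, which is integrated into RPL:

```python
def bottleneck(path_energies):
    """Residual energy of the weakest (bottleneck) node along a path."""
    return min(path_energies)

routes = {
    "via A": [90, 40, 80],   # residual energies (%) of the nodes on each path
    "via B": [70, 65, 60],
}
best = max(routes, key=lambda r: bottleneck(routes[r]))
print(best)  # → via B
```

Even though "via A" has the higher total energy, its 40% node would die first; the max-min rule picks "via B", spreading load away from energy bottlenecks and so extending the network lifetime.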
Vallet, Jason. "Where Social Networks, Graph Rewriting and Visualisation Meet : Application to Network Generation and Information Diffusion." Thesis, Bordeaux, 2017. http://www.theses.fr/2017BORD0818/document.
In this thesis, we present a collection of network generation and information diffusion models expressed using a specific formalism called strategic located graph rewriting, as well as a novel network layout algorithm to show the result of information diffusion in large social networks. Graphs are extremely versatile mathematical objects which can be used to represent a wide variety of high-level systems. They can be transformed in multiple ways (e.g., creating new elements, merging or altering existing ones), but such modifications must be controlled to avoid unwanted operations. To ensure this, we use a specific formalism called strategic graph rewriting: a graph rewriting system operates on a single graph, which is transformed according to transformation rules and a strategy that steers the transformation process. First, we adapt two social network generation algorithms to create new networks presenting small-world characteristics. Then, we translate different diffusion models to simulate information diffusion phenomena. By adapting the different models into a common formalism, we make their comparison, and the adjustment of their parameters, much easier. Finally, we conclude by presenting a novel compact layout method to display overviews of the results of our information diffusion method.
Cros, Olivier. "Mixed criticality management into real-time and embedded network architectures : application to switched ethernet networks." Thesis, Paris Est, 2016. http://www.theses.fr/2016PESC1033/document.
MC (Mixed Criticality) is an answer for industrial systems that require different network infrastructures to manage information of different criticality levels inside the same system. Our purpose in this work is to find solutions for integrating MC into highly constrained industrial domains, in order to mix flows of various criticality levels inside the same infrastructure. This integration induces isolation constraints: the impact of non-critical traffic on critical traffic must be characterized and bounded. This is a condition for respecting timing constraints. To analyze transmission delays and focus on the determinism of transmissions, we use an end-to-end delay computation method called the trajectory approach; in our work, we use a corrected version of the trajectory approach that takes the serialization of messages into account. To ensure that timing constraints are respected in mixed-criticality networks, we first present a theoretical model of MC representation. This model derives from MC task scheduling on processors, and proposes a flow modelization in which each flow can be of one (low-criticality flows) or several criticality levels. To integrate MC into real-time networks, we propose two network protocols. The first is a centralized protocol, structured around a central node in the network that is responsible for synchronizing the criticality-level switch of each node through a reliable multicast protocol in charge of switching the network criticality level. This centralized protocol proposes solutions to detect the need to change the criticality levels of all nodes and to transmit this information to the central node. The second protocol is based on a distributed approach: it proposes local MC management on each node of the network, with each node individually managing its own internal criticality level.
This protocol offers solutions to preserve non-critical network flows when possible, even while transmitting critical flows in the network, through weak isolation. To propose an implementation of these protocols over Ethernet, we describe how to use the Ethernet 802.1Q header tag to specify the criticality level of a message directly inside the frame. With this solution, each flow in the network is tagged with its criticality level, and this information can be analyzed by the nodes of the network to decide whether to transmit the messages of the flow. Additionally, for the centralized approach, we propose a solution integrating MC configuration messages into PTP clock-synchronization messages to manage criticality configuration information in the network. In this work, we designed a simulation tool denoted ARTEMIS (Another Real-Time Engine for Message-Issued Simulation), dedicated to real-time network analysis and MC integration scheduling scenarios. This tool, based on open and modular development guidelines, has been used throughout our work to validate the theoretical models through simulation. We integrated both the centralized and the distributed protocols into the ARTEMIS core. The simulation results obtained allowed us to characterize the QoS guarantees offered by both protocols. Concerning non-critical traffic, the distributed protocol, by permitting specific nodes to stay in non-critical mode, assures a higher success ratio for correct transmission of non-critical traffic. As a conclusion, we propose solutions to integrate MC into both industrial and COTS Ethernet architectures. The solutions can be adapted either to clock-synchronized or to non-clock-synchronized protocols. Depending on the protocol, the individual configuration required by each switch can be reduced, to adapt these solutions to less costly network devices.
Zhao, Yi. "Combination of Wireless sensor network and artificial neuronal network : a new approach of modeling." Thesis, Toulon, 2013. http://www.theses.fr/2013TOUL0013/document.
A Wireless Sensor Network (WSN) consisting of autonomous sensor nodes can provide a rich stream of sensor data representing physical measurements. A well-built Artificial Neural Network (ANN) model needs sufficient training data sources. Facing the limitations of traditional parametric modeling, this thesis proposes a standard procedure for combining ANNs and WSN sensor data in modeling. Experiments on indoor thermal modeling demonstrated that a WSN together with an ANN can lead to accurate, fine-grained indoor thermal models. A new training method, "Multi-Pattern Cross Training" (MPCT), is also introduced in this work. This training method makes it possible to merge knowledge from different independent training data sources (patterns) into a single ANN model. Further experiments demonstrated that models trained by the MPCT method showed better generalization performance and lower prediction errors in tests using different data sets. The MPCT-based neural network model also showed advantages in multi-variable Neural-Network-based Model Predictive Control (NNMPC). Software simulation and application results indicate that MPCT-based NNMPC outperformed multiple-model-based NNMPC in online control efficiency.
Tran, Thi-Minh-Dung. "Methods for finite-time average consensus protocols design, network robustness assessment and network topology reconstruction." Thesis, Université Grenoble Alpes (ComUE), 2015. http://www.theses.fr/2015GREAT023/document.
Full textConsensus of Multi-agent systems has received tremendous attention during the last decade. Consensus is a cooperative process in which agents interact in order to reach an agreement. Most of studies are committed to analysis of the steady-state behavior of this process. However, during the transient of this process a huge amount of data is produced. In this thesis, our aim is to exploit data produced during the transient of asymptotic average consensus algorithms in order to design finite-time average consensus protocols, assess the robustness of the graph, and eventually recover the topology of the graph in a distributed way. Finite-time Average Consensus guarantees a minimal execution time that can ensure the efficiency and the accuracy of sophisticated distributed algorithms in which it is involved. We first focus on the configuration step devoted to the design of consensus protocols that guarantee convergence to the exact average in a given number of steps. By considering networks of agents modelled with connected undirected graphs, we formulate the problem as the factorization of the averaging matrix and investigate distributed solutions to this problem. Since, communicating devices have to learn their environment before establishing communication links, we suggest the usage of learning sequences in order to solve the factorization problem. Then a gradient backpropagation-like algorithm is proposed to solve a non-convex constrained optimization problem. We show that any local minimum of the cost function provides an accurate factorization of the averaging matrix. By constraining the factor matrices to be as Laplacian-based consensus matrices, it is now well known that the factorization of the averaging matrix is fully characterized by the nonzero Laplacian eigenvalues. Therefore, solving the factorization of the averaging matrix in a distributed way with such Laplacian matrix constraint allows estimating the spectrum of the Laplacian matrix. 
Since that spectrum can be used to compute robustness indices (the number of spanning trees and the effective graph resistance, also known as the Kirchhoff index), the second part of this dissertation is dedicated to network robustness assessment through distributed estimation of the Laplacian spectrum. The problem is posed as a constrained consensus problem formulated in two ways. The first formulation (direct approach) yields a non-convex optimization problem solved in a distributed way by means of the method of Lagrange multipliers. The second formulation (indirect approach) is obtained after an adequate re-parameterization. The problem is then convex and is solved using the distributed subgradient algorithm and the alternating direction method of multipliers. Furthermore, three cases are considered: the final average value is perfectly known, noisy, or completely unknown. We also provide a way of computing the multiplicities of the estimated eigenvalues by means of integer programming. In this spectral approach, given the Laplacian spectrum, the network topology can be reconstructed through estimation of the Laplacian eigenvectors. The efficiency of the proposed solutions is evaluated by means of simulations. However, in several cases, convergence of the proposed algorithms is slow and needs to be improved in future work. In addition, the indirect approach does not scale to very large graphs since it involves computing the roots of a polynomial whose degree equals the size of the network. However, instead of estimating the whole spectrum, it is possible to recover only a few eigenvalues and then deduce some significant bounds on the robustness indices.
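The finite-time protocol described in this abstract rests on a classical identity: for a connected undirected graph, running one Laplacian-based consensus step per distinct nonzero Laplacian eigenvalue reaches the exact average, because the product of the factors I − L/λ_k equals the averaging matrix (1/n)11ᵀ. A minimal numerical sketch (the graph and initial values are made up, not taken from the thesis):

```python
import numpy as np

# Path graph on 4 nodes; its Laplacian has distinct nonzero eigenvalues.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A
lams = np.sort(np.linalg.eigvalsh(L))[1:]   # the nonzero Laplacian eigenvalues

x = np.array([3.0, -1.0, 7.0, 5.0])         # initial agent values (mean 3.5)
for lam in lams:
    x = x - (L @ x) / lam                   # one Laplacian-based step per eigenvalue
# After n-1 steps every agent holds the exact average.
```

This also illustrates why estimating the Laplacian spectrum, as in the second part of the abstract, is equivalent to finding a valid factorization of the averaging matrix.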
Tandon, Seema Amit. "Web Texturizer: Exploring intra web document dependencies." CSUSB ScholarWorks, 2004. https://scholarworks.lib.csusb.edu/etd-project/2539.
Full textMohan, Raj. "XML based adaptive IPsec policy management in a trust management context /." Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2002. http://library.nps.navy.mil/uhtbin/hyperion-image/02sep%5FMohan.pdf.
Full textThesis advisor(s): Cynthia E. Irvine, Timothy E. Levin. Includes bibliographical references (p. 71-72). Also available online.
Grandinetti, Pietro. "Control of large scale traffic network." Thesis, Université Grenoble Alpes (ComUE), 2017. http://www.theses.fr/2017GREAT102/document.
Full textThe thesis focuses on traffic light control in large-scale urban networks. It starts with a study of macroscopic modeling based on the Cell Transmission Model. We formulate a signalized version of this model in order to include the description of traffic lights in the dynamics. Moreover, we introduce two simplifications of the signalized model for control design: one based on averaging theory, which considers the duty cycles of the traffic lights, and a second one that describes traffic light trajectories by the time instants of the rising and falling edges of a binary signal. We use numerical simulations to validate these models against the signalized Cell Transmission Model, and microsimulations (with the software Aimsun) to validate them against realistic vehicle behavior. We propose two control algorithms based on the two models mentioned above. The first, which uses the averaged Cell Transmission Model, considers traffic light duty cycles as the controlled variables and is formulated as an optimization problem over standard traffic measures. We analyze this problem and show that it is equivalent to a convex optimization problem, thus ensuring its computational efficiency. We analyze its performance with respect to a best-practice control scheme both in MatLab simulations and in Aimsun simulations that emulate a large portion of Grenoble, France. The second proposed approach is an optimization problem in which the decision variables are the activation and deactivation time instants of every traffic light.
We employ the big-M modeling technique to reformulate this problem as a mixed integer linear program, and we show via numerical simulations that its expressiveness can lead to improvements in the traffic dynamics, at the price of the computational efficiency of the control scheme. To pursue scalability of the proposed control techniques, we develop two iterative distributed approaches to the traffic light control problem. The first, based on the convex optimization mentioned above, uses the dual descent technique and is provably optimal, that is, it gives the same solution as the centralized optimization. The second, based on the aforesaid mixed integer problem, is a suboptimal algorithm that brings substantial improvements in computational efficiency with respect to the related centralized problem. We analyze via numerical simulations the convergence speed of the iterative algorithms, their computational burden and their performance with regard to traffic metrics. The thesis concludes with a study of the traffic light control algorithm that is employed in several large intersections in Grenoble. We present the working principle of this algorithm, detailing the technological and methodological differences with our proposed approaches. We build in Aimsun the scenario representing the related part of the city, also reproducing the control algorithm and comparing its performance with that of one of our approaches on the same scenario.
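The averaged, signalized Cell Transmission Model that this abstract builds on can be sketched in a few lines: each cell's outflow is the minimum of an upstream demand, scaled by the light's average green fraction (duty cycle), and a downstream supply. All numbers below are illustrative, not taken from the thesis:

```python
v, w = 1.0, 0.5            # free-flow and congestion wave speeds (cells/step)
rho_max, F = 1.0, 0.5      # jam density and saturation flow
duty = [1.0, 0.6, 1.0]     # average green fraction at each cell's exit
rho = [0.3, 0.8, 0.2]      # current densities of three consecutive cells

def exit_flow(i):
    # Outflow of cell i: duty-cycle-scaled demand, capped by downstream supply.
    demand = min(v * rho[i], F) * duty[i]
    supply = min(w * (rho_max - rho[i + 1]), F) if i + 1 < len(rho) else F
    return min(demand, supply)

flows = [exit_flow(i) for i in range(len(rho))]
entry = 0.1                # demand entering cell 0 (assumed never blocked)
inflows = [entry] + flows[:-1]
rho = [rho[i] + inflows[i] - flows[i] for i in range(len(rho))]
# Mass is conserved: what leaves one cell enters the next.
```

Treating the `duty` entries as continuous decision variables is what makes the first control formulation amenable to convex optimization.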
Shehadeh, Dareen. "Dynamic network adaptation for energy saving." Thesis, Ecole nationale supérieure Mines-Télécom Atlantique Bretagne Pays de la Loire, 2017. http://www.theses.fr/2017IMTA0067/document.
Full textThe main goal of the thesis is to design an energy-proportional network by taking intelligent decisions within the network, such as switching network components on and off, in order to adapt the energy consumption to the user needs. Our work mainly focuses on reducing the energy consumption by adapting the number of operating APs to the actual user need. In fact, traffic load varies a lot during the day. Traffic is high in urban areas and low in the suburbs during working hours, while at night it is the opposite. Often, peak loads during rush hours are lower than the capacities of the networks, so networks remain lightly utilized for long periods of time. Keeping all APs active all the time, even when the traffic is low, thus causes a huge waste of energy. Our goal is to benefit from low-traffic periods by automatically switching off redundant cells, taking into consideration the actual number of users, their traffic and the bandwidth requested to serve them. Ideally we wish to do so while maintaining reliable service coverage for existing and newly arriving users. First we consider a home networking scenario. In this case only one AP covers a given area, so when this AP is switched off (when no users are present), there is no other AP to fill the coverage gap. Moreover, upon the arrival of new users, no controller or other mechanism exists to wake the AP up. Consequently, newly arriving users would not be served and would remain out of coverage. The study of the state of the art allowed us to have a clear overview of the existing approaches in this context. As a result, we designed a platform to investigate different methods of waking up an AP using different technologies. We measure two metrics to evaluate the switching ON/OFF process for the different methods. The first is the energy consumed by the AP during the three phases it goes through. The second is the time the AP needs to wake up and become operational to serve the new users.
In the second case we consider a dense network such as those found in urban cities, where the coverage area of an AP is also covered by several other APs. In other words, the gap resulting from switching off one or several APs can be covered by neighbouring ones. The first step was therefore to evaluate the potential of switching off APs using real measurements taken in a dense urban area. Based on this collected information, we evaluate how many APs can be switched off while maintaining the same coverage. To this end, we propose two algorithms that select the minimum set of APs needed to provide full coverage. We compute several performance parameters, and evaluate the proposed algorithms in terms of the number of selected APs and the coverage they provide.
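Selecting a minimum set of APs that preserves coverage is an instance of the set-cover problem. A common greedy heuristic, shown here purely as an illustration (the AP names and coverage zones are invented, and this is not necessarily one of the thesis's two algorithms), repeatedly picks the AP covering the most still-uncovered zones:

```python
# Zones covered by each AP (hypothetical measurement data).
coverage = {
    "AP1": {1, 2, 3},
    "AP2": {3, 4},
    "AP3": {4, 5, 6},
    "AP4": {1, 6},
}

uncovered = set().union(*coverage.values())
active = []
while uncovered:
    # Greedy choice: the AP covering the most zones still uncovered.
    best = max(coverage, key=lambda ap: len(coverage[ap] & uncovered))
    active.append(best)
    uncovered -= coverage[best]
# Here two APs suffice for full coverage; the other two can sleep.
```

The greedy rule gives a logarithmic approximation guarantee, which is why it is a standard baseline for coverage-preserving AP selection.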
Pinat, Magali. "Global linkages, trade network and development." Thesis, Paris 1, 2018. http://www.theses.fr/2018PA01E031/document.
Full textThis doctoral dissertation investigates the impact of network effects on international trade and finance. The first chapter estimates the role a trade partner's centrality plays in the diffusion of knowledge and finds that importing from countries at the core of the network leads to a significant increase in economic growth. The second chapter investigates the role of clusters in the speed of technology adoption and concludes that the diffusion of ideas is fostered among countries belonging to the same cluster. The third chapter emphasizes the role of current partners in choosing a destination for new investments and finds that countries are more likely to invest in a new destination if one of their existing partners has already made some investments in that location. The fourth chapter evaluates the impact of importing risky products on the economy and finds that the elasticity of a country's exports with respect to its import share of fragile products from a partner impacted by a natural disaster is -0.7 percent.
Iwaza, Lana. "Joint Source-Network Coding & Decoding." Thesis, Paris 11, 2013. http://www.theses.fr/2013PA112048/document.
Full textWhile network data transmission was traditionally accomplished via routing, network coding (NC) broke this rule by allowing network nodes to perform linear combinations of incoming data packets. Network operations are performed in a Galois field of fixed size q, and decoding only involves Gaussian elimination on the received network-coded packets. However, in practical wireless environments, NC may be susceptible to transmission errors caused by noise, fading, or interference. This drawback is quite problematic for real-time applications, such as multimedia content delivery, where timing constraints may lead to the reception of an insufficient number of packets and consequently to difficulties in decoding the transmitted sources. At best, some packets can be recovered, while in the worst case the receiver is unable to recover any of the transmitted packets. In this thesis, we propose joint source-network coding and decoding schemes with the purpose of providing an approximate reconstruction of the source in situations where perfect decoding is not possible. The main motivation comes from the fact that source redundancy can be exploited at the decoder in order to estimate the transmitted packets, even when some of them are missing. The redundancy can be either natural, i.e., already existing, or artificial, i.e., externally introduced. Regarding artificial redundancy, we choose multiple description coding (MDC) as a way of introducing structured correlation among uncorrelated packets. By combining MDC and NC, we aim to ensure a reconstruction quality that improves gradually with the number of received network-coded packets. We consider two different approaches for generating descriptions. The first technique consists in generating multiple descriptions via a real-valued frame expansion applied at the source before quantization. Data recovery is then achieved via the solution of a mixed integer linear program.
The second technique uses a correlating transform in some Galois field in order to generate descriptions, and decoding involves a simple Gaussian elimination. Such schemes are particularly interesting for multimedia content delivery, such as video streaming, where quality increases with the number of received descriptions. Another application of such schemes would be multicasting or broadcasting data towards mobile terminals experiencing different channel conditions. The channel is modeled as a binary symmetric channel (BSC), and we study the effect on the decoding quality for both proposed schemes. A performance comparison with a traditional NC scheme is also provided. Concerning natural redundancy, a typical scenario would be a wireless sensor network, where geographically distributed sources capture spatially correlated measures. We propose a scheme that aims at exploiting this spatial redundancy, and we provide an estimation of the transmitted measurement samples via the solution of an integer quadratic problem. The obtained reconstruction quality is compared with the one provided by a classical NC scheme.
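The classical NC decoding step that this abstract contrasts with can be sketched in GF(2), the simplest Galois field: the receiver stores each coded packet together with its coefficient vector and recovers the sources by Gaussian elimination. The packet values and coefficient matrix below are made up, and real NC typically works in a larger field such as GF(256):

```python
def gf2_decode(masks, payloads):
    """Gaussian elimination over GF(2); bit j of a mask is the
    coefficient of source packet j in that coded packet."""
    rows = list(zip(masks, payloads))
    n = len(rows)
    for col in range(n):
        piv = next(i for i in range(col, n) if rows[i][0] >> col & 1)
        rows[col], rows[piv] = rows[piv], rows[col]
        for i in range(n):
            if i != col and rows[i][0] >> col & 1:
                rows[i] = (rows[i][0] ^ rows[col][0],
                           rows[i][1] ^ rows[col][1])
    return [p for _, p in rows]

src = [0x41, 0x42, 0x43]          # three one-byte source packets
masks = [0b001, 0b011, 0b111]     # a full-rank GF(2) coefficient matrix
coded = []
for m in masks:                   # encoding: XOR the selected source packets
    p = 0
    for j, s in enumerate(src):
        if m >> j & 1:
            p ^= s
    coded.append(p)
assert gf2_decode(masks, coded) == src   # full rank -> perfect decoding
```

When fewer than n independent coded packets arrive, the system is rank-deficient and this decoder fails entirely, which is exactly the situation the joint source-network schemes of the thesis are designed to handle approximately.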
Plissonneau, Louis. "Network tomography from an operator perspective." Thesis, Paris, ENST, 2012. http://www.theses.fr/2012ENST0033/document.
Full textNetwork tomography is the study of a network's traffic characteristics using measurements. This subject has already been addressed by a whole community of researchers, especially to answer ISPs' need for knowledge of the residential Internet traffic they have to carry. One of the main aspects of the Internet is that it evolves very quickly, so there is a never-ending need for Internet measurements. In this work, we address the issue of residential Internet measurement from two different perspectives: passive measurements and active measurements. In the first part of this thesis, we passively collect and analyse statistics of residential users' connections spanning a whole week. We use this data to update and deepen our knowledge of residential Internet traffic. Then, we use clustering methods to form groups of users according to the applications they use. This shows that the vast majority of customers now use the Internet mainly for Web browsing and watching video streaming. This data is also used to evaluate new opportunities for managing the traffic of a local ADSL platform. As the main part of the traffic is video streaming, we use multiple snapshots of packet captures of this traffic over a period of many years to accurately understand its evolution. Moreover, we analyse and correlate its performance, defined through quality of service indicators, with the behavior of the users of this service. In the second part of this thesis, we take advantage of this knowledge to design a new tool for actively probing the quality of experience of video streaming sites. We have modeled the playback of streaming videos so that we are able to infer their quality as perceived by the users. With this tool, we can understand the impact of the video server selection and the DNS servers on the user's perception of video quality.
Moreover, the ability to perform the experiments on different ISPs allows us to dig further into the delivery policies of video streaming sites.
Dinkelacker, Vera. "Network pathology in temporal lobe epilepsy." Thesis, Paris 6, 2014. http://www.theses.fr/2014PA066156/document.
Full textOur vision of temporal lobe epilepsy (TLE) with hippocampal sclerosis has evolved considerably in recent years. Initially regarded as a disease centered on a single lesion, it is now perceived as a genuine network disease, which we set out to explore with a multimodal approach. We examined structural connectivity, fMRI, EEG and cognitive dysfunction in a cohort of 44 patients with unilateral hippocampal sclerosis (HS; 22 with right, 22 with left HS) and 28 healthy age- and gender-matched control participants. Cortical regions of interest and hippocampal volumes were determined with Freesurfer, structural connectivity with MRtrix (pairwise disconnections and component effects with Network Based Statistics), and hippocampal-thalamic connections with FSL. We found a pronounced pattern of disconnections, most notably in the left hemisphere of patients with left TLE. Network Based Statistics showed large bihemispheric clusters lateralized to the diseased side in both left and right temporal lobe epilepsy. We suggest that hippocampal sclerosis is associated with widespread disconnections when situated in the dominant hemisphere. We then determined streamline connections between hippocampus and thalamus and found an increase in connections related to the HS. This increase was seemingly dysfunctional, as the number of hippocampal-thalamic connections was negatively correlated with performance in executive tasks. EEG analysis revealed predominantly ipsilateral epileptic discharges. The number of sharp waves was highly correlated with a number of executive functions depending on the frontal lobe, hence at a distance from the HS. Our data thus confirm the concept of temporal lobe epilepsy as a network disease that finds its expression both in widespread, though lateralized, alterations of structural connectivity and in neuropsychological dysfunction well beyond the hippocampus.
Ahrneteg, Jakob, and Dean Kulenovic. "Semantic Segmentation of Historical Document Images Using Recurrent Neural Networks." Thesis, Blekinge Tekniska Högskola, Institutionen för programvaruteknik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-18219.
Full textBackground. This work concerns semantic segmentation of historical documents using recurrent neural networks. Semantic segmentation of documents involves dividing a document into different regions, which is important for subsequent automatic document analysis and digitization with optical character recognition. Furthermore, convolutional neural networks are the leading choice for processing document images, whereas recurrent neural networks have never been used for semantic segmentation of documents. This is interesting because, considering how a recurrent neural network works and that recurrent neural networks have achieved very good results in binary document processing, it should likewise be possible to use a recurrent neural network for semantic segmentation of documents and achieve good results there as well. Objectives. The aim of this work is to investigate whether a recurrent neural network can achieve a result comparable to a convolutional neural network for semantic segmentation of documents. A further aim is to investigate whether a combination of a convolutional neural network and a recurrent neural network can give a better result than using only a recurrent neural network. Methods. To determine whether a recurrent neural network is a suitable alternative for semantic segmentation of documents, the performance results of three different recurrent neural network models are evaluated. These results are then compared with the performance of a convolutional neural network. Furthermore, image preprocessing and multi-class labeling are performed so that the models can ultimately produce measurable results on prediction images. Results. By evaluating the models' performance results, a comparison between the best model and a convolutional neural network shows a performance difference of 2.7%.
Notable in this case is that the best model exhibits a more even distribution of performance. For the two models that showed lower performance, it can be concluded that their outcome is due to lower model complexity. Furthermore, comparing the two models where one combines a convolutional neural network with a recurrent neural network while the other uses only a recurrent neural network, a performance difference of 4.9% is measured. Conclusions. The results suggest that a recurrent neural network is most likely a suitable alternative to a convolutional neural network for semantic segmentation of documents. Furthermore, it is concluded that a combination of the two variants yields better performance.
Baker, Dylan. "The Document Similarity Network: A Novel Technique for Visualizing Relationships in Text Corpora." Scholarship @ Claremont, 2017. https://scholarship.claremont.edu/hmc_theses/100.
Full textGross, Pierre Henri. "A document architecture and conferencing system for a network of multimedia medical workstations." Thesis, University of Ottawa (Canada), 1989. http://hdl.handle.net/10393/5951.
Full textBarbosa, Camila Cornutti. "A bossa nova, seus documentos e articulações: um movimento para além da música." Universidade do Vale do Rio do Sinos, 2008. http://www.repositorio.jesuita.org.br/handle/UNISINOS/2637.
Full textCoordenação de Aperfeiçoamento de Pessoal de Nível Superior
The present dissertation takes as its theme "Bossa Nova, its documents and articulations: a movement beyond music". More specifically, it consists of selecting documents related to Bossa Nova, between 1958 and 1964, to compose the research corpus, and of articulating them through a network-based methodological process for observation. Accordingly, it revisits how the Bossa Nova movement is formally recounted, seeking to punctuate its history as well as the social and political context in which it emerged. It then partially explores the archive of selected Bossa Nova documents, drawing on advertisements, journalistic materials, LP covers, song lyrics and the sound of the movement's music, photographs of Bossa Nova figures, and the artistic movements contemporary with the genre, searching for clues, common traits and oppositions, starting from the musical movement but also in Cinema Novo, the plastic arts, design, architecture and concrete poetry. In the last chapter
The theme of the present work is "Bossa Nova, its documents and unfoldings: a movement beyond the music". More specifically, it is about selecting Bossa Nova-related documents from 1958 to 1964 to compose the research corpus and articulating them under a methodological network process for observation. Thus, a recollection is made of how the Bossa Nova movement is formally spoken of, aiming at punctuating its history, as well as the social and political context at the time it appeared. In this sequence, a partial exploration is made of the archive of previously selected Bossa Nova documents, from the viewpoint of commercial advertising, journalistic materials, LP covers, song lyrics and the movement's sounds, photographs of Bossa Nova personalities and contemporaneous artistic movements, searching for evidence, common and oppositional threads, starting from the musical movement, but also in Cinema Novo, plastic arts, design, architecture and concrete poetry. Yet in the last chapter, this series of docu
Mimouni, Nada. "Interrogation d'un réseau sémantique de documents : l'intertextualité dans l'accès à l'information juridique." Thesis, Sorbonne Paris Cité, 2015. http://www.theses.fr/2015USPCD084/document.
Full textA collection of documents is generally represented as a set of documents, but this simple representation does not take into account the cross-references between documents, which often define their context of interpretation. This standard document model is less adapted to specific professional uses in specialized domains, in which documents are related by many different references and the access tools need to take this complexity into account. We propose two models, based on formal and relational concept analysis and on semantic web techniques. Applied to documentary objects, these two models represent and query in a unified way both document content descriptors and document relations.
Bui, Quang Vu. "Pretopology and Topic Modeling for Complex Systems Analysis : Application on Document Classification and Complex Network Analysis." Thesis, Paris Sciences et Lettres (ComUE), 2018. http://www.theses.fr/2018PSLEP034/document.
Full textThe work of this thesis presents the development of algorithms for document classification on the one hand, and complex network analysis on the other hand, based on pretopology, a theory that models the concept of proximity. The first work develops a framework for document clustering by combining topic modeling and pretopology. Our contribution proposes using the topic distributions extracted from topic modeling as input for classification methods. In this approach, we investigated two aspects: determining an appropriate distance between documents by studying the relevance of probabilistic-based and vector-based measurements, and performing groupings according to several criteria using a pseudo-distance defined from pretopology. The second work introduces a general framework for modeling complex networks by developing a reformulation of stochastic pretopology and proposes the Pretopology Cascade Model as a general model for information diffusion. In addition, we propose an agent-based model, Textual-ABM, to analyze complex dynamic networks associated with textual information using the author-topic model, and introduce Textual-Homo-IC, an independent cascade model based on resemblance, in which homophily is measured from textual content obtained by topic modeling.
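An independent cascade of the kind Textual-Homo-IC generalizes works as follows: each newly activated node gets one chance to activate each inactive neighbor, with a probability that would here come from a textual-similarity (homophily) score. A minimal sketch with invented nodes, edges and scores:

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

# Edge activation probabilities: in Textual-Homo-IC these would be
# homophily scores derived from topic distributions (values made up here).
prob = {("a", "b"): 0.9, ("a", "c"): 0.2, ("b", "d"): 0.8, ("c", "d"): 0.5}
neighbors = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}

active, frontier = {"a"}, ["a"]     # seed the cascade at node "a"
while frontier:
    nxt = []
    for u in frontier:
        for v in neighbors[u]:
            if v not in active and random.random() < prob[(u, v)]:
                active.add(v)
                nxt.append(v)
    frontier = nxt                  # only new activations try in the next round
```

The cascade stops when a round produces no new activations; the final `active` set is the diffusion outcome for this seed.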
Baccouche, Alexandre. "Functional analysis of artificial DNA reaction network." Thesis, Sorbonne Paris Cité, 2015. http://www.theses.fr/2015USPCB135/document.
Full textInformation processing within and between living organisms involves the production and exchange of molecules through signaling pathways organized in chemical reaction networks. These networks vary in shape and size and in the nature of the molecules involved. Among them, gene regulatory networks were our inspiration for developing and implementing a new framework for in-vitro molecular programming. Indeed, the expression of a gene is mostly controlled by transcription factors or regulatory proteins and/or nucleic acids that are themselves triggered by other genes. The whole assembly draws a web of cross-interacting genes and their subproducts, in which a well-controlled topology relates to a precise function. With a closer look at the links between nodes in such architectures, we identify three key points in their inner workings. First, the interactions either activate or inhibit the production of the downstream node, meaning that non-trivial behaviors are obtained by a combination of nodes rather than by a specific new interaction. Second, the chemical stability of DNA, together with the precise reactivity of enzymes, ensures the longevity of the network. Finally, the dynamics are sustained by the constant anabolism/catabolism of the effectors, and the consequent use of fuel/energy. Altogether, these observations led us to develop an original set of 3 elementary enzymatic reactions: the PEN-DNA toolbox. The architecture of the assembly, i.e. the connectivity between nodes, relies on the sequences of synthetic DNA strands (called DNA templates), and 3 enzymes (a polymerase, a nickase and an exonuclease) take care of catalysis. The production and degradation of intermediates consume deoxyribonucleoside triphosphates (dNTPs) and produce deoxynucleoside monophosphates, leading to the dissipation of chemical potential.
Reactions are monitored thanks to a backbone modification of a template with a fluorophore and the nucleobase quenching effect consecutive to an input strand binding the template. The activation mechanism is then the production of an output following the triggering by an input strand, and the inhibition comes from the production of an output strand that binds the activator-producing sequence. Various behaviors, such as oscillation, bistability, or switchable memory, have been implemented, requiring more and more complex topologies. Each circuit requires a fine tuning of the chemical parameters, such as templates and enzymes; underlying this is the fact that a given network may lead to different behaviors depending on the set of parameters. Mapping the output of each combination in the parameter space to find the panel of behaviors yields the bifurcation diagram of the system. In order to explore exhaustively the possibilities of one circuit at a reasonable experimental cost, we developed a microfluidic tool generating picoliter-sized water-in-oil droplets with different contents. We overcame the technical challenges in hardware (microfluidic design, droplet generation and long-term observation) and wetware (traceability of the droplets and emulsion compatibility/stability). So far, bifurcation diagrams were calculated from mathematical models based on the enzyme kinetics and the thermodynamic properties of each reaction, the model then being fitted with experimental data taken at distant points in the parameter space. Here, millions of droplets are created, each enclosing a given combination of parameters and thus becoming one point in the diagram. The parameter coordinates are barcoded in the droplet, and the output fluorescence signal is recorded by time-lapse microscopy. We first applied this technique to a well-known network and obtained the first experimental two-dimensional bifurcation diagram of the bistable system.
The diagram reveals features that were not described by the previous mathematical model. (...)
Fouchet, Arnaud. "Kernel methods for gene regulatory network inference." Thesis, Evry-Val d'Essonne, 2014. http://www.theses.fr/2014EVRY0058/document.
Full textNew technologies in molecular biology, in particular DNA microarrays, have greatly increased the quantity of available data. In this context, methods from mathematics and computer science have been actively developed to extract information from large datasets. In particular, the problem of gene regulatory network inference has been tackled using many different mathematical and statistical models, from the most basic ones (correlation, Boolean or linear models) to the most elaborate (regression trees, Bayesian models with latent variables). Despite their qualities when applied to similar problems, kernel methods have scarcely been used for gene network inference, because of their lack of interpretability. In this thesis, two approaches are developed to obtain interpretable kernel methods. Firstly, from a theoretical point of view, some kernel methods are shown to consistently estimate a transition function and its partial derivatives from a learning dataset. These estimations of partial derivatives allow a better inference of the gene regulatory network than previous methods on realistic gene regulatory networks. Secondly, an interpretable kernel method through multiple kernel learning is presented. This method, called lockni, provides state-of-the-art results on real and realistically simulated datasets.
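The derivative-based idea in the first approach can be illustrated with plain RBF kernel ridge regression: fit the regression, then read off the analytic partial derivatives, whose magnitudes score which input gene influences the target. All data and hyperparameters below are invented for the sketch, and this is not the thesis's estimator itself:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (50, 2))     # 50 samples of two "gene" expression levels
y = np.sin(2 * X[:, 0])             # the target depends on gene 0 only
gamma, lam = 1.0, 1e-3              # RBF width and ridge regularization

def K(A, B):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# Kernel ridge regression: f(x) = sum_i alpha_i * k(x, x_i).
alpha = np.linalg.solve(K(X, X) + lam * np.eye(len(X)), y)

def grad_f(x):
    # Analytic gradient of the RBF expansion at point x.
    k = np.exp(-gamma * ((X - x) ** 2).sum(axis=1))
    return (alpha * k) @ (-2.0 * gamma * (x - X))

g = np.abs(grad_f(np.zeros(2)))
# g[0] should dominate g[1], pointing to gene 0 as the regulator.
```

Ranking candidate regulators by such derivative magnitudes is one way a kernel model becomes interpretable as a network.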
Ben, Ticha Hamza. "Vehicle Routing Problems with road-network information." Thesis, Université Clermont Auvergne (2017-2020), 2017. http://www.theses.fr/2017CLFAC071/document.
Full textVehicle routing problems (VRPs) have drawn researchers' attention for more than fifty years. Most approaches found in the literature are implicitly based on the key assumption that the best path between each two points of interest in the road network (customers, depot, etc.) can be easily defined. Thus, the problem is tackled using the so-called customer-based graph, a complete graph representation of the road network. In many situations, such a graph may fail to accurately represent the original road network, and more information is needed to address the routing problem correctly. We first examine these situations and point out the limits of the traditional customer-based graph. We propose a survey of works investigating vehicle routing problems that consider more information from the road network. We outline the proposed alternative approaches, namely the multigraph representation and the road network approach. Then, we are interested in the multigraph approach. We propose an algorithm that efficiently computes the multigraph representation for large road networks. We present an empirical analysis of the impact of the multigraph representation on the solution quality for the VRP with time windows (VRPTW) when several attributes are defined on road segments. We then develop an efficient heuristic method for the multigraph-based VRPTW. Next, we investigate the road network approach. We develop a complete branch-and-price algorithm that can solve the VRPTW directly on the original road network. We evaluate the relative efficiency of the two approaches through an extensive computational study. Finally, we are interested in problems where travel times vary over the time of day, called time-dependent vehicle routing problems (TDVRPs). We develop a branch-and-price algorithm that solves the TDVRP with time windows directly on the road network, and we analyze the impact of the proposed approach on the solution quality.
Islam, Saif Ul. "Energy management in content distribution network servers." Thesis, Toulouse 3, 2015. http://www.theses.fr/2015TOU30007/document.
Full textThe explosive growth of Internet infrastructure and the installation of energy-hungry devices, driven by the huge increase in Internet users and the competition to provide efficient Internet services, are causing a large increase in energy consumption. Energy management in large-scale distributed systems plays an important role in minimizing the contribution of the Information and Communication Technology (ICT) industry to the global CO2 (carbon dioxide) footprint and in decreasing the energy cost of a product or service. Content Distribution Networks (CDNs) are among the most popular large-scale distributed systems; client requests are forwarded towards servers and are fulfilled either by surrogate servers or by the origin server, depending on content availability and the CDN redirection policy. Our main goal is therefore to propose and develop simulation-based, principled mechanisms for the design of CDN redirection policies that make dynamic decisions to reduce CDN energy consumption, and then to analyze their impact on user-experience constraints. We started by modeling surrogate server utilization and derived a surrogate server energy consumption model based on that utilization. We targeted CDN redirection policies by proposing and developing load-balance and load-unbalance policies, using a Zipfian distribution to redirect client requests to servers. We took into account two energy reduction techniques, Dynamic Voltage and Frequency Scaling (DVFS) and server consolidation. We applied these techniques at the surrogate server level in a CDN context and injected them into the load-balance and load-unbalance policies to obtain energy savings. In order to evaluate our proposed policies and mechanisms, we examined how efficiently the CDN resources are utilized, at what energy cost, and the impact on user experience and on the quality of infrastructure management.
For that purpose, we considered surrogate server utilization, energy consumption, energy per request, mean response time, hit ratio and failed requests as evaluation metrics. To analyze energy reduction and its impact on user experience, energy consumption, mean response time and failed requests are the most important parameters. We transformed the discrete event simulator CDNsim into Green CDNsim and evaluated our proposed work in different CDN scenarios by varying the CDN surrogate infrastructure (number of surrogate servers), the traffic load (number of client requests) and the traffic intensity (client request frequency), taking into account the previously discussed evaluation metrics. We are the first to propose DVFS, and the combination of DVFS and consolidation, in a CDN simulation environment considering load-balance and load-unbalance policies. We conclude that energy reduction techniques offer considerable energy savings while degrading user experience. We show that the server consolidation technique performs better for energy reduction when surrogate servers are lightly loaded, whereas the impact of DVFS on energy gains is more considerable when surrogate servers are well loaded. The impact of DVFS on user experience is smaller than that of server consolidation. The combination of both (DVFS and server consolidation) yields greater energy savings at a higher cost in user experience degradation than when either is used individually.
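Two modeling ingredients mentioned in this abstract can be made concrete in a few lines: Zipfian content popularity for request redirection, and a utilization-based energy model for a surrogate server. The linear power model below is a common assumption in the literature, not necessarily the exact model derived in the thesis, and all parameter values are illustrative.

```python
def zipf_popularity(n_contents, alpha=0.8):
    """Normalized Zipfian popularity for contents ranked 1..n_contents:
    the request probability of rank r is proportional to 1 / r**alpha."""
    weights = [1.0 / rank ** alpha for rank in range(1, n_contents + 1)]
    total = sum(weights)
    return [w / total for w in weights]

def surrogate_power(utilization, p_idle=100.0, p_peak=250.0):
    """Linear utilization-based power model (watts): an idle floor plus a
    term proportional to utilization in [0, 1]."""
    return p_idle + (p_peak - p_idle) * utilization
```

The gap between `p_idle` and `p_peak` is what consolidation exploits (switching idle servers off removes the floor), while DVFS lowers the utilization-dependent term on loaded servers.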
Falih, Issam. "Attributed Network Clustering : Application to recommender systems." Thesis, Sorbonne Paris Cité, 2018. http://www.theses.fr/2018USPCD011/document.
Full textIn the field of complex network analysis, much effort has been focused on identifying communities of related nodes with dense internal connections and few external connections. In addition to node connectivity information, which is often composed of different types of links, most real-world networks also contain node and/or edge attributes that can be very relevant during the learning process for finding groups of nodes, i.e., communities. In this case, two types of information are available: graph data representing the relationships between objects, and attribute information characterizing the objects, i.e., the nodes. Classic community detection and data clustering techniques handle one of the two types, but not both. Consequently, the resulting clustering may not only miss important information but also lead to inaccurate findings. Various methods have therefore been developed to uncover communities in networks by combining structural and attribute information, such that nodes in a community are not only densely connected but also share similar attribute values. Such graph-shaped data is often referred to as an attributed graph. This thesis focuses on developing algorithms and models for attributed graphs. Specifically, in the first part I focus on the different types of edges, which represent different types of relations between vertices. I propose new clustering algorithms and present a redefinition of the principal metrics for this type of network. I then tackle the problem of clustering using node attribute information, describing a new community detection algorithm that uncovers communities in node-attributed networks using structural and attribute information simultaneously. Finally, I propose a collaborative filtering model in which I apply the proposed clustering algorithms.
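The idea of combining structural and attribute information can be illustrated with a node-pair similarity that mixes a Jaccard score on neighborhoods with a cosine score on attribute vectors. This is a generic sketch of the principle, not the algorithm proposed in the thesis, and the weighting parameter `alpha` is an assumption.

```python
import math

def structural_similarity(adj, u, v):
    """Jaccard similarity of the closed neighborhoods of u and v."""
    nu, nv = adj[u] | {u}, adj[v] | {v}
    return len(nu & nv) / len(nu | nv)

def attribute_similarity(attrs, u, v):
    """Cosine similarity of the attribute vectors of u and v."""
    a, b = attrs[u], attrs[v]
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def combined_similarity(adj, attrs, u, v, alpha=0.5):
    """Convex combination of the structural and attribute signals."""
    return (alpha * structural_similarity(adj, u, v)
            + (1 - alpha) * attribute_similarity(attrs, u, v))
```

Feeding such a combined similarity into any clustering procedure yields communities whose members are both densely connected and attribute-homogeneous, the property the abstract describes.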
Fabre, Pierre-Edouard. "Using network resources to mitigate volumetric DDoS." Thesis, Evry, Institut national des télécommunications, 2018. http://www.theses.fr/2018TELE0020/document.
Full textMassive Denial of Service attacks represent a genuine threat to Internet services; they also significantly impact network service providers and even threaten Internet stability. There is a pressing need to control the damage caused by such attacks. Numerous works have been carried out, but none has been able to combine the need for mitigation, the obligation to provide continuity of service, and network constraints. Proposed countermeasures focus on authenticating legitimate traffic, filtering malicious traffic, making better use of interconnections between network equipment, or absorbing attacks with the help of available resources. In this thesis, we propose a damage control mechanism against volumetric Denial of Service attacks. Based on a novel attack signature and using Multiprotocol Label Switching (MPLS) network functions, we isolate malicious from legitimate traffic and apply constraint-based forwarding to the malicious traffic. The goal is to discard enough attack traffic to sustain network stability while preserving legitimate traffic. The mechanism is aware not only of the attack details but also of network resources, especially available bandwidth. Since network operators do not all have the same visibility into their networks, we also study the impact of operational constraints on the efficiency of a commonly recommended countermeasure, namely blacklist filtering. The operational criteria are the level of information about the attack and about the traffic inside the network. We then formulate scenarios with which operators can identify. We demonstrate that the blacklist generation algorithm should be carefully chosen to fit the operator's context while maximizing filtering efficiency.
Sareh, Said Adel Mounir. "Ubiquitous sensor network in the NGN environment." Thesis, Evry, Institut national des télécommunications, 2014. http://www.theses.fr/2014TELE0016/document.
Full textA Ubiquitous Sensor Network (USN) is a conceptual network built over existing physical networks. It makes use of sensed data and provides knowledge services to anyone, anywhere and at any time, where the information is generated using context awareness. Smart wearable devices and USNs are emerging rapidly, providing many reliable services that make people's lives easier. These very useful small end terminals and devices require a global communication substrate to provide comprehensive end-user services. In 2010, the ITU-T specified the requirements to support USN applications and services in the Next Generation Network (NGN) environment in order to exploit the advantages of the core network. One of the most promising markets for USN applications and services is e-Health, which provides continuous patient monitoring and enables great improvements in medical services. Vehicular Ad-hoc NETworks (VANETs), for their part, are an emerging technology providing intelligent communication between mobile vehicles. Integrating VANETs with USNs has great potential to improve road safety and traffic efficiency. Most VANET applications run in real time and are delay-sensitive, especially those related to safety and health. In this work, we propose to use the IP Multimedia Subsystem (IMS) as a service controller sub-layer in the USN environment, providing a global substrate for comprehensive end-to-end services. Moreover, we propose to integrate VANETs with USNs for richer applications and facilities that ease people's lives. We have begun studying the challenges on the road to achieving this goal.
Masri, Ali. "Multi-Network integration for an Intelligent Mobility." Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLV091/document.
Full textMultimodality requires the integration of heterogeneous transportation data and services in order to construct a broad view of the transportation network. Many new transportation services (e.g., ridesharing, car-sharing, bike-sharing) are emerging and gaining popularity, since in some cases they provide better trip solutions. However, these services are still isolated from existing multimodal solutions and are proposed as alternative plans without being truly integrated into the suggested plans. The concept of open data is on the rise and is being adopted by many companies, which publish their data sources on the web in order to gain visibility. The goal of this thesis is to use these data to enable multimodality by constructing an extended transportation network that links these new services to existing ones. The challenges we face arise mainly from the integration problem in both transportation services and transportation data.
Desmouceaux, Yoann. "Network-Layer Protocols for Data Center Scalability." Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLX011/document.
Full textWith the growing demand for computing resources, data center architectures are increasing both in scale and in complexity. In this context, this thesis takes a step back from traditional network approaches and shows that providing generic primitives directly within the network layer is an effective way to improve the efficiency of resource usage and to decrease network traffic and management overhead. Using the recently introduced network architectures Segment Routing (SR) and Bit-Indexed Explicit Replication (BIER), network-layer protocols are designed and analyzed to provide three high-level functions: (1) task mobility, (2) reliable content distribution and (3) load-balancing. First, task mobility is achieved by using SR to provide a zero-loss virtual machine migration service. This opens the opportunity to study how to orchestrate task placement and migration while aiming at (i) maximizing inter-task throughput, (ii) maximizing the number of newly placed tasks, and (iii) minimizing the number of tasks to be migrated. Second, reliable content distribution is achieved by using BIER to provide a reliable multicast protocol, in which retransmissions of lost packets are targeted at the precise set of destinations that missed each packet, thus incurring minimal traffic overhead. To decrease the load on the source link, this is then extended to enable retransmissions by local peers from the same group, with SR as a helper to find a suitable retransmission candidate. Third, load-balancing is achieved by using SR to distribute queries through several application candidates, each of which takes a local decision as to whether to accept a query, thus achieving better fairness compared to centralized approaches. The feasibility of a hardware implementation of this approach is investigated, and a solution using covert channels to transparently convey information to the load-balancer is implemented for a state-of-the-art programmable network card. Finally, the possibility of providing autoscaling as a network service is investigated: by letting queries traverse a fixed chain of applications using SR, autoscaling is triggered by the last instance, depending on its local state.
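The load-balancing function sketched in this abstract can be caricatured as a query walking an SR list of candidate servers, each making a purely local accept/reject decision, with the last candidate acting as a mandatory fallback so the query is never dropped. The toy sketch below only shows that control flow; the data structures are hypothetical and not the thesis implementation.

```python
def dispatch(servers):
    """Walk the SR candidate list; each server accepts based only on its
    own local load. The last candidate always accepts (fallback)."""
    for i, server in enumerate(servers):
        is_last = (i == len(servers) - 1)
        if is_last or server["load"] < server["capacity"]:
            server["load"] += 1          # local state update only
            return server["name"]
```

Because each decision uses only local state, no central dispatcher has to track the load of every instance, which is the fairness and scalability argument made in the abstract.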
La, Vinh Hoa. "Security monitoring for network protocols and applications." Thesis, Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLL006/document.
Full textComputer security, also known as cyber-security or IT security, remains a prominent topic in computer science research. Because cyber attacks are growing in both volume and sophistication, protecting information systems and networks has become a difficult task. The research community therefore pays ongoing attention to security along two main directions: (i) designing secure infrastructures with secure communication protocols, and (ii) monitoring/supervising systems or networks in order to find and remediate vulnerabilities. The former assists the latter by providing additional monitoring-supporting modules, while the latter verifies whether everything designed in the former functions correctly and securely, as well as detecting security violations. The latter is the main topic of this thesis. This dissertation presents a security monitoring framework that takes into consideration different types of audit data, including network traffic and application logs. We also propose novel approaches based on supervised machine learning to pre-process and analyze the input data. Our framework is validated in a wide range of case studies, including traditional TCP/IPv4 network monitoring (LAN, WAN, Internet monitoring), IoT/WSN using 6LoWPAN technology (IPv6), and other applications' logs. Last but not least, we provide a study of intrusion tolerance by design and propose an emulation-based approach to simultaneously detect and tolerate intrusions. In each case study, we describe how we collect the audit data, extract the relevant attributes, handle the received data and decode their security meaning. For these purposes, the Montimage Monitoring Tool (MMT) is used as the core of our approach. We also assess the solution's performance and its ability to work in larger-scale systems with more voluminous datasets.
Khadraoui, Younes. "Towards a seamless multi-technology access network." Thesis, Télécom Bretagne, 2016. http://www.theses.fr/2016TELB0411/document.
Full textMobile data traffic has been continuously increasing. To avoid saturation of the cellular network, operators need to use alternative access networks for offloading purposes. WiFi is a good solution, as the operator can take advantage of its unlicensed spectrum as well as the large number of deployed WiFi access points. In this thesis, we first provide a state of the art of the different coupling solutions between LTE and WiFi. We show that most solutions either cannot guarantee session continuity or duplicate the security procedures. This leads us to propose "very tight coupling" between LTE and WiFi. In this architecture, WiFi access points are connected to the LTE base stations, and the security mechanisms of LTE are reused to ensure fast access to WiFi. It allows dual connectivity and keeps control signalling in the LTE network, which makes optimized interface selection procedures possible. We study how very tight coupling can be implemented and how WiFi APs integrated in customer residential gateways can be connected to LTE base stations in a converged fixed/cellular network. We then mathematically evaluate the performance of different deployment schemes and compute how much capacity can be saved on the LTE network. Furthermore, as a proof of concept, we implement the solution on a platform with a real LTE radio interface based on the OpenAirInterface framework. We perform several experiments to find the configuration of the link-layer protocols that gives the highest bit rate. In particular, we show that using WiFi and LTE simultaneously does not always increase the bit rate.
Guellier, Antoine. "Strongly Private Communications in a Homogeneous Network." Thesis, CentraleSupélec, 2017. http://www.theses.fr/2017SUPL0001/document.
Full textWith the development of online communications in the past decades, new privacy concerns have emerged. A lot of research effort has focused on concealing relationships in Internet communications. However, most works do not prevent particular network actors from learning the original sender or the intended receiver of a communication. While this level of privacy is satisfactory for the common citizen, it is insufficient in contexts where individuals can be convicted for the mere sending of documents to a third party. This is the case for so-called whistle-blowers, who take personal risks to alert the public to anti-democratic or illegal actions performed by large organisations. In this thesis, we consider a stronger notion of anonymity for peer-to-peer communications on the Internet, and aim at concealing the very fact that users take part in communications. To this end, we deviate from the traditional client-server architecture endorsed by most existing anonymous networks in favor of a homogeneous, fully distributed architecture in which every user also acts as a relay server, allowing it to conceal its own traffic within the traffic it relays for others. In this setting, we design an Internet overlay inspired by previous works that also proposes new privacy-enhancing mechanisms, such as the use of relationship pseudonyms for managing identities. We formally prove, using state-of-the-art cryptographic proof frameworks, that this protocol achieves our privacy goals. Furthermore, a practical study of the protocol shows that it introduces high latency in the delivery of messages, but ensures a high anonymity level even for networks of small size.
Lin, Trista Shuenying. "Smart parking : Network, infrastructure and urban service." Thesis, Lyon, INSA, 2015. http://www.theses.fr/2015ISAL0138/document.
Full textSmart parking, which allows drivers to access parking information on their smartphones, has been proposed to ease drivers' pain. We first highlight how parking information can be collected, introducing a multi-hop sensor network architecture and describing how the network is formed. We then introduce traffic intensity models by looking at vehicle arrival and departure probabilities, which follow heavy-tailed distributions. We study deployment strategies for wireless on-street parking sensor layouts, defining a multiple-objective problem and solving it with two real street parking maps. In turn, we present a publish-subscribe service system to provide relevant parking information to drivers. We illustrate the system with a vehicular network and point out the importance of the content and context of a driver's message. To evaluate resilience, we propose an extended publish-subscribe model and evaluate it under different unforeseen circumstances. Our work is based on the premise that large-scale parking sensors are deployed in the city, and we look at the whole picture of urban services from the viewpoint of the municipality. As such, we shed light on two main topics: information collection through sensor deployment and an extended version of the publish-subscribe messaging paradigm. Our work provides network-related guidelines for a city before launching smart parking or any similar real-time urban service, and it also offers a meaningful evaluation platform for testing more realistic datasets, such as real vehicle traces or network traffic.
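The heavy-tailed arrival model mentioned in this abstract can be made concrete with a toy event simulation: inter-arrival times are drawn from a Pareto distribution (a standard heavy-tailed choice, sampled by inverse transform), and a vehicle parks only if a spot is free. Everything here, including the exponential parking duration, is an illustrative assumption rather than the thesis model.

```python
import random

def pareto_interarrival(x_min=1.0, alpha=1.5, rng=random):
    """Sample a heavy-tailed (Pareto) inter-arrival time by inverse transform:
    F^{-1}(u) = x_min / (1 - u)^(1/alpha)."""
    u = rng.random()
    return x_min / (1.0 - u) ** (1.0 / alpha)

def simulate_occupancy(n_spots, n_arrivals, mean_stay=10.0, seed=0):
    """Toy simulation: vehicles arrive with Pareto inter-arrival times and
    park for an exponential duration if a spot is free, else they leave.
    Returns the fraction of arrivals that found a spot."""
    rng = random.Random(seed)
    t, departures, served = 0.0, [], 0
    for _ in range(n_arrivals):
        t += pareto_interarrival(rng=rng)
        departures = [d for d in departures if d > t]   # free expired spots
        if len(departures) < n_spots:
            departures.append(t + rng.expovariate(1.0 / mean_stay))
            served += 1
    return served / n_arrivals
```

Sweeping `n_spots` against the served fraction is the kind of trade-off a deployment study such as the one described above has to quantify, here in miniature.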
Ravaioli, Riccardo. "Inférence active de la neutralité des réseaux." Thesis, Nice, 2016. http://www.theses.fr/2016NICE4044/document.
Full textIn the last decade, some ISPs have been reported to discriminate against specific user traffic, especially traffic generated by bandwidth-hungry applications (e.g., peer-to-peer, video streaming) or competing services (e.g., Voice-over-IP). Network neutrality, a design principle according to which a network should treat all incoming packets equally, has been widely debated ever since. In this thesis we present ChkDiff, a novel tool for the detection of traffic differentiation at the Internet access. In contrast to existing work, our method is agnostic to both the applications being tested and the shaping mechanisms deployed by an ISP. The experiment comprises two parts, in which we check for differentiation separately on upstream and downstream traffic that we previously dump directly from the user. In the upstream direction, ChkDiff replays the user's outgoing traffic with a modified TTL value in order to check for differentiation at routers within the first few hops from the user. By comparing the resulting delays and losses of flows that traversed the same routers, and analyzing the behaviour on the immediate router topology spawning from the user end point, we manage to detect instances of traffic shaping and attempt to localize shapers. Our study of the responsiveness of routers to TTL-limited probes consolidates our choice of measurements in the upstream experiment. In the downstream experiment, we replay the user's incoming traffic from a measurement server and analyze per-flow one-way delays and losses, while taking into account the possibility of multiple paths between the two endpoints. Throughout the chapters of this thesis, we provide a detailed description of our methodology and a validation of our tool.
Shaun, Ferdous Jahan. "Multi-Parameters Miniature Sensor for Water Network Management." Thesis, Paris Est, 2018. http://www.theses.fr/2018PESC1138/document.
Full textWater is a vital element for every living being on Earth. Like many other dwindling natural resources, clean water is under strong pressure because of human activity and the rapid growth of the global population. The situation is so critical that clean water has been identified as one of the seventeen Sustainable Development Goals of the United Nations. Under these conditions, sustainable management of water resources is necessary, and a smart solution for monitoring water networks can be very helpful for this purpose. However, commercially available solutions lack the compactness, self-powering capabilities and cost competitiveness necessary to enable large-scale rollout over water networks. The present thesis takes place in the framework of a European research project, PROTEUS, which addresses these problems by designing and fabricating a multi-parameter sensor chip (MPSC) for water resource monitoring. The MPSC enables the measurement of nine physical and chemical parameters, is reconfigurable and is self-powered. The present thesis addresses more precisely the physical sensors: their design, optimization and co-integration on the MPSC. The developed device exhibits state-of-the-art or better performance with regard to its redundancy, turn-down ratio and power consumption. The manuscript is split into two main parts, Part I and Part II. Part I deals with the non-thermal aspects of the MPSC, for instance the pressure and conductivity sensors, as well as the fabrication process of the whole device (Chapters 1 and 2). The background of environmental monitoring is presented in Chapter 1, along with a review of the state of the art. Chapter 2 describes the fabrication methods of the MPSC; preliminary characterization results of the non-thermal sensors are also reported in this chapter. Chapters 3 and 4, included in Part II, deal with the thermal sensors (temperature and flow rate). Chapter 3 describes the many possible uses of electrical resistances for sensing applications. Finally, in Chapter 4, we focus on flow-rate sensors before concluding and making a few suggestions for future work.
Bouzembrak, Yamine. "Multi-criteria Supply Chain Network Design under uncertainty." Thesis, Artois, 2011. http://www.theses.fr/2011ARTO0211/document.
Full textThis thesis contributes to the debate on how uncertainty and concepts of sustainable development can be incorporated into modern supply chain networks, and it focuses on issues associated with the design of multi-criteria supply chain networks under uncertainty. First, we review the current state of the art of supply chain network design approaches and resolution methods. Second, we propose a new methodology for multi-criteria Supply Chain Network Design (SCND), as well as its application to a real Supply Chain Network (SCN), in order to satisfy customer demand and respect environmental, social, legislative and economic requirements. The methodology consists of two steps: in the first, we use a Geographic Information System (GIS) and the Analytic Hierarchy Process (AHP) to build the model; in the second, we establish the optimal supply chain network using a Mixed Integer Linear Programming (MILP) model. Third, we extend the MILP to a multi-objective optimization model that captures a compromise between total cost and environmental influence, using a goal programming approach that seeks to reach the goals set by the decision maker. We then develop a novel heuristic solution method based on a decomposition technique to solve large-scale supply chain network design problems that we failed to solve using exact methods. The heuristic is tested on real case instances, and numerical comparisons show that it yields high-quality solutions in very limited CPU time. Finally, we extend the MILP model to the case where customer demands are uncertain, using a two-stage stochastic programming approach to model the supply chain network under demand uncertainty. We then address uncertainty in all SC parameters (opening costs, production costs, storage costs and customer demands) with a possibilistic linear programming approach, and we validate both approaches in a large application case.
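At its core, the facility-location flavour of an SCND model decides which sites to open and how to serve customers at minimum total cost. As a tiny stand-in for the MILP solver, the sketch below brute-forces an uncapacitated facility location instance; it is only meant to make the objective structure (opening costs plus cheapest-open-facility shipping costs) concrete, with illustrative names that are not taken from the thesis model.

```python
from itertools import combinations

def solve_ufl(open_costs, ship_costs):
    """Brute-force uncapacitated facility location: minimize the sum of
    opening costs over the open subset plus, for each customer, the cost
    of serving it from its cheapest open facility.
    ship_costs[i][j] = cost of serving customer j from facility i."""
    n_fac, n_cust = len(open_costs), len(ship_costs[0])
    best = (float("inf"), None)
    for r in range(1, n_fac + 1):
        for subset in combinations(range(n_fac), r):
            cost = sum(open_costs[i] for i in subset)
            cost += sum(min(ship_costs[i][j] for i in subset)
                        for j in range(n_cust))
            if cost < best[0]:
                best = (cost, subset)
    return best
```

A MILP formulation expresses the same objective with binary open/assign variables and scales far beyond what enumeration can handle, which is why the thesis resorts to exact solvers and, for large instances, a decomposition heuristic.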
Millereau, Pierre Michel. "Large Strain and Fracture of Multiple Network Elastomers." Thesis, Paris 6, 2017. http://www.theses.fr/2017PA066082/document.
Full textWe systematically investigated the mechanical and fracture properties of multiple network elastomers synthesized by successive swelling/polymerization steps, inspired by the molecular architecture of Gong's double network gels. A more versatile synthesis method was used to vary continuously the isotropic degree of prestretching λ0 of the first network, resulting in a wider range of mechanical behaviours, where λ0 controls the Young's modulus at small strain and the strain hardening at large strain. If the first network is diluted enough (<10%), molecular bond breakage occurs in this prestretched network at high strain while avoiding sample failure. The degree of dilution controls the amount of damage and therefore the slope of the stress-strain curve. For the most diluted systems (<3%), a yield stress and a necking phenomenon were observed. Changing the degree of crosslinking of the first network or the monomers used led to qualitatively the same mechanical behaviour. The fracture energy Γ was shown to be an increasing function of λ0; however, different regimes could be distinguished, with macroscopic fracture occurring before or after bulk damage was detected. Visualisation techniques such as digital image correlation and embedded mechanoluminescent molecules were used to map the damage zone in front of the crack tip, whose size increased with λ0. Finally, the toughening mechanism of multiple network elastomers could be understood in a nearly quantitative way within the framework of Brown's model of the fracture of double network gels.
Rubanova, Natalia. "MasterPATH : network analysis of functional genomics screening data." Thesis, Sorbonne Paris Cité, 2018. http://www.theses.fr/2018USPCC109/document.
Full textIn this work we developed a new exploratory network analysis method, MasterPATH, that works on an integrated network (comprising protein-protein, transcriptional, miRNA-mRNA and metabolic interactions) and aims at uncovering potential members of molecular pathways important for a given phenotype, using hit lists from "omics" experiments. The method extracts a subnetwork built from the shortest paths of four different types (with only protein-protein interactions, with at least one transcriptional interaction, with at least one miRNA-mRNA interaction, and with at least one metabolic interaction) between hit genes and so-called "final implementers" (biological components involved in the molecular events responsible for the final phenotypic realization), if these are known, or between hit genes otherwise. The method calculates a centrality score for each node and each path in the subnetwork, defined as the number of shortest paths found in the previous step that pass through that node or path. The statistical significance of each centrality score is then assessed by comparing it with the centrality scores in subnetworks built from the shortest paths for randomly sampled hit lists. We hypothesize that the nodes and paths with statistically significant centrality scores can be considered putative members of the molecular pathways leading to the studied phenotype. In case experimental scores and p-values are available for a large number of nodes in the network, the method can also calculate experiment-based scores for paths (as the average of the experimental scores of the nodes in the path) and experiment-based p-values (by aggregating the p-values of the nodes in the path using Fisher's combined probability test and a permutation approach). The method is illustrated by analyzing the results of a miRNA loss-of-function screen and transcriptomic profiling of terminal muscle differentiation, and of a 'druggable' loss-of-function screen of the DNA repair process. The Java source code is available on GitHub at https://github.com/daggoo/masterPATH.
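Fisher's combined probability test, mentioned above for deriving experiment-based p-values of paths, aggregates k independent p-values into the statistic X = -2 Σ ln p_i, which follows a chi-squared distribution with 2k degrees of freedom under the null. Since that df is always even, the tail probability has a closed form, so a sketch needs only the standard library. This is the generic test, not MasterPATH's own Java code.

```python
import math

def fisher_combined_pvalue(pvalues):
    """Fisher's method for combining k independent p-values.
    X = -2 * sum(ln p_i) ~ chi-squared with 2k degrees of freedom;
    for even df = 2k, P(X >= x) = exp(-x/2) * sum_{i<k} (x/2)^i / i!."""
    k = len(pvalues)
    half = -sum(math.log(p) for p in pvalues)   # x/2
    return math.exp(-half) * sum(half ** i / math.factorial(i)
                                 for i in range(k))
```

With a single p-value the combined value reduces to that p-value, and several moderately small p-values combine into a much smaller one, which is what makes the test useful for scoring whole paths.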
Belabed, Dallal. "Design and Evaluation of Cloud Network Optimization Algorithms." Thesis, Paris 6, 2015. http://www.theses.fr/2015PA066149/document.
Full textThis dissertation aims to give a deep understanding of the impact of the new cloud paradigms on the goals of traffic engineering, energy efficiency and fairness in the throughput offered to endpoints, and of the new opportunities offered by virtualized network functions. In the first part of the dissertation, we investigate the impact of these novel features on Data Center Network (DCN) optimization, providing a comprehensive formal mathematical formulation of virtual machine placement and a metaheuristic for its resolution. We show in particular how virtual bridging and multipath forwarding impact common DCN optimization goals, traffic engineering and energy efficiency, and assess their utility in various cases across four different DCN topologies. In the second part of the dissertation, we move to better understanding the impact of novel flattened and modular DCN architectures on congestion control protocols, and vice versa. Indeed, one of the major concerns in congestion control being fairness in the offered throughput, the impact of the additional path diversity brought by the novel DCN architectures and protocols on the throughput of individual endpoints and aggregation points is unclear. Finally, in the third part, we present preliminary work on the new Network Function Virtualization (NFV) paradigm. We provide a linear programming formulation of the virtual network function chain routing problem in a carrier network. The goal of our formulation is to find the best route in a carrier network where customer demands have to pass through a number of NFV nodes, taking into consideration the unique constraints set by NFV.