Dissertations / Theses on the topic 'Deterministic networks'

Consult the top 50 dissertations / theses for your research on the topic 'Deterministic networks.'

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

Gibson, David James. "Deterministic SpaceWire networks." Thesis, University of Dundee, 2017. https://discovery.dundee.ac.uk/en/studentTheses/86f0873d-7eea-4377-960b-249c9171574e.

Full text
Abstract:
SpaceWire-D is an extension to the SpaceWire protocol that adds deterministic capabilities over existing equipment. It does this by using time-division multiplexing, controlled by the sequential broadcasting of time-codes by a network manager. A virtual bus abstraction is then used to divide the network architecture into segments in which all traffic is controlled by a single Remote Memory Access Protocol (RMAP) transaction initiator. Virtual buses are then allocated a number of time-slots in which they are allowed to operate, forming the SpaceWire-D schedule. This research starts by contributing an efficient embedded SpaceWire-D software layer, running on top of the RTEMS real-time operating system, for use in the initiators of a SpaceWire-D network. Next, the SpaceWire-D software layer was used in two LEON2-FT processor boards in combination with multiple other RMAP target boards, routers, a network manager, and a host PC running a suite of applications to create a SpaceWire-D Demonstrator. The SpaceWire-D software layer and SpaceWire-D Demonstrator were used to verify and demonstrate the SpaceWire-D protocol during the ESA SpaceWire-D project and resulted in multiple deliverables to ESA. Finally, this research contributes a novel SpaceWire-D scheduling strategy using a combination of path selection and transaction allocation algorithms. This strategy allows for a SpaceWire-D network to be defined as a list of periodic, aperiodic and payload data bandwidth requirements and outputs a list of paths and an allocation of transactions to time-slots which satisfy the networking requirements of a mission.
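The scheduling idea summarized above, virtual buses granted specific time-slots within a repeating schedule, can be pictured with a toy allocator. The sketch below is a minimal greedy assignment in Python; the 64-slot epoch matches SpaceWire's 6-bit time-codes, but the bus names, slot requirements and allocation rule are illustrative assumptions rather than the thesis's path-selection and transaction-allocation algorithms.

```python
# Toy allocation of virtual buses to time-slots in a repeating 64-slot schedule.
# Illustrative only: not the SpaceWire-D scheduling strategy described above.

SCHEDULE_LENGTH = 64  # SpaceWire time-codes carry a 6-bit counter, giving 64 slots


def allocate_slots(bus_requirements):
    """bus_requirements: dict mapping virtual-bus name -> number of slots needed.
    Returns a dict mapping bus name -> list of assigned slot indices."""
    schedule = [None] * SCHEDULE_LENGTH
    allocation = {}
    # Serve the most demanding virtual buses first, spreading their slots evenly.
    for bus, needed in sorted(bus_requirements.items(),
                              key=lambda kv: kv[1], reverse=True):
        free = [i for i, owner in enumerate(schedule) if owner is None]
        if needed > len(free):
            raise ValueError(f"not enough free slots for {bus}")
        step = len(free) / needed
        chosen = [free[int(k * step)] for k in range(needed)]
        for slot in chosen:
            schedule[slot] = bus
        allocation[bus] = chosen
    return allocation


if __name__ == "__main__":
    # Hypothetical housekeeping and payload virtual buses
    print(allocate_slots({"vbus-housekeeping": 4, "vbus-payload": 16}))
```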
2

Sansavini, Giovanni. "Network Modeling Stochastic and Deterministic Approaches." Diss., Virginia Tech, 2010. http://hdl.handle.net/10919/28857.

Full text
Abstract:
Stochastic and deterministic approaches for modeling complex networks are presented. The methodology combines analysis of the structure formed by the interconnections among the elements of a network with an assessment of the vulnerability towards the propagation of cascading failures. The goal is to understand the mutual interplay between the structure of the network connections and the propagation of cascading failures. Two fundamental issues related to the optimal design and operation of complex networks are addressed. The first concerns the impact that cascading failures have on networks due to the connectivity pattern linking their components. If the state of load on the network components is high, the risk of cascade spreading becomes significant. In this case, the reduction of connectivity efficiency needed to prevent the propagation of failures affecting the entire system is quantified. The second issue concerns the realization of the most efficient connectivity in a network that minimizes the propagation of cascading failures. It is found that a system that routinely approaches the critical load for the onset of cascading failures during its operation should have a larger efficiency value. This allows for a smoother transition to the cascade region and for a reasonable reaction time to counteract the onset of significant cascading failures. The interplay between the structure of the network connections and the propagation of cascading failures is assessed also in interdependent networks. In these systems, the linking among several network infrastructures is necessary for their optimal and economical operation. Yet, the interdependencies introduce weaknesses due to the fact that failures may cascade from one system to other interdependent systems, possibly affecting their overall functioning. Inspired by the global efficiency, a measure of the communication capabilities among interdependent systems, i.e. the interdependency efficiency, is defined. The relations between the structural parameters, i.e. the system links and the interdependency links, and the interdependency efficiency are also quantified, as well as the relations between the structural parameters and the vulnerability towards the propagation of cascading failures. Building on this knowledge, the optimal interdependency connectivity is identified. Similar to the spreading of failures, the formation of a giant component is a critical phenomenon emerging as a result of the connectivity pattern in a network. This structural transition is exploited to identify the formation of macrometastases in the developed model for metastatic colonization in tumor growth. The methods of network theory prove particularly suitable to reproduce the local interactions among tumor cells that lead to the emergent global behavior of the metastasis as a community. This model for intercellular sensing reproduces the stepwise behavior characteristic of metastatic colonization. Moreover, it prompts the consideration of a curative intervention that hinders intercellular communication, even in the presence of a significant tumor cell population.
Ph. D.
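As a rough picture of the load-redistribution mechanism described above (not the author's model), the sketch below runs a Motter-Lai-style cascade on a small random graph: each node's initial load is its betweenness centrality, its capacity is (1 + alpha) times that load, and overloaded nodes are removed until the cascade stops. The graph, the tolerance parameter alpha and the choice of betweenness as load are illustrative assumptions.

```python
import networkx as nx


def cascade_fraction(G, alpha=0.2):
    """Toy Motter-Lai cascade: fail the most loaded node, then repeatedly remove
    every node whose recomputed load exceeds its fixed capacity.
    Returns the fraction of nodes lost by the end of the cascade."""
    load0 = nx.betweenness_centrality(G, normalized=False)   # initial loads
    capacity = {v: (1 + alpha) * load0[v] for v in G}         # fixed capacities
    H = G.copy()
    H.remove_node(max(load0, key=load0.get))                  # triggering failure
    overloaded = True
    while overloaded and H.number_of_nodes() > 0:
        load = nx.betweenness_centrality(H, normalized=False)
        failed = [v for v in H if load[v] > capacity[v]]
        overloaded = bool(failed)
        H.remove_nodes_from(failed)
    return 1 - H.number_of_nodes() / G.number_of_nodes()


if __name__ == "__main__":
    G = nx.erdos_renyi_graph(100, 0.05, seed=42)
    for alpha in (0.05, 0.2, 0.5):
        print(f"alpha={alpha}: fraction of nodes lost = {cascade_fraction(G, alpha):.2f}")
```

Larger tolerance values typically confine the cascade, which is the qualitative trade-off between load margins and connectivity efficiency that the abstract quantifies.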
3

Schrammar, Nicolas. "On Deterministic Models for Wireless Networks." Licentiate thesis, KTH, Kommunikationsteori, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-32116.

Full text
Abstract:
Wireless communication is commonly modeled as a stochastic system. This is justified by the fact that the wireless channel incorporates a number of stochastic effects including fading, interference and thermal noise. One example of a stochastic model is the additive white Gaussian noise (AWGN) model, which has been successfully used to analyze the capacity of the point-to-point channel and some multi-terminal networks. However, the AWGN capacity of most networks is still an open problem. This includes small examples like the relay channel, which consists of just three terminals. In order to progress, it was suggested to investigate deterministic channel models as an approximation of the AWGN model. The objective is to find a deterministic model which is accessible to capacity analysis. Furthermore, this analysis should provide insights on the capacity of the AWGN model. In this thesis we consider two deterministic models, the linear finite-field model (LFFM) by Avestimehr et al. and the discrete superposition model (DSM) by Anand and Kumar. It has been shown that the capacity of the DSM is a constant-gap approximation of the AWGN capacity for some networks including the parallel relay network (PRN). We find upper and lower bounds on the DSM capacity of the point-to-point channel, the multiple-access channel, the broadcast channel and the PRN. Our bounds are within a constant gap; hence, they yield a constant-gap approximation to the AWGN capacity of the PRN. We also show how the LFFM can be utilized to design transmission strategies for AWGN relay networks. A transmission strategy in the LFFM can be translated into a transmission strategy in the AWGN model if it fulfills certain constraints. We consider two sets of constraints, and we show that in both cases the rate in the AWGN model is at most a constant below the rate in the corresponding LFFM.
QC 20110407
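For readers unfamiliar with the linear finite-field model of Avestimehr et al. mentioned above, its usual formulation from the literature (not specific to this thesis) represents each transmit signal as a length-q binary vector and each link by an integer gain n_{ij}, so that the receiver sees the bitwise XOR of down-shifted inputs and only the top n_{ij} bits of each input survive:

```latex
% Linear finite-field (ADT) deterministic model: signal received at node j
y_j \;=\; \bigoplus_{i} S^{\,q - n_{ij}}\, x_i ,
\qquad x_i \in \mathbb{F}_2^{\,q},
\qquad
S = \begin{pmatrix}
0 & 0 & \cdots & 0\\
1 & 0 & \cdots & 0\\
\vdots & \ddots & \ddots & \vdots\\
0 & \cdots & 1 & 0
\end{pmatrix},
```

where q = max_{ij} n_{ij}, S is the q x q down-shift matrix, and n_{ij} is roughly the integer part of the logarithm of the link SNR (the exact constant depends on whether real or complex signaling is assumed).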
4

Schrammar, Nicolas. "On Deterministic Models for Gaussian Networks." Doctoral thesis, KTH, Kommunikationsteori, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-122275.

Full text
Abstract:
In this thesis we study wireless networks modeled by the additive white Gaussian noise (AWGN) model. The AWGN capacity region of most network topologies is unknown, which means that the optimal transmission scheme is unknown as well. This motivates the search for capacity approximations and for approximately optimal schemes. Deterministic channel models have been proposed as means to approximate the AWGN model within a constant additive gap. We consider two particular models, the linear finite-field model (LFFM) and the discrete superposition model (DSM). In the first part of the thesis we utilize the LFFM to design transmission schemes for layered relay networks in the AWGN model. We show that if a transmission scheme in the LFFM satisfies a certain set of coordination constraints, it can be translated to the AWGN model. A form of hierarchical modulation is used to build multiple transmission layers. By analyzing the performance in the AWGN model, we show that the AWGN rate is at most a constant gap below the LFFM rate. In the second part, we use the DSM to approximate the capacity and secrecy capacity of AWGN networks. First, we prove that the DSM capacity of some topologies is within a constant gap to the corresponding AWGN capacity. The topologies are given by the partially cognitive interference channel (PCIFC), a class of multiple-unicast networks, and a class of relay networks with secrecy constraints, respectively. Then, we approximate the capacity in the DSM. We bound the capacity of the point-to-point channel, the capacity regions of the multiple-access channel and the broadcast channel, as well as the secrecy capacity of parallel relay networks (PRN) with an orthogonal eavesdropper and conventional relays. Furthermore, we find inner bounds on the capacity region of the PCIFC. This approach yields achievable rate regions for the PCIFC in the AWGN model and the AWGN secrecy capacity of the PRN within a constant gap.

QC 20130516

5

SOUZA, MARCELO GOMES DE. "DETERMINISTIC ACOUSTIC SEISMIC INVERSION USING ARTIFICIAL NEURAL NETWORKS." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2018. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=34647@1.

Full text
Abstract:
Seismic inversion is the process of transforming reflection seismic data into quantitative values of the petro-elastic properties of rocks. These values, in turn, can be correlated with other properties, helping geoscientists to make a better interpretation that results in a good characterization of an oil reservoir. There are several traditional algorithms for seismic inversion. In this work we revisit Colored Inversion (relative impedance), Recursive Inversion, Band-Limited Inversion and Model-Based Inversion. All four of these algorithms are based on digital signal processing and optimization. The present work seeks to reproduce the results of these algorithms through a simple and efficient methodology based on neural networks and pseudo-impedance. This work presents an implementation of the algorithms proposed in the methodology and tests its validity on a public seismic dataset that has an inversion produced by the traditional methods.
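As background for the Recursive Inversion mentioned above, the classical relation that converts a reflectivity series back into acoustic impedance is the standard recursion below (a textbook identity, not the thesis's neural-network method):

```latex
% Reflection coefficient between layers i and i+1, and the resulting impedance recursion
r_i = \frac{Z_{i+1} - Z_i}{Z_{i+1} + Z_i}
\quad\Longrightarrow\quad
Z_{i+1} = Z_i\,\frac{1 + r_i}{1 - r_i}
\quad\Longrightarrow\quad
Z_N = Z_1 \prod_{i=1}^{N-1} \frac{1 + r_i}{1 - r_i}.
```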
6

Thubert, Pascal. "Converging over deterministic networks for an Industrial Internet." Thesis, Ecole nationale supérieure Mines-Télécom Atlantique Bretagne Pays de la Loire, 2017. http://www.theses.fr/2017IMTA0011/document.

Full text
Abstract:
Based on time, resource reservation, and policy enforcement by distributed shapers, Deterministic Networking provides the capability to carry specified unicast or multicast data streams for real-time applications with extremely low data loss rates and bounded latency, so as to support time-sensitive and mission-critical applications on a converged enterprise infrastructure. As of today, deterministic Operational Technology (OT) networks are purpose-built, mostly proprietary, typically using serial point-to-point wires, and operated as physically separate networks, which multiplies the complexity of the physical layout and the operational (OPEX) and capital (CAPEX) expenditures, while preventing the agile reuse of the compute and network resources. Bringing determinism into Information Technology (IT) networks will enable the emulation of those legacy serial wires over IT fabrics and the convergence of mission-specific OT networks onto IP. The IT/OT convergence onto deterministic networks will in turn enable new process optimization by introducing IT capabilities, such as Big Data and network functions virtualization (NFV), improving OT processes while further reducing the associated OPEX. Deterministic networking solutions and application use cases require capabilities of the converged network that are beyond existing QoS mechanisms. Key attributes of Deterministic Networking are: time synchronization on all the nodes, often including source and destination; the centralized computation of network-wide deterministic paths; new traffic shapers within and at the edge to protect the network; and hardware for scheduled access to the medium. Through multiple papers, standards contributions and intellectual property publications, the presented work pushes the limits of wireless industrial standards by providing: 1. complex Track computation based on a novel ARC technology; 2. complex Track signaling and traceability, extending the IETF BIER-TE technology; 3. replication, retry and duplicate elimination along the Track; 4. a scheduled runtime enabling highly reliable delivery within bounded time; 5. a mix of IPv6 best-effort traffic and deterministic flows within a shared 6TiSCH mesh structure. This manuscript presents enhancements to existing low-power wireless networks (LoWPAN) such as Zigbee, WirelessHART and ISA100.11a to provide those new benefits to wireless OT networks. The approach was implemented on open-source software and hardware, and evaluated against classical IEEE Std. 802.15.4 and 802.15.4 TSCH radio meshes. This manuscript presents and discusses the experimental results; the experiments show that the proposed technology can guarantee continuously high levels of timely delivery in the face of adverse events such as device loss and transient radio link outages.
7

Morrison, Erin Seidler, and Erin Seidler Morrison. "Exploring the Deterministic Landscape of Evolution: An Example with Carotenoid Diversification in Birds." Diss., The University of Arizona, 2017. http://hdl.handle.net/10150/624290.

Full text
Abstract:
Establishing metrics of diversification can calibrate the observed scope of diversity within a lineage and the potential for further phenotypic diversification. There are two potential ways to calibrate differences between phenotypes. The first metric is based on the structure of the network of direct and indirect connections between elements, such as the genes, proteins, enzymes and metabolites that underlie a phenotype. The second metric characterizes the dynamic properties that determine the strength of the interactions among elements, and influence which elements are the most likely to interact. Determining how the connectivity and strength of interactions between elements lead to specific phenotypic variations provides insight into the tempo and mode of observed evolutionary changes. In this dissertation, I proposed and tested hypotheses for how the structure and metabolic flux of a biochemical network delineate patterns of phenotypic variation. I first examined the role of structural properties in shaping observed patterns of carotenoid diversification in avian plumage. I found that the diversification of species-specific carotenoid networks was predictable from the connectivity of the underlying metabolic network. The compounds with the most enzymatic reactions, that were part of the greatest number of distinct pathways, were more conserved across species’ networks than compounds associated with the fewest enzymatic reactions. These results established that compounds with the greatest connectivity act as hotspots for the diversification of pathways between species. Next, I investigated how dynamic properties of biochemical networks influence patterns of phenotypic variation in the concentration and occurrence of compounds. Specifically, I examined if the rate of compound production, known as metabolic flux, is coordinated among compounds in relation to their structural properties. I developed predictions for how different distributions of flux could cause distinct diversification patterns in the concentrations and presence of compounds in a biochemical network. I then tested the effect of metabolic network structure on the concentrations of carotenoids in the plumage of male house finches (Haemorhous mexicanus) from the same population. I assessed whether the structure of a network corresponds to a specific distribution of flux among compounds, or if flux is independent of network structure. I found that flux coevolves with network structure; concentrations of metabolically derived compounds depended on the number of reactions per compound. There were strong correlations between compound concentrations within a network structure, and the strengths of these correlations varied among structures. These findings suggest that changes in network structure, and not independent changes in flux, influence local adaptations in the concentrations of compounds. Lastly, the influence of carotenoid network structure in the evolutionary diversification of compounds across species of birds depends on how the structure of the network itself evolves. To test whether the carotenoid metabolic network structure evolves in birds, I examined the patterns of carotenoid co-occurrence across ancestral and extant species. I found that the same groups of compounds are always gained or lost together even as lineages diverge further from each other. 
These findings establish that the diversification of carotenoids in birds is constrained by the structure of an ancestral network, and does not evolve independently within a lineage. Taken together, the results of this dissertation establish that local adaptations and the evolutionary diversification of carotenoid metabolism are qualitatively predictable from the structure of an ancestral enzymatic network, and this suggests there is significant structural determinism in phenotypic evolution.
8

Medlej, Sara, and Sara Medlej. "Scalable Trajectory Approach for ensuring deterministic guarantees in large networks." Phd thesis, Université Paris Sud - Paris XI, 2013. http://tel.archives-ouvertes.fr/tel-00998249.

Full text
Abstract:
In critical real-time systems, any faulty behavior may endanger lives. Hence, system verification and validation is essential before their deployment. In fact, safety authorities require deterministic guarantees. In this thesis, we are interested in offering temporal guarantees; in particular we need to prove that the end-to-end response time of every flow present in the network is bounded. This subject has been addressed for many years and several approaches have been developed. After a brief comparison of the existing approaches, the Trajectory Approach emerged as a good candidate due to the tightness of the bound it offers. This method uses results established by scheduling theory to derive an upper bound. The reasons leading to a pessimistic upper bound are investigated. Moreover, since the method must be applied to large networks, it is important to be able to give results in an acceptable time frame. Hence, a study of the method's scalability was carried out. Analysis shows that the complexity of the computation is due to recursive and iterative processes. As the number of flows and switches increases, the total runtime required to compute the upper bound of every flow present in the network under study grows rapidly. While based on the concept of the Trajectory Approach, we propose to compute an upper bound in a reduced time frame and without significant loss in its precision. The result is called the Scalable Trajectory Approach. After applying it to a network, simulation results show that the total runtime was reduced from several days to a dozen seconds.
9

Neely, Michael J. (Michael James) 1975. "Queue occupancy in single-server deterministic service time tree networks." Thesis, Massachusetts Institute of Technology, 1999. http://hdl.handle.net/1721.1/9318.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1999.
Includes bibliographical references (p. 167).
Tree networks of single server, deterministic service time queues are often used as models for packet flow in systems with ATM traffic. In this thesis, we present methods of analyzing packet occupancy in these systems. We develop general theorems which enable the analysis of individual nodes within a multi-stage system to be reduced to the analysis of a simpler single-stage or 2-stage equivalent model. In these theorems, we make very few assumptions about the nature of the exogenous input processes themselves, and hence our results apply to a variety of input sources. In particular, we treat three input source cases: bursty on/off inputs, periodic continuous bit rate (CBR) inputs, and discrete time Generalized Independent (GI) inputs. For each of these input sources, we derive mean queue lengths for individual nodes and aggregate occupancy distribution functions for multi-stage systems. For GI-type inputs (which includes memoryless inputs), we derive explicit expressions for the means and variances of packet occupancy in any node of a multi-stage, deterministic service time tree network. We also create a general definition of a "distributable input," which includes any collection of M sources which run independently and are identically distributed (iid) according to some arbitrary type of arrival process (in particular, this includes periodic CBR sources). We demonstrate that the expected occupancy of a single-stage system is a convex, monotonic function of the distributable input loading. Furthermore, the expected occupancy of any node within a multi-stage tree network is a concave function of the multiple exogenous input loadings at the upstream nodes.
by Michael J. Neely.
S.M.
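A useful reference point for these deterministic-service-time queues is the isolated M/D/1 node, whose mean occupancy follows from the Pollaczek-Khinchine formula. This classical single-queue result is quoted only for orientation; the thesis's contribution is the reduction of multi-stage tree networks to such simpler equivalents:

```latex
% M/D/1 queue: Poisson arrivals at rate \lambda, deterministic service time D, load \rho = \lambda D < 1
\mathbb{E}[N] \;=\; \rho \;+\; \frac{\rho^{2}}{2\,(1-\rho)} .
```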
10

Medlej, Sara. "Scalable Trajectory Approach for ensuring deterministic guarantees in large networks." Thesis, Paris 11, 2013. http://www.theses.fr/2013PA112168/document.

Full text
Abstract:
In critical real-time systems, any faulty behavior may endanger lives. Hence, system verification and validation is essential before their deployment. In fact, safety authorities require deterministic guarantees. In this thesis, we are interested in offering temporal guarantees; in particular we need to prove that the end-to-end response time of every flow present in the network is bounded. This subject has been addressed for many years and several approaches have been developed. After a brief comparison of the existing approaches, the Trajectory Approach emerged as a good candidate due to the tightness of the bound it offers. This method uses results established by scheduling theory to derive an upper bound. The reasons leading to a pessimistic upper bound are investigated. Moreover, since the method must be applied to large networks, it is important to be able to give results in an acceptable time frame. Hence, a study of the method's scalability was carried out. Analysis shows that the complexity of the computation is due to recursive and iterative processes. As the number of flows and switches increases, the total runtime required to compute the upper bound of every flow present in the network under study grows rapidly. While based on the concept of the Trajectory Approach, we propose to compute an upper bound in a reduced time frame and without significant loss in its precision. The result is called the Scalable Trajectory Approach. After applying it to a network, simulation results show that the total runtime was reduced from several days to a dozen seconds.
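The recursive, iterative computation identified above as the scalability bottleneck can be pictured with the much simpler classical fixed point from scheduling theory, on which trajectory-style analyses build. The sketch below is a single-resource response-time iteration with hypothetical flow parameters; it is not the Trajectory Approach's end-to-end equations.

```python
import math


def response_time(C, T, i, max_iter=1000):
    """Classical fixed-priority response-time fixed point:
        R_i = C_i + sum over higher-priority j of ceil(R_i / T_j) * C_j.
    C, T: per-flow processing times and periods, index 0 = highest priority.
    Returns the worst-case response time of flow i, or None if it exceeds T_i."""
    R = C[i]
    for _ in range(max_iter):
        R_next = C[i] + sum(math.ceil(R / T[j]) * C[j] for j in range(i))
        if R_next == R:
            return R             # fixed point reached: the bound stops growing
        if R_next > T[i]:
            return None          # bound grows past the period: declared unschedulable
        R = R_next
    return None


if __name__ == "__main__":
    C = [1, 2, 3]      # hypothetical processing times
    T = [4, 8, 16]     # hypothetical periods
    print([response_time(C, T, i) for i in range(3)])   # -> [1, 3, 7]
```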
11

Petrides, Andreas. "Advances in the stochastic and deterministic analysis of multistable biochemical networks." Thesis, University of Cambridge, 2018. https://www.repository.cam.ac.uk/handle/1810/279059.

Full text
Abstract:
This dissertation is concerned with the potential multistability of protein concentrations in the cell that can arise in biochemical networks. That is, situations where one, or a family of, proteins may sit at one of two or more different steady-state concentrations in otherwise identical cells, and in spite of them being in the same environment. Models of multisite protein phosphorylation have shown that this mechanism is able to exhibit unlimited multistability. Nevertheless, these models have not considered enzyme docking, the binding of the enzymes to one or more substrate docking sites, which are separate from the motif that is chemically modified. Enzyme docking is, however, increasingly being recognised as a method to achieve specificity in protein phosphorylation and dephosphorylation cycles. Most models in the literature for these systems are deterministic, i.e. based on ordinary differential equations, despite the fact that these are accurate only in the limit of large molecule numbers. For small molecule numbers, a discrete, probabilistic, stochastic approach is more suitable. However, when compared to the tools available in the deterministic framework, the tools available for stochastic analysis offer inadequate visualisation and intuition. We first try to bridge that gap by developing three tools: a) a discrete 'nullclines' construct applicable to stochastic systems - an analogue to the ODE nullclines, b) a stochastic tool based on a Weakly Chained Diagonally Dominant M-matrix formulation of the Chemical Master Equation and c) an algorithm that is able to construct non-reversible Markov chains with desired stationary probability distributions. We subsequently prove that, for multisite protein phosphorylation and similar models, in the deterministic domain, enzyme docking and the consequent substrate enzyme-sequestration must inevitably limit the extent of multistability, ultimately to one steady state. In contrast, bimodality can be obtained in the stochastic domain even in situations where bistability is not possible for large molecule numbers. We finally extend our results to cases where we have an autophosphorylating kinase, as for example is the case with $Ca^{2+}$/calmodulin-dependent protein kinase II (CaMKII), a key enzyme in synaptic plasticity.
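To make the deterministic-versus-stochastic contrast concrete, the sketch below runs Gillespie's stochastic simulation algorithm on a toy self-activating birth-death process whose deterministic rate equation is bistable for the chosen parameters. The species, rates and propensity form are illustrative assumptions, not the dissertation's phosphorylation or CaMKII models.

```python
import random

# Toy self-activating birth-death process (illustrative parameters only).
# Deterministic ODE: dx/dt = K0 + K1 * x^2 / (K^2 + x^2) - GAMMA * x,
# which has two stable fixed points (near x ~ 4 and x ~ 43) for these values.
K0, K1, K, GAMMA = 2.0, 50.0, 20.0, 1.0


def gillespie(x0=4, t_end=100.0, seed=0):
    """Gillespie stochastic simulation; returns the copy number at time t_end."""
    rng = random.Random(seed)
    x, t = x0, 0.0
    while True:
        a_birth = K0 + K1 * x * x / (K * K + x * x)   # production propensity
        a_death = GAMMA * x                           # degradation propensity
        dt = rng.expovariate(a_birth + a_death)       # time to the next reaction
        if t + dt > t_end:
            return x
        t += dt
        if rng.random() * (a_birth + a_death) < a_birth:
            x += 1
        else:
            x -= 1


if __name__ == "__main__":
    finals = [gillespie(seed=s) for s in range(100)]
    low = sum(1 for x in finals if x < 15)
    print(f"runs ending in the low state: {low}/100, in the high state: {100 - low}/100")
```

How the runs split between the two states depends strongly on the parameters and the time horizon; the point is only that a discrete stochastic trajectory can occupy states that a deterministic simulation started from the same initial condition would never visit.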
12

Acuna, David A. Elizondo. "The recursive deterministic perceptron and topology reduction strategies for neural networks." Université Louis Pasteur (Strasbourg) (1971-2008), 1997. http://www.theses.fr/1997STR13001.

Full text
Abstract:
Strategies for reducing the topology of neural networks can potentially offer advantages in terms of training and usage time, generalisation capability, reduced hardware requirements, or closer correspondence to the biological model. After presenting a state of the art of the existing methods for building partially connected neural networks, we propose several new methods for reducing the number of hidden neurons in a neural network topology. These methods are based on the notion of higher-order connections. A new algorithm for testing linear separability is given, together with an upper bound on the convergence of the perceptron learning algorithm. We present a generalisation of the perceptron neural network, which we call the recursive deterministic perceptron (RDP), that can in all cases separate two classes deterministically (even if the two classes are not directly linearly separable). This generalisation is based on augmenting the dimension of the input vector, which provides additional degrees of freedom. We propose a new notion of linear separability for m classes and show how to generalise the RDP to m classes using this new notion.
13

Anishchenko, Anastasiia [Verfasser], and Oliver [Akademischer Betreuer] Mülken. "Efficiency of continuous-time quantum walks: from networks with disorder to deterministic fractals." Freiburg : Universität, 2015. http://d-nb.info/1122592876/34.

Full text
14

Adasme, Soto Pablo Alberto. "Deterministic uncertain nonlinear formulations for wireless OFDMA networks with applications on semidefinite programming." Paris 11, 2010. http://www.theses.fr/2010PA112323.

Full text
Abstract:
In this thesis, modern optimization techniques such as semidefinite programming (SDP), robust optimization, stochastic programming, Lagrangian relaxations and polyhedral-based uncertainty approaches are used to deal with the problem of resource allocation in wireless OFDMA networks. The thesis starts in chapter 1 by introducing the resource allocation problem. In chapter 2, a brief theoretical background describing the concepts and methods necessary for the development of the thesis is provided. In chapter 3, the main mathematical formulations from the literature related to uplink OFDMA channels are presented, while an uplink M-Allocation scheme is proposed under the feasibility assumption of a new detection scheme of M incoming signals on each sub-carrier. A polynomial-complexity greedy algorithm is derived from the Lagrangian relaxation. In chapter 4, two binary quadratically constrained quadratic programs (BQCQP) for minimizing power subject to bit rate and sub-carrier allocation constraints for OFDMA are proposed and two SDP relaxations are derived. In chapter 5, three robust optimization approaches are studied; two SDP relaxations and a second-order conic program are proposed. In chapter 6, further BQCQP models are formulated using stochastic programming and a polyhedral robustness approach. Finally, in chapter 7, the main contributions as well as general conclusions of the thesis are outlined. In addition, further research directions are pointed out.
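A generic margin-adaptive version of the power-minimization problem that the chapter 4 relaxations address can be stated as below. This is the textbook form with binary sub-carrier assignment variables, written down only to fix ideas; the thesis's exact BQCQP constraints may differ.

```latex
% Generic OFDMA power minimization with exclusive sub-carrier assignment
\min_{x,\,p}\ \sum_{k=1}^{K}\sum_{n=1}^{N} x_{k,n}\,p_{k,n}
\quad\text{s.t.}\quad
\sum_{n=1}^{N} x_{k,n}\,\log_2\!\Bigl(1+\frac{p_{k,n}\,g_{k,n}}{\sigma^2}\Bigr)\ \ge\ R_k
\quad\forall k,
\qquad
\sum_{k=1}^{K} x_{k,n}\le 1 \quad\forall n,
\qquad
x_{k,n}\in\{0,1\},\ \ p_{k,n}\ge 0,
```

where g_{k,n} is the channel gain of user k on sub-carrier n, sigma^2 the noise power and R_k the user's rate requirement.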
15

Wang, Pengyuan. "Bridging the Gap between Deterministic and Stochastic Modeling with Automatic Scaling and Conversion." Thesis, Virginia Tech, 2008. http://hdl.handle.net/10919/33199.

Full text
Abstract:
During the past decade, many successful deterministic models of macromolecular regulatory networks have been built. Deterministic simulations of these models can show only average dynamics of the systems. However, stochastic simulations of macromolecular regulatory models can account for behaviors that are introduced by the noisy nature of the systems but not revealed by deterministic simulations. Thus, converting an existing model of value from the most common deterministic formulation to one suitable for stochastic simulation enables further investigation of the regulatory network. Although many different stochastic models can be developed and evolved from deterministic models, a direct conversion is the first step in practice.

This conversion process is tedious and error-prone, especially for complex models. Thus, we seek to automate as much of the conversion process as possible. However, deterministic models often omit key information necessary for a stochastic formulation. Specifically, values in the model have to be scaled before a complete conversion, and the scaling factors are typically not given in the deterministic model. Several functionalities helping model scaling and converting are introduced and implemented in the JigCell modeling environment. Our tool makes it easier for the modeler to include complete details as well as to convert the model.

Stochastic simulations are known for being computationally intensive, and thus require high performance computing facilities to be practical. With parallel computation on Virginia Tech's System X supercomputer, we are able to obtain the first stochastic simulation results for realistic cell cycle models. Stochastic simulation results for several mutants, which are thought to be biologically significant, are presented. Successful deployment of the enhanced modeling environment demonstrates the power of our techniques.
Master of Science
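For elementary mass-action reactions, the volume scaling that deterministic models typically leave implicit is the standard conversion below, with Omega = N_A V the system size. It is quoted as general background rather than as JigCell's actual conversion rules.

```latex
% Deterministic rate constant k  ->  stochastic rate constant c, with \Omega = N_A V
\varnothing \to X:\ \ c = k\,\Omega, \qquad
X \to \cdots:\ \ c = k, \qquad
X + Y \to \cdots:\ \ c = \frac{k}{\Omega}, \qquad
2X \to \cdots:\ \ c = \frac{2k}{\Omega}.
```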

16

Menz, Stephan [Verfasser]. "Hybrid stochastic-deterministic approaches for simulation and analysis of biochemical reaction networks / Stephan Menz." Berlin : Freie Universität Berlin, 2013. http://d-nb.info/1031667121/34.

Full text
17

Smith, Gregory Edward. "A Deterministic Approach to Partitioning Neural Network Training Data for the Classification Problem." Diss., Virginia Tech, 2006. http://hdl.handle.net/10919/28710.

Full text
Abstract:
The classification problem in discriminant analysis involves identifying a function that accurately classifies observations as originating from one of two or more mutually exclusive groups. Because no single classification technique works best for all problems, many different techniques have been developed. For business applications, neural networks have become the most commonly used classification technique and though they often outperform traditional statistical classification methods, their performance may be hindered because of failings in the use of training data. This problem can be exacerbated because of small data set size. In this dissertation, we identify and discuss a number of potential problems with typical random partitioning of neural network training data for the classification problem and introduce deterministic methods to partitioning that overcome these obstacles and improve classification accuracy on new validation data. A traditional statistical distance measure enables this deterministic partitioning. Heuristics for both the two-group classification problem and k-group classification problem are presented. We show that these heuristics result in generalizable neural network models that produce more accurate classification results, on average, than several commonly used classification techniques. In addition, we compare several two-group simulated and real-world data sets with respect to the interior and boundary positions of observations within their groups' convex polyhedrons. We show by example that projecting the interior points of simulated data to the boundary of their group polyhedrons generates convex shapes similar to real-world data group convex polyhedrons. Our two-group deterministic partitioning heuristic is then applied to the repositioned simulated data, producing results superior to several commonly used classification techniques.
Ph. D.
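One plausible reading of the 'traditional statistical distance measure' mentioned above is sketched below: rank each group's observations by Mahalanobis distance from the group centroid and deterministically send the most central ones to the training partition. The split ratio and the single-group treatment are assumptions made for illustration, not the dissertation's two-group or k-group heuristics.

```python
import numpy as np


def mahalanobis_split(X, train_fraction=0.7):
    """Deterministically split one group's observations: the points closest to the
    group centroid in Mahalanobis distance go to training, the rest to validation."""
    mu = X.mean(axis=0)
    cov_inv = np.linalg.pinv(np.cov(X, rowvar=False))      # pseudo-inverse for stability
    diffs = X - mu
    d2 = np.einsum("ij,jk,ik->i", diffs, cov_inv, diffs)   # squared distances
    order = np.argsort(d2)                                  # most central observations first
    n_train = int(round(train_fraction * len(X)))
    return order[:n_train], order[n_train:]                 # index arrays


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    group = rng.normal(size=(50, 3))                        # hypothetical 3-feature group
    train_idx, valid_idx = mahalanobis_split(group)
    print(len(train_idx), "training /", len(valid_idx), "validation observations")
```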
18

Moghtasad-Azar, Khosro. "Surface deformation analysis of dense GPS networks based on intrinsic geometry : deterministic and stochastic aspects." kostenfrei, 2007. http://nbn-resolving.de/urn:nbn:de:bsz:93-opus-33534.

Full text
19

Fakeih, Adnan M. "A deterministic approach for identifying the underlying states of multi-stationary systems using neural networks." Thesis, Lancaster University, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.287251.

Full text
20

Spera, Manuel <1978&gt. "Motion control and real-time systems: an approach to trajectory rebuilding in non-deterministic networks." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2008. http://amsdottorato.unibo.it/637/.

Full text
Abstract:
Motion control is a sub-field of automation, in which the position and/or velocity of machines are controlled using some type of device. In motion control the position, velocity, force, pressure, etc., profiles are designed in such a way that the different mechanical parts work as a harmonious whole in which perfect synchronization must be achieved. The real-time exchange of information in the distributed system that an industrial plant is nowadays plays an important role in achieving ever better performance, effectiveness and safety. The network connecting field devices such as sensors, actuators, field controllers such as PLCs, regulators, drive controllers, etc., and man-machine interfaces is commonly called a fieldbus. Since motion transmission is now a task of the communication system, and no longer of kinematic chains as in the past, the communication protocol must ensure that the desired profiles, and their properties, are correctly transmitted to the axes and then reproduced; otherwise the synchronization among the different parts is lost, with all the resulting consequences. In this thesis, the problem of trajectory reconstruction in the case of an event-triggered communication system is addressed. The most important feature that a real-time communication system must have is the preservation of the following temporal and spatial properties: absolute temporal consistency, relative temporal consistency, and spatial consistency. Starting from the basic system composed of one master and one slave, and passing through systems made up of many slaves and one master or many masters and one slave, the problems of profile reconstruction and temporal property preservation, and subsequently the synchronization of different profiles in networks adopting an event-triggered communication system, are examined. These networks are characterized by the fact that a common knowledge of the global time is not available; therefore they are non-deterministic networks. Each topology is analyzed, and the proposed solution based on phase-locked loops adopted for the basic master-slave case is improved to cope with the other configurations.
21

Malhis, Luai Mohammed 1964. "Development and application of an efficient method for the solution of stochastic activity networks with deterministic activities." Diss., The University of Arizona, 1996. http://hdl.handle.net/10150/282098.

Full text
Abstract:
Modeling and evaluation of communication and computing systems is an important undertaking. In many cases, large-scale systems are designed in an ad-hoc manner, with validation (or disappointment regarding) system performance coming only after an implementation is made. This does not need to be the case. Modern modeling tools and techniques can yield accurate performance predictions that can be used in the design process. Stochastic activity networks (SANs), stochastic Petri nets (SPNs) and analytic solution methods permit specification and fast solution of many complex system models. To enhance the modeling power of SANs (SPNs), new steady-state analysis methods have been proposed for SAN (SPN) models that include non-exponential activities (transitions). The underlying stochastic process is a Markov regenerative process (MRP) when at most one non-exponential activity (transition) is enabled in each marking. Time-efficient algorithms for constructing the Markov regenerative process have been developed. However, the space required to solve such models is often extremely large. This largeness is due to the large number of transitions in the MRP. Traditional analysis methods require all these transitions be stored in memory for efficient computation. If the size of available memory is smaller than that needed to store these transitions, a time-efficient computation is impossible using these methods. To use this class of SANs to model real systems, the space complexity of MRP analysis algorithms must be reduced. In this thesis, we propose a new steady-state analysis method that is time and space efficient. The new method takes advantage of the structure of the underlying process to reduce both computation time and required memory. The performance of the proposed method is compared to existing methods using several SAN examples. In addition, the ability to model real systems using SANs that include exponential and deterministic activities is demonstrated by modeling and evaluating the performability of a group communication protocol, called Psync. In particular, we study message stabilization time (the time required for messages to arrive at all hosts) under a wide variety of workload and message loss probabilities. We then use this information to suggest a modification to Psync to reduce message stabilizing time. Another important issue we consider is the dependability modeling and evaluation of fault-tolerant parallel and distributed systems. Because of the inherent component redundancy in such systems, the state space size of the underlying stochastic process is often very large. Reduced base model construction techniques that take advantage of symmetries in the structure of such systems have the potential to avoid this state space growth. We investigate this claim, by considering the application of SANs together with reduced base model construction for the dependability modeling and evaluation of three different systems: a fault-tolerant parallel computing system, a distributed database architecture, and a multiprocessor shared-memory system.
22

Woolley, Nick C. "Identification of weak areas and worst served customers for power quality issues using limited monitoring and non-deterministic data processing techniques." Thesis, University of Manchester, 2012. http://www.manchester.ac.uk/escholar/uk-ac-man-scw:162534.

Full text
Abstract:
The current international trend in distribution networks is towards increased monitoring. This trend is being driven by distribution network operators (DNOs) who hope that through increased monitoring, they will be able to optimise capital and operational expenditure and thus operate more efficient networks. One of the key areas of focus relating to the increased interest in distribution network monitoring is power quality. Power quality disturbances affect consumers by interrupting equipment or halting industrial processes and can result in very significant financial losses. DNOs are also financially impacted by power quality issues if they breach regulatory limits or contractual arrangements. To extract value from power quality monitoring, DNOs must process and then interpret data from a variety of monitoring devices placed at different locations, all potentially measuring different quantities. The challenge of how best to extract useful and practical power quality information from disparate monitoring devices is the subject of this thesis. This thesis describes and develops monitoring techniques for two power quality phenomena: voltage sags and unbalance. The research presents new techniques which can graphically identify the weakest areas and the worst served customers for voltage sags and unbalance. All the developed techniques utilise non-deterministic methods (such as statistics and artificial intelligence) to deal robustly with network and measurement uncertainties. This thesis can be dissected into four areas: voltage sag monitoring, optimal power quality monitor placement, voltage unbalance monitoring and identification of the weakest areas and worst served customers for both issues. The first section of this thesis is dedicated to voltage sags. This section introduces a multi-step process to identify and estimate the impacts of voltage sags within networks. The first stage in this process is classification and detection, where several different classification methods (including immune-inspired techniques) are compared to determine which algorithms work best in the context of limited monitoring. The research then proposes a novel robust method for performing fault location and voltage sag profile estimation using multiple monitors. The method pays particular attention to the errors in measurement inputs and identifies the most likely values for both the fault location and the voltage magnitude using statistical methods. The voltage sag monitoring research concludes by defining the probable impacts of voltage sags on customers, and by introducing a new measure known as the sag trip probability. The second major section covered by this thesis is optimal monitor placement. This thesis presents a comprehensive methodology which enables network operators to place monitors in locations best suited for voltage sag monitoring based on likely future topological and loading changes. The third major section covered by this thesis is unbalance monitoring. A three-phase distribution system state estimation model is developed which can estimate the location and impact of unbalance within the network, without assuming the loading is balanced. The final section of this thesis shows how the worst served customers and the weakest areas of the network can be identified for both voltage sags and unbalance using limited monitoring and the developed techniques. The results are presented graphically using a series of topological heat maps, and these show visually how the techniques could work to monitor a distribution network.
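As a concrete complement to the sag-monitoring discussion, the sketch below flags voltage sags in a per-cycle RMS voltage record using the conventional 0.1 to 0.9 per-unit magnitude band. The thresholds follow common practice, but the sample trace and cycle length are hypothetical, and this is not the thesis's classification or fault-location method.

```python
def detect_sags(rms_pu, cycle_s=0.02, low=0.1, high=0.9):
    """Scan per-cycle RMS voltage (per unit) and return one tuple
    (start_index, duration_s, residual_voltage_pu) per detected sag,
    i.e. per excursion into the 0.1-0.9 pu band."""
    sags, start = [], None
    for i, v in enumerate(rms_pu):
        in_sag = low <= v < high
        if in_sag and start is None:
            start = i
        elif not in_sag and start is not None:
            sags.append((start, (i - start) * cycle_s, min(rms_pu[start:i])))
            start = None
    if start is not None:        # sag still in progress at the end of the record
        sags.append((start, (len(rms_pu) - start) * cycle_s, min(rms_pu[start:])))
    return sags


if __name__ == "__main__":
    # Hypothetical 50 Hz record: nominal voltage with a 6-cycle sag to 0.62 pu
    trace = [1.0] * 10 + [0.62] * 6 + [1.0] * 10
    print(detect_sags(trace))    # one sag starting at cycle 10, ~0.12 s, residual 0.62 pu
```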
23

Yang, Jidong. "Road crack condition performance modeling using recurrent Markov chains and artificial neural networks." [Tampa, Fla.] : University of South Florida, 2004. http://purl.fcla.edu/fcla/etd/SFE0000567.

Full text
24

Christmann, Dennis [Verfasser], and Reinhard [Akademischer Betreuer] Gotzhein. "Distributed Real-time Systems - Deterministic Protocols for Wireless Networks and Model-Driven Development with SDL / Dennis Christmann. Betreuer: Reinhard Gotzhein." Kaiserslautern : Technische Universität Kaiserslautern, 2015. http://d-nb.info/1073868486/34.

Full text
25

Khaledi, Nasab Ali. "Collective Dynamics of Excitable Tree Networks." Ohio University / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1562669848013115.

Full text
26

Guck, Jochen [Verfasser], Wolfgang [Akademischer Betreuer] Kellerer, Wolfgang [Gutachter] Kellerer, and Martin [Gutachter] Reisslein. "Centralized Online Routing for Deterministic Quality of Service in Packet Switched Networks / Jochen Guck ; Gutachter: Wolfgang Kellerer, Martin Reisslein ; Betreuer: Wolfgang Kellerer." München : Universitätsbibliothek der TU München, 2018. http://d-nb.info/1161528709/34.

Full text
27

Ji, Shouling. "Data Collection and Capacity Analysis in Large-scale Wireless Sensor Networks." Digital Archive @ GSU, 2013. http://digitalarchive.gsu.edu/cs_diss/76.

Full text
Abstract:
In this dissertation, we study data collection and its achievable network capacity in Wireless Sensor Networks (WSNs). Firstly, we investigate the data collection issue in dual-radio multi-channel WSNs under the protocol interference model. We propose a multi-path scheduling algorithm for snapshot data collection, which has a tighter capacity bound than the existing best result, and a novel continuous data collection algorithm with comprehensive capacity analysis. Secondly, considering most existing works for the capacity issue are based on the ideal deterministic network model, we study the data collection problem for practical probabilistic WSNs. We design a cell-based path scheduling algorithm and a zone-based pipeline scheduling algorithm for snapshot and continuous data collection in probabilistic WSNs, respectively. By analysis, we show that the proposed algorithms have competitive capacity performance compared with existing works. Thirdly, most of the existing works studying the data collection capacity issue are for centralized synchronous WSNs. However, wireless networks are more likely to be distributed asynchronous systems. Therefore, we investigate the achievable data collection capacity of realistic distributed asynchronous WSNs and propose a data collection algorithm with fairness consideration. Theoretical analysis of the proposed algorithm shows that its achievable network capacity is order-optimal as centralized and synchronized algorithms do and independent of network size. Finally, for completeness, we study the data aggregation issue for realistic probabilistic WSNs. We propose order-optimal scheduling algorithms for snapshot and continuous data aggregation under the physical interference model.
APA, Harvard, Vancouver, ISO, and other styles
28

Kim, Jinho D. "Centralized random backoff for collision free wireless local area networks." Thesis, University of Edinburgh, 2018. http://hdl.handle.net/1842/31055.

Full text
Abstract:
Over the past few decades, wireless local area networks (WLANs) have been widely deployed for data communication in indoor environments such as offices, houses, and airports. In order to fairly and efficiently use the unlicensed frequency band that Wi-Fi devices share, the devices follow a set of channel access rules, which is called a wireless medium access control (MAC) protocol. It is known that wireless devices following the 802.11 standard MAC protocol, i.e. the distributed coordination function (DCF), suffer from packet collisions when multiple nodes simultaneously transmit. This significantly degrades the throughput performance. Recently, several studies have reported access techniques to reduce the number of packet collisions and to achieve a collision free WLAN. Although these studies have shown that the number of collisions can be reduced to zero in a simple way, there have been a couple of remaining issues to solve, such as dynamic parameter adjustment and fairness to legacy DCF nodes in terms of channel access opportunity. Recently, In-Band Full Duplex (IBFD) communication has received much attention, because it has significant potential to improve the communication capacity of a radio band. IBFD means that a node can simultaneously transmit one signal and receive another signal in the same band at the same time. In order to maximize the performance of IBFD communication capability and to fairly share access to the wireless medium among distributed devices in WLANs, a number of IBFD MAC protocols have been proposed. However, little attention has been paid to fairness issues between half duplex nodes (i.e. nodes that can either transmit or receive but not both simultaneously in one time-frequency resource block) and IBFD capable nodes in the presence of the hidden node problem.
APA, Harvard, Vancouver, ISO, and other styles
29

Fritschek, Rick [Verfasser], Gerhard [Akademischer Betreuer] Wunder, Giuseppe [Gutachter] Caire, Suhas [Gutachter] Diggavi, and Gerhard [Gutachter] Wunder. "On deterministic models for capacity approximations in interference networks and information theoretic security / Rick Fritschek ; Gutachter: Giuseppe Caire, Suhas Diggavi, Gerhard Wunder ; Betreuer: Gerhard Wunder." Berlin : Technische Universität Berlin, 2018. http://d-nb.info/1162189622/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Jasanský, Michal. "Využití prostředků umělé inteligence pro podporu na kapitálových trzích." Master's thesis, Vysoké učení technické v Brně. Fakulta podnikatelská, 2013. http://www.nusl.cz/ntk/nusl-224231.

Full text
Abstract:
This diploma thesis deals with the prediction of financial time series on capital markets using artificial intelligence methods. Several dynamic architectures of artificial neural networks are created, trained, and subsequently used to predict future movements of shares. Based on the results, an assessment and recommendations for working with artificial neural networks are provided.
APA, Harvard, Vancouver, ISO, and other styles
31

Diana, Rémi. "Le routage dans les réseaux DTN : du cas pratique des réseaux satellitaires quasi-déterministes à la modélisation théorique." Thesis, Toulouse, ISAE, 2012. http://www.theses.fr/2012ESAE0036/document.

Full text
Abstract:
Satellite communication is the achievement of more than 50 years of research in the fields of telecommunications and space technologies. The first satellites had exorbitant costs for very limited performance. Technological advances in these areas have made the cost-performance ratio satisfactory and commercially viable, which has enabled an increasing number of satellite launches and thus the deployment of complete satellite networks. Today, there are many GEO and LEO satellite constellations used for civilian or military applications. In general, routing in these constellations is done by pre-computing the existing routes; these routes are then used over a given period and refreshed if needed. This type of routing is optimal only on deterministic topologies, so other solutions must be considered if we relax this assumption. The objective of this thesis is to explore alternatives to pre-computed routing. As a potential solution, we propose to assess the suitability of replication-based routing protocols, issued from the world of delay tolerant networks (DTN), for satellite constellations. To provide a relevant framework to study this topic, we focus on a particular constellation that presents a quasi-deterministic nature and does not provide direct connectivity between all nodes of the system. In a second part, we focus on the modelling of the Binary Spray and Wait routing protocol. We develop a model that can theoretically determine the distribution of the end-to-end delay for any type of network, homogeneous and heterogeneous. Finally, we present a possible use of this model to conduct more in-depth theoretical analysis.
APA, Harvard, Vancouver, ISO, and other styles
32

Kunert, Kristina. "Architectures and Protocols for Performance Improvements of Real-Time Networks." Doctoral thesis, Högskolan i Halmstad, Inbyggda system (CERES), 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-14082.

Full text
Abstract:
When designing architectures and protocols for data traffic requiring real-time services, one of the major design goals is to guarantee that traffic deadlines can be met. However, many real-time applications also have additional requirements such as high throughput, high reliability, or energy efficiency. High-performance embedded systems communicating heterogeneous traffic with high bandwidth and strict timing requirements are in need of more efficient communication solutions, while wireless industrial applications, communicating control data, require support of reliability and guarantees of real-time predictability at the same time. To meet the requirements of high-performance embedded systems, this thesis work proposes two multi-wavelength high-speed passive optical networks. To enable reliable wireless industrial communications, a framework incorporating carefully scheduled retransmissions is developed. All solutions are based on a single-hop star topology, predictable Medium Access Control algorithms and Earliest Deadline First scheduling, centrally controlled by a master node. Further, real-time schedulability analysis is used as admission control policy to provide delay guarantees for hard real-time traffic. For high-performance embedded systems an optical star network with an Arrayed Waveguide Grating placed in the centre is suggested. The design combines spatial wavelength reuse with fixed-tuned and tuneable transceivers in the end nodes, enabling simultaneous transmission of both control and data traffic. This, in turn, permits efficient support of heterogeneous traffic with both hard and soft real-time constraints. By analyzing traffic dependencies in this multichannel network, and adapting the real-time schedulability analysis to incorporate these traffic dependencies, a considerable increase of the possible guaranteed throughput for hard real-time traffic can be obtained. Most industrial applications require using existing standards such as IEEE 802.11 or IEEE 802.15.4 for interoperability and cost efficiency. However, these standards do not provide predictable channel access, and thus real-time guarantees cannot be given. A framework is therefore developed, combining transport layer retransmissions with real-time analysis admission control, which has been adapted to consider retransmissions. It can be placed on top of many underlying communication technologies, exemplified in our work by the two aforementioned wireless standards. To enable a higher data rate than pure IEEE 802.15.4, but still maintaining its energy saving properties, two multichannel network architectures based on IEEE 802.15.4 and encompassing the framework are designed. The proposed architectures are evaluated in terms of reliability, utilization, delay, complexity, scalability and energy efficiency and it is concluded that performance is enhanced through redundancy in the time and frequency domains.
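The admission control step mentioned in this abstract (real-time schedulability analysis deciding whether a new flow can be accepted) can be illustrated with the classical utilization-based test for Earliest Deadline First on a single shared resource. This is only a generic sketch assuming relative deadlines equal to periods; the thesis's own analysis, which also accounts for retransmissions and traffic dependencies, is more elaborate.

def edf_admissible(flows, new_flow):
    """Utilization-based EDF admission test (deadlines equal to periods).
    flows, new_flow: (transmission_time, period) tuples in the same time unit.
    Returns True if the new flow can be admitted without missing deadlines."""
    utilization = sum(c / t for c, t in flows) + new_flow[0] / new_flow[1]
    return utilization <= 1.0

# Hypothetical example: three admitted flows plus one candidate flow
current = [(2.0, 10.0), (1.0, 8.0), (3.0, 20.0)]
print(edf_admissible(current, (2.0, 12.0)))   # -> True (utilization ~0.64)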
APA, Harvard, Vancouver, ISO, and other styles
33

Harvey, Nicholas James Alexander. "Deterministic network coding by matrix completion." Thesis, Massachusetts Institute of Technology, 2005. http://hdl.handle.net/1721.1/34107.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2005.
Includes bibliographical references (leaves 81-85).
Network coding is a new field of research that addresses problems of transmitting data through networks. Multicast problems are an important class of network coding problems where there is a single sender and all data must be transmitted to a set of receivers. In this thesis, we present a new deterministic algorithm to construct solutions for multicast problems that transmit data at the maximum possible rate. Our algorithm easily generalizes to several variants of multicast problems. Our approach is based on a new algorithm for maximum-rank completion of mixed matrices: taking a matrix whose entries are a mixture of numeric values and symbolic variables, and assigning values to the variables so as to maximize the resulting matrix rank. Our algorithm is faster than existing deterministic algorithms and can operate over smaller fields. This algorithm is extended to handle collections of matrices that can share variables. Over sufficiently large fields, the algorithm can compute a completion that simultaneously maximizes the rank of all matrices in the collection. Our simultaneous matrix completion algorithm requires working over a field whose size exceeds the number of matrices in the collection. We show that this algorithm is best-possible, in the sense that no efficient algorithm can operate over a smaller field unless P=NP.
by Nicholas James Alexander Harvey.
S.M.
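To give a concrete picture of the matrix completion task described above: over a sufficiently large field, assigning independent random values to the symbolic entries reaches the maximum rank with high probability (a Schwartz-Zippel argument), and the thesis contributes a deterministic algorithm that removes this randomness. The sketch below only illustrates the randomized baseline, computed over the reals with a hypothetical 3x3 mixed matrix.

import numpy as np

def random_completion_rank(pattern, trials=5, seed=0):
    """Estimate the maximum achievable rank of a mixed matrix by randomly
    assigning values to its variable entries.
    pattern: 2-D list where None marks a variable entry, numbers are fixed."""
    rng = np.random.default_rng(seed)
    rows, cols = len(pattern), len(pattern[0])
    best = 0
    for _ in range(trials):
        m = np.zeros((rows, cols))
        for i in range(rows):
            for j in range(cols):
                v = pattern[i][j]
                m[i, j] = rng.uniform(1, 1000) if v is None else v
        best = max(best, np.linalg.matrix_rank(m))
    return best

# Hypothetical 3x3 mixed matrix: None entries are free variables
pattern = [[1,    None, 0],
           [None, 2,    None],
           [0,    None, 3]]
print(random_completion_rank(pattern))   # expected: 3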
APA, Harvard, Vancouver, ISO, and other styles
34

Li, Shan. "Railway sleeper modelling with deterministic and non-deterministic support conditions." Thesis, KTH, Väg- och banteknik, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-91634.

Full text
Abstract:
Railway sleepers have important roles in the complex railway system. Due to different loading conditions, poor maintenance of the sleeper or bad quality of the ballast, a random load distribution along the sleeper-ballast interface may occur. A sleeper design, and likewise a track system design, that does not consider this random load distribution could degrade the performance of the sleeper and even damage the whole railway system. Thus, a numerical static and dynamic analysis of a pre-stressed concrete mono-block railway sleeper is carried out using the finite element method. The structural behaviour of a single sleeper subjected to a random sleeper-ballast interaction is studied in three steps. First, four typical scenarios of support conditions for the sleeper are discussed in a numerical analysis. Second, a sufficiently large set of numerical results under different random support conditions is generated. Finally, a Neural Network methodology is used to study the performance of the sleeper under a stochastic support condition. Results for the vertical displacement at the rail seat and the tensile stress at the midpoint and underneath the rail seat are presented. Moreover, the worst support condition is also identified.
APA, Harvard, Vancouver, ISO, and other styles
35

Ribas, Lucas Correia. "Análise de texturas dinâmicas baseada em sistemas complexos." Universidade de São Paulo, 2017. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-28072017-141204/.

Full text
Abstract:
Dynamic texture analysis has been a growing and promising research area in computer vision in recent years. Dynamic textures are sequences of texture images (i.e. video) that represent dynamic objects. Examples of dynamic textures are: the evolution of bacterial colonies, the growth of body tissues, a moving escalator, waterfalls, smoke, the process of metal corrosion, among others. Although there is research related to the topic and promising results, most methods in the literature have limitations. Moreover, in many cases dynamic textures are the result of complex phenomena, making the characterization task even more challenging. This scenario requires the development of a paradigm of methods based on complexity. Complexity can be understood as a measure of the irregularity of dynamic textures, allowing the structure of the pixels to be measured and the spatial and temporal aspects to be quantified. In this context, this master's research aims to study and develop methods for the characterization of dynamic textures based on complexity methodologies from the area of complex systems. In particular, two methodologies already used in computer vision problems are considered: complex networks and the partially self-repulsive deterministic walk. Based on these methodologies, three methods for the characterization of dynamic textures were developed: (i) based on diffusion in networks; (ii) based on the partially self-repulsive deterministic walk; (iii) based on networks generated by the partially self-repulsive deterministic walk. The developed methods were applied to problems in nanotechnology and vehicle traffic, presenting promising results and contributing to the development of both areas.
APA, Harvard, Vancouver, ISO, and other styles
36

Cormican, Kelly James. "Computational methods for deterministic and stochastic network interdiction problems." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 1995. http://handle.dtic.mil/100.2/ADA297596.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Abdullah, Shahrum Shah. "Experiment design for deterministic model reduction and neural network training." Thesis, Imperial College London, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.406584.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Brun-Laguna, Keoma. "Deterministic Networking for the Industrial IoT." Electronic Thesis or Diss., Sorbonne université, 2018. http://www.theses.fr/2018SORUS157.

Full text
Abstract:
The Internet of Things (IoT) evolved from a connected toaster in 1990 to networks of hundreds of tiny devices used in industrial applications. Those "Things" usually are tiny electronic devices able to measure a physical value (temperature, humidity, etc.) and/or to actuate on the physical world (pump, valve, etc.). Due to their cost and ease of deployment, battery-powered wireless IoT networks are rapidly being adopted. The promise of wireless communication is to offer wire-like connectivity. Major improvements have been made in that sense, but many challenges remain, as industrial applications have strong operational requirements. This part of the IoT is called the Industrial IoT (IIoT). The main IIoT requirement is reliability: every bit of information that is transmitted in the network must not be lost. Current off-the-shelf solutions offer over 99.999% reliability; that is, for every 100k packets of information generated, less than one is lost. Then come latency and energy-efficiency requirements. As devices are battery-powered, they need to consume as little as possible to be able to operate for years. The next step for the IoT is to target time-critical applications. Industrial IoT technologies are now adopted by companies over the world, and are now a proven solution. Yet, challenges remain and some of the limits of the technologies are still not fully understood. In this work we address TSCH-based Wireless Sensor Networks and study their latency and lifetime limits under real-world conditions. We gathered 3M network statistics and 32M sensor measurements on 11 datasets with a total of 170,037 mote hours in real-world and testbed deployments. We assembled what we believe to be the largest TSCH dataset available to the networking community. Based on those datasets and on insights we learned from deploying networks in real-world conditions, we study the limits and trade-offs of TSCH-based Wireless Sensor Networks. We provide methods and tools to estimate the network performance of such networks in various scenarios. We believe we assembled the right tools for protocol designers to bring deterministic networking to the Industrial IoT.
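One quantity such tools typically estimate is the expected lifetime of a battery-powered TSCH mote from its duty cycle. The following back-of-the-envelope sketch uses made-up charge-per-slot figures and slotframe layout (they are not values from the thesis); it only shows the general reasoning of averaging the charge drawn per slot and dividing the battery capacity by the resulting average current.

def estimated_lifetime_days(battery_mAh, slot_s, slotframe, charge_uC):
    """Rough TSCH mote lifetime estimate: average the charge drawn per slot
    over one slotframe, convert it to an average current, and divide the
    battery capacity by that current.
    slotframe: list of slot types, e.g. ['tx', 'rx', 'sleep', ...]
    charge_uC: charge per slot type in microcoulombs (illustrative values)."""
    avg_uC_per_slot = sum(charge_uC[s] for s in slotframe) / len(slotframe)
    avg_current_mA = (avg_uC_per_slot / slot_s) / 1000.0   # uC/s -> uA -> mA
    return (battery_mAh / avg_current_mA) / 24.0

# Hypothetical numbers: 2200 mAh battery, 10 ms slots, 101-slot slotframe
charge = {'tx': 60.0, 'rx': 55.0, 'idle': 7.0, 'sleep': 0.2}
frame = ['tx', 'rx'] + ['idle'] * 4 + ['sleep'] * 95
print(round(estimated_lifetime_days(2200, 0.010, frame, charge)), "days")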
APA, Harvard, Vancouver, ISO, and other styles
39

Parker, Christopher Gareth. "Mathematical frameworks for the transmission dynamics of HIV on a concurrent partnership network." Thesis, University of Oxford, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.318805.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Guan, Xiao. "Deterministic and Flexible Parallel Latent Feature Models Learning Framework for Probabilistic Knowledge Graph." Thesis, Mittuniversitetet, Avdelningen för informationssystem och -teknologi, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-35788.

Full text
Abstract:
Knowledge graphs are a rising topic in the field of Artificial Intelligence. As the current trend in knowledge representation, knowledge graph research makes use of the large knowledge bases freely available on the internet. Knowledge graphs also allow inspection, analysis and reasoning over all the knowledge they represent. To enable the ambitious idea of modelling the knowledge of the world, different theories and implementations have emerged. Nowadays, we have the opportunity to use freely available information from Wikipedia and Wikidata. This thesis investigates and formulates a theory about learning from knowledge graphs, focusing on probabilistic knowledge graphs and, more specifically, on a branch called latent feature models. These models aim to predict possible relationships between connected entities and relations, and many models exist for this task. The metrics and the training process are described in detail and improved in this thesis work. The resulting efficiency and correctness make it possible to build more complex models with confidence. The thesis also covers possible problems in the findings and proposes future work.
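As an illustration of what a latent feature model computes, one widely used and particularly simple member of this family is a DistMult-style bilinear scoring model, sketched below with tiny randomly generated embeddings. This is a generic example of the model family; it is not claimed to be the specific models or settings studied in the thesis.

import numpy as np

def distmult_score(e_head, relation, e_tail):
    """DistMult-style latent feature score: a bilinear (diagonal) product of
    head entity, relation, and tail entity embeddings. A higher score means
    the triple (head, relation, tail) is predicted to be more plausible."""
    return float(np.sum(e_head * relation * e_tail))

# Hypothetical 4-dimensional embeddings (in practice these would be learned)
rng = np.random.default_rng(1)
paris, france, capital_of = rng.normal(size=(3, 4))
print(distmult_score(paris, capital_of, france))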
APA, Harvard, Vancouver, ISO, and other styles
41

Subramanian, Sivaramakrishnan. "Deterministic knowledge about nearby nodes in a mobile one dimensional environment." [College Station, Tex. : Texas A&M University, 2006. http://hdl.handle.net/1969.1/ETD-TAMU-1077.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

LUKOVIC, BOJAN. "MODELING UNSTEADINESS IN STEADY SIMULATIONS WITH NEURAL NETWORK GENERATED LUMPED DETERMINISTIC SOURCE TERMS." University of Cincinnati / OhioLINK, 2002. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1035332082.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Herbach, Ulysse. "Modélisation stochastique de l'expression des gènes et inférence de réseaux de régulation." Thesis, Lyon, 2018. http://www.theses.fr/2018LYSE1155/document.

Full text
Abstract:
Gene expression in a cell has long been observable only through quantities averaged over cell populations. The recent development of single-cell transcriptomics has enabled gene expression to be measured in individual cells: it turns out that even in an isogenic population, the molecular variability between cells can be very strong. In particular, an averaged description is clearly insufficient to account for cell differentiation, that is, the way stem cells make specialization choices. In this thesis, we are interested in the emergence of such cell decision-making from underlying gene regulatory networks, which we would like to infer from data. The starting point is the construction of a stochastic gene network model that is able to reproduce the observations using physical arguments. Genes are then described as an interacting particle system that happens to be a piecewise-deterministic Markov process, and our aim is to derive a tractable statistical model from its stationary distribution. We present two approaches: the first one is a field approximation that is quite popular in physics, for which we obtain a concentration result, and the second one is based on an analytically tractable particular case, which provides a hidden Markov random field with interesting properties.
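For readers unfamiliar with stochastic gene expression models, the classical two-state ("telegraph") model gives a feel for the kind of dynamics involved: a promoter switches randomly between OFF and ON, mRNA is produced only in the ON state and degrades at a constant per-molecule rate. The sketch below is a plain Gillespie simulation of that textbook model with arbitrary rate constants; it is only illustrative and is not the network model developed in the thesis.

import numpy as np

def telegraph_ssa(t_end, kon=0.05, koff=0.2, ksyn=5.0, kdeg=1.0, seed=0):
    """Gillespie simulation of the two-state ('telegraph') gene model:
    the promoter switches OFF<->ON, mRNA is produced only when ON and
    degrades at rate kdeg per molecule. Returns the final mRNA copy number."""
    rng = np.random.default_rng(seed)
    t, on, m = 0.0, 0, 0
    while t < t_end:
        rates = [kon if not on else koff,      # promoter switching
                 ksyn if on else 0.0,          # transcription
                 kdeg * m]                     # mRNA degradation
        total = sum(rates)
        t += rng.exponential(1.0 / total)
        r = rng.uniform(0, total)
        if r < rates[0]:
            on = 1 - on
        elif r < rates[0] + rates[1]:
            m += 1
        else:
            m -= 1
    return m

print(telegraph_ssa(100.0))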
APA, Harvard, Vancouver, ISO, and other styles
44

Muñoz, Soto Jonathan Mauricio. "Km-scale Industrial Networking." Electronic Thesis or Diss., Sorbonne université, 2019. http://www.theses.fr/2019SORUS252.

Full text
Abstract:
The Internet of Things (IoT) aims to provide connectivity to millions of devices used in our day-to-day life. For the vast majority of applications, wired connections are impractical and too expensive, therefore wireless connections are the only feasible way to provide connectivity to the devices. One of many wireless solutions is the IEEE802.15.4 standard, specially designed for low power mesh networks. This standard is widely used for smart building, home automation and industrial applications. A subsequent amendment, IEEE802.15.4g, defines 3 PHYs (FSK, OFDM and O-QPSK). It targets Smart Utility Networks (SUN) applications, i.e., smart metering, while providing extended coverage. In this thesis, we analyse the use of this standard outside the SUN environment and in industrial networking applications. First, we conduct a series of experiments using IEEE802.15.4g compliant devices in order to measure the range of radio links in real use case outdoor scenarios. Results show that highly reliable communications with data rates up to 800 kbps (with OFDM) can be achieved in urban environments at 540 m between nodes, and the longest useful radio link is obtained at 779 m (FSK). Second, given the robustness and high data rate of OFDM, we compare the performance of IEEE802.15.4 with IEEE802.15.4g OFDM in smart building scenarios. From experiments, we determine that IEEE802.15.4g OFDM outperforms IEEE802.15.4 and should be considered as a solution for further deployments in combination with a TSCH MAC approach. Finally, we introduce the concept of network agility: nodes that can dynamically change their PHY according to their needs and circumstances.
APA, Harvard, Vancouver, ISO, and other styles
45

Lu, Lu. "Wireless Broadcasting with Network Coding." Licentiate thesis, KTH, Kommunikationsteori, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-40472.

Full text
Abstract:
Wireless digital broadcasting applications such as digital audio broadcast (DAB) and digital video broadcast (DVB) are becoming increasingly popular since the digital format allows for quality improvements as compared to traditional analogue broadcast. The broadcasting is commonly based on packet transmission. In this thesis, we consider broadcasting over packet erasure channels. To achieve reliable transmission, error-control schemes are needed. By carefully designing the error-control schemes, transmission efficiency can be improved compared to traditional automatic repeat-request (ARQ) schemes and rateless codes. Here, we first study the application of a novel binary deterministic rateless (BDR) code. Then, we focus on the design of network coding for the wireless broadcasting system, which can significantly improve the system performance compared to traditional ARQ. Both the one-hop broadcasting system and a relay-aided broadcasting system are considered. In the one-hop broadcasting system, we investigate the application of systematic BDR (SBDR) codes and instantaneously decodable network coding (IDNC). For the SBDR codes, we determine the number of encoded redundancy packets that guarantees high broadcast transmission efficiency and simultaneously low complexity. Moreover, with limited feedback the efficiency performance can be further improved. Then, we propose an improved network coding scheme that can asymptotically achieve the theoretical lower bound on transmission overhead for a sufficiently large number of information packets. In the relay-aided system, we consider a scenario where the relay node operates in half-duplex mode, and transmissions from the BS and the relay, respectively, are over orthogonal channels. Based on random network coding, a scheduling problem for the transmissions of redundancy packets from the BS and the relay is formulated. Two scenarios, namely instantaneous feedback after each redundancy packet, and feedback after multiple redundancy packets, are investigated. We further extend the algorithms to multi-cell networks. Besides random network coding, IDNC based schemes are proposed as well. We show that significant improvements in transmission efficiency are obtained as compared to previously proposed ARQ and network-coding-based schemes.
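To make the network coding idea concrete: in random linear network coding over GF(2), each coded packet is the XOR of a random subset of the source packets, and the coefficient vector is attached so that receivers can recover the originals by Gaussian elimination once they hold enough independent combinations. The sketch below only shows the encoding side with made-up packets; it is a generic illustration, not the SBDR or IDNC schemes proposed in the thesis.

import numpy as np

def rlnc_encode(packets, n_coded, seed=0):
    """Random linear network coding over GF(2): each coded packet is the XOR
    of a random subset of the source packets; the coefficient vectors are
    returned so receivers can decode by Gaussian elimination."""
    rng = np.random.default_rng(seed)
    packets = np.asarray(packets, dtype=np.uint8)              # (k, n_bits)
    k = packets.shape[0]
    coeffs = rng.integers(0, 2, size=(n_coded, k), dtype=np.uint8)
    coded = ((coeffs.astype(int) @ packets.astype(int)) % 2).astype(np.uint8)
    return coeffs, coded

# Hypothetical example: 3 source packets of 8 bits each, 4 coded packets
src = np.unpackbits(np.array([[0x3c], [0xa5], [0x0f]], dtype=np.uint8), axis=1)
coeffs, coded = rlnc_encode(src, 4)
print(coeffs)
print(np.packbits(coded, axis=1).ravel())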
APA, Harvard, Vancouver, ISO, and other styles
46

Wang, Chen. "Variants of Deterministic and Stochastic Nonlinear Optimization Problems." Thesis, Paris 11, 2014. http://www.theses.fr/2014PA112294/document.

Full text
Abstract:
Combinatorial optimization problems are generally NP-hard, so one can only rely on heuristic or approximation algorithms to find a local optimum or a feasible solution. During the last decades, more general solving techniques have been proposed, namely metaheuristics, which can be applied to many types of combinatorial optimization problems. This PhD thesis proposes to solve deterministic and stochastic optimization problems with metaheuristics. We study especially Variable Neighborhood Search (VNS) and choose this algorithm to solve our optimization problems since it is able to find satisfactory approximate solutions within a reasonable computation time. The thesis starts with a relatively simple deterministic combinatorial optimization problem: the Bandwidth Minimization Problem. The proposed VNS procedure offers an advantage in terms of CPU time compared to the literature. Then, we focus on resource allocation problems in OFDMA systems and present two models. The first model aims at maximizing the total bandwidth channel capacity of an uplink OFDMA-TDMA network subject to user power and subcarrier assignment constraints while simultaneously scheduling users in time; for this problem, VNS gives tight bounds. The second model is a stochastic resource allocation model for uplink wireless multi-cell OFDMA networks. After transforming the original model into a deterministic one, the proposed VNS is applied to the deterministic model and finds near-optimal solutions. Subsequently, several problems, either in OFDMA systems or in many other resource allocation settings, can be modeled as hierarchical problems, i.e., bi-level optimization problems. Thus, we also study stochastic bi-level optimization problems and use a robust optimization framework to deal with uncertainty. The distributionally robust approach obtains slightly conservative solutions when the number of binary variables in the upper level is larger than the number of variables in the lower level. Our numerical results for all the problems studied in this thesis show the performance of our approaches.
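Since Variable Neighborhood Search is the common thread of this thesis, a minimal generic VNS skeleton is sketched below on a toy integer minimization problem. The neighbourhood structures, local search and objective are hypothetical placeholders; they only illustrate the shake / local-search / neighbourhood-change loop that characterizes VNS, not the problem-specific designs of the thesis.

import random

def vns(initial, cost, shakes, local_search, max_iters=200, seed=0):
    """Generic Variable Neighborhood Search: shake the incumbent in the k-th
    neighborhood, apply local search, keep the result if it improves and
    restart from the first neighborhood, otherwise try the next one."""
    rng = random.Random(seed)
    best, best_cost = list(initial), cost(initial)
    for _ in range(max_iters):
        k = 0
        while k < len(shakes):
            candidate = local_search(shakes[k](best, rng), cost)
            c = cost(candidate)
            if c < best_cost:
                best, best_cost, k = candidate, c, 0
            else:
                k += 1
    return best, best_cost

def local_search(x, cost):
    # First-improvement descent over +/-1 moves in each coordinate.
    improved = True
    while improved:
        improved = False
        for i in range(len(x)):
            for step in (-1, 1):
                y = x[:]
                y[i] += step
                if cost(y) < cost(x):
                    x, improved = y, True
    return x

def shake(radius):
    # Neighborhood of given radius: perturb one random coordinate.
    def move(x, rng):
        y = x[:]
        y[rng.randrange(len(y))] += rng.choice((-radius, radius))
        return y
    return move

# Hypothetical toy objective with optimum 0 at (3, -7)
cost = lambda x: (x[0] - 3) ** 2 + (x[1] + 7) ** 2 + abs(x[0] * x[1] + 21)
print(vns([0, 0], cost, [shake(1), shake(5), shake(20)], local_search))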
APA, Harvard, Vancouver, ISO, and other styles
47

Contant, Sheila. "Modelagem de reatores de polimerização : deterministica e por redes neurais." [s.n.], 2007. http://repositorio.unicamp.br/jspui/handle/REPOSIP/267389.

Full text
Abstract:
Advisor: Liliane Maria Ferrareso Lona
Doctoral thesis - Universidade Estadual de Campinas, Faculdade de Engenharia Quimica
In this work, different polymerization processes were studied: (1) styrene homopolymerization and styrene/methyl methacrylate copolymerization in emulsion via conventional free-radical polymerization, and (2) styrene homopolymerization in bulk via the nitroxide-mediated controlled/living free-radical process. The modelling was developed using two different approaches: initially, deterministic models were developed for each case and, using results generated by these models, neural networks were trained for the inverse modelling of the processes. In the deterministic modelling, computational programs were developed for the emulsion polymerizations and simulations were performed for different operating conditions. For the controlled bulk polymerization, a modified computational program from the literature was used. In all cases, large databases of kinetic parameters for all the compounds involved were compiled. For the work with neural networks, a previously developed computational program was used with modifications. Neural networks were used for the inverse modelling of the processes, being trained to predict operating conditions capable of leading to the production of polymers with specific properties. The two methodologies used for the mathematical modelling were able to extract important and different information from the polymerization processes studied, showing themselves to be very interesting and efficient tools for application in polymerization engineering.
Doctorate
Chemical Process Development
Doctor in Chemical Engineering
APA, Harvard, Vancouver, ISO, and other styles
48

Agurto, Hoyos Oscar Pedro. "Comutador de dados digitais para tdm deterministico e1, visando uma implementação em microeletrônica." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 1996. http://hdl.handle.net/10183/26382.

Full text
Abstract:
This work consists in the specification and development of a Digital Circuit Switch architecture for E1 Deterministic TDM, looking toward a future microelectronics implementation. First, general concepts about switching systems and their basic elements, as well as the main kinds of switching, are presented, followed by a detailed study of circuit switching and its most widely used techniques, given their close relation to TDM multiplexing and the E1 hierarchy. Likewise, the characteristics of E1 corporate networks and E1 multiplexers are described, along with the main functions of the Digital Switch within an end-to-end network. Based on this study, the architecture of a Digital Switch based on TSI techniques is proposed. This architecture is able to perform local and remote switching between the devices connected to the E1 multiplexers, which form the nodes of an end-to-end corporate network with centralized control. The logic design and the simulation of the Digital Switch were carried out within the SOLO/Cadence design framework, using the Standard Cells library of a 1.2µ CMOS technology. The logic simulator SILOS, available in SOLO/Cadence, was used to validate the proposed architecture. Implementation details and simulation results are presented. The control module of the Digital Switch is only specified.
APA, Harvard, Vancouver, ISO, and other styles
49

Touré, Sellé. "Optimisation des réseaux : réseau actif et flexible." Thesis, Grenoble, 2014. http://www.theses.fr/2014GRENT095/document.

Full text
Abstract:
The Electric Power System has been undergoing several evolutions in recent years, from the deregulation of the energy market to the increasing integration of Dispersed Generators (DG). Within the framework of the Smart Grid concept, the new information and communication technologies (NICT) provide new perspectives to manage and operate distribution networks. In this context, new tools, called Advanced Distribution Automation (ADA) functions, are being studied. The main objective of these tools is to use all the distribution network components in a coordinated manner in order to make them more active and flexible and to increase their operational efficiency. In our case, we studied the functions associated with network reconfiguration in normal operation, voltage control, and the hybridization of these two, while taking into account the presence of DG. Starting from the physical behaviour inherent to the network components, several models have been proposed. Some are derived from graph theory and others rely on powerful mathematical reformulation to make our models convex. The adopted models address the need to take into account all control means, which can be discrete (on-load tap changers and capacitor banks), binary (connectivity state of components such as lines or transformers) and continuous (DG reactive power), as well as the choice of mixed optimization tools and algorithms. Indeed, the complexity of these problems is such that we have explored both meta-heuristic algorithms (ACA, Ant Colony Algorithm) and deterministic ones (Generalized Benders Decomposition, Branch and Cut Algorithm).
APA, Harvard, Vancouver, ISO, and other styles
50

Mhedhbi, Meriem. "Contribution to deterministic simulation of Body area network channels in the context of group navigation and body motion analysis." Thesis, Rennes 1, 2015. http://www.theses.fr/2015REN1S049/document.

Full text
Abstract:
Recent advances in wireless technologies and systems, empowered by the miniaturization of devices, give rise to a new generation of personal area networks allowing communications around the human body: Body Area Networks (BANs). This thesis studies Body Area Network channels in indoor environments in the context of motion analysis and group navigation. In this work, a simulation approach for BAN channels is presented. The propagation channel simulator is based on ray tracing, and the simulation approach relies on perturbed antenna patterns and on motion capture data for modelling human mobility. First, we investigate the antenna issue and the influence of the proximity of the human body on the antenna radiation pattern, and a simple model is used to predict the radiation pattern of an antenna placed in proximity to a human body. Second, the physical simulator is presented and the simulation approach is introduced. In order to check the proposed approach, preliminary simulations were carried out and a first comparison with available measurement data is made. Finally, a specific measurement campaign combining radio data and motion capture data was exploited to validate and evaluate the simulation results.
APA, Harvard, Vancouver, ISO, and other styles