Dissertations / Theses on the topic 'Low-Latency applications'

Listed below are the top 23 dissertations / theses for research on the topic 'Low-Latency applications', with abstracts reproduced where available in the record metadata.

1

McCaffery, Duncan James. "Supporting Low Latency Interactive Distributed Collaborative Applications in Mobile Environments." Thesis, Lancaster University, 2010. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.524740.

2

Leber, Christian [author], and Ulrich [academic supervisor] Brüning. "Efficient hardware for low latency applications / Christian Leber. Betreuer: Ulrich Brüning." Mannheim : Universitätsbibliothek Mannheim, 2012. http://d-nb.info/1034315552/34.

3

Tayarani, Najaran Mahdi. "Transport-level transactions : simple consistency for complex scalable low-latency cloud applications." Thesis, University of British Columbia, 2015. http://hdl.handle.net/2429/54520.

Abstract:
The classical move from single-server applications to scalable cloud services is to split the application state along certain dimensions into smaller partitions, each small enough in size and load to be absorbed by a separate server. Maintaining data consistency in the face of operations that cross partition boundaries imposes unwanted complexity on the application. While many ideal partitioning schemes readily exist for most applications, First-Person Shooter (FPS) games and Relational Database Management Systems (RDBMS) are instances of applications whose state cannot be trivially partitioned: for any partitioning scheme there exists an FPS/RDBMS workload that results in frequent cross-partition operations. In this thesis we propose that it is possible and effective to provide unpartitionable applications with a generic communication infrastructure that enforces strong consistency of the application's data to simplify cross-partition communication. Using this framework the application can use a sub-optimal partitioning mechanism without having to worry about crossing boundaries. We apply our thesis to take a head-on approach at scaling our target applications. We build three scalable systems with competitive performance: a key/value datastore, a system that scales fast-paced FPS games to epic-sized battles with hundreds of players, and a scalable, fully SQL-compliant database that stores tens of millions of items.
Faculty of Science, Department of Computer Science
4

Tarassu, Jonas. "GPU-Accelerated Frame Pre-Processing for Use in Low Latency Computer Vision Applications." Thesis, Linköpings universitet, Informationskodning, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-142019.

Abstract:
Attention to low-latency computer vision and video processing applications is growing every year, not least for VR and AR. In this thesis the Contrast Limited Adaptive Histogram Equalization (CLAHE) and radial distortion algorithms are implemented using both CUDA and OpenCL to determine whether these types of algorithms are suitable for GPU implementations when low latency is of utmost importance. The result is an implementation of the block version of the CLAHE algorithm, which uses the GPU's built-in interpolation hardware to reduce block effects, and an implementation of the radial distortion algorithm that corrects a 1920x1080 frame in 0.3 ms. The thesis further concludes that the GPU platform can be a good choice if the data to be processed can be transferred to (and possibly from) the GPU fast enough, and that the choice of compute API is mostly a matter of taste.
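For readers who want to experiment with the two pre-processing steps benchmarked in the thesis, the sketch below uses OpenCV's CPU implementations of tiled CLAHE and lens undistortion; the thesis's own CUDA/OpenCL kernels are not shown, and the camera matrix and distortion coefficients here are invented for illustration.

```python
import numpy as np
import cv2  # pip install opencv-python

# Synthetic 1920x1080 grayscale frame standing in for camera input.
frame = np.random.randint(0, 256, (1080, 1920), dtype=np.uint8)

# Block-based CLAHE: the frame is split into tiles, each histogram-equalized,
# with interpolation between tiles to suppress block artifacts.
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
equalized = clahe.apply(frame)

# Radial distortion correction from an (illustrative) camera matrix and
# distortion coefficients k1, k2, p1, p2, k3.
K = np.array([[1000.0, 0.0, 960.0],
              [0.0, 1000.0, 540.0],
              [0.0, 0.0, 1.0]])
dist = np.array([-0.2, 0.05, 0.0, 0.0, 0.0])  # assumed coefficients
undistorted = cv2.undistort(equalized, K, dist)
print(undistorted.shape)
```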
5

Ky, Joël Roman. "Anomaly Detection and Root Cause Diagnosis for Low-Latency Applications in Time-Varying Capacity Networks." Electronic Thesis or Diss., Université de Lorraine, 2025. http://www.theses.fr/2025LORR0026.

Abstract:
The evolution of networks has driven the emergence of low-latency (LL) applications such as cloud gaming (CG) and cloud virtual reality (Cloud VR), which demand stringent network conditions, including low latency and high bandwidth. However, time-varying capacity networks introduce impairments such as delays, bandwidth fluctuations, and packet loss, which can significantly degrade the user experience of LL applications. This research designs methodologies for detecting and diagnosing performance anomalies in LL applications operating over cellular and Wi-Fi networks. To achieve this, realistic experimental testbeds were established to collect datasets that characterize network performance and capture key performance indicators (KPIs) of CG and Cloud VR applications over 4G and Wi-Fi environments. These datasets serve as the foundation for evaluating and developing machine-learning-based anomaly detection and diagnosis frameworks. The key contributions of this thesis include CATS, a contrastive-learning-based anomaly detection framework capable of efficiently identifying user-experience degradation in CG applications while remaining robust to data contamination, and RAID, a two-stage root-cause diagnosis framework designed to pinpoint the root causes of performance issues in Cloud VR. RAID demonstrated high efficiency in diagnosing Wi-Fi impairments, even with limited labeled data. The findings advance the fields of anomaly detection and root-cause diagnosis, offering actionable insights for network operators to optimize network performance and enhance service reliability in support of LL applications, which are set to revolutionize communication technologies and drive innovation across various industries.
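CATS itself is a purpose-built contrastive framework, but the detection task can be pictured with a generic unsupervised baseline. The sketch below, with invented KPI values, flags degraded cloud-gaming windows using scikit-learn's IsolationForest; it is only a stand-in for the thesis's method.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Toy KPI matrix: rows are 1-second windows, columns are (RTT ms, jitter ms,
# loss %, bitrate Mbps), stand-ins for the cloud-gaming KPIs in the thesis.
normal = rng.normal([30, 3, 0.1, 25], [5, 1, 0.05, 3], size=(500, 4))
degraded = rng.normal([120, 20, 2.0, 8], [20, 5, 0.5, 2], size=(20, 4))
kpis = np.vstack([normal, degraded])

# contamination is the assumed fraction of anomalous windows.
model = IsolationForest(contamination=0.05, random_state=0).fit(kpis)
labels = model.predict(kpis)  # -1 = anomalous window, 1 = normal
print("flagged windows:", int((labels == -1).sum()))
```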
6

Yang, Binxu. "On the design of a cost-efficient resource management framework for low latency applications." Thesis, University College London (University of London), 2018. http://discovery.ucl.ac.uk/10053739/.

Abstract:
The ability to offer low-latency communications is one of the critical design requirements for the upcoming 5G era. The current practice for achieving low latency is to overprovision network resources (e.g., bandwidth and computing resources), but this approach is not cost-efficient and cannot be applied at large scale. More cost-efficient resource management is therefore required to dynamically and efficiently exploit network resources while guaranteeing low latency. The advent of network virtualization provides novel opportunities for cost-efficient low-latency communications: it decouples network resources from physical machines through virtualization and groups resources in the form of virtual machines (VMs). Network resources can then be flexibly increased at any network location through VM auto-scaling to alleviate network delays caused by a lack of resources, while operational cost can be greatly reduced by shutting down under-utilized VMs (e.g., for energy saving). Network virtualization also enables the emerging concept of mobile edge computing, whereby VMs host low-latency applications at the network edge to shorten communication latency. Despite these advantages, a key challenge is the optimal management of the different physical and virtual resources for low-latency communications. This thesis addresses the challenge with a novel cost-efficient resource management framework targeting 1) the cost-efficient design of low-latency communication infrastructures; 2) dynamic resource management for low-latency applications; and 3) fault-tolerant resource management. Compared to current practice, the proposed framework achieves an 80% reduction in deployment cost for low-latency communication infrastructures, continuously saves up to 33% of operational cost through dynamic resource management while always achieving low latency, and succeeds in providing fault tolerance to low-latency communications at a guaranteed operational cost.
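The dynamic resource management described above can be pictured as a latency-driven auto-scaling loop. The toy control step below is a minimal sketch under invented thresholds and names (EdgeSite, SLA_MS); it is not the thesis's optimization framework.

```python
from dataclasses import dataclass

@dataclass
class EdgeSite:
    vms: int            # running VMs hosting the low-latency application
    p95_latency_ms: float
    utilization: float  # mean VM utilization in [0, 1]

SLA_MS = 20.0        # assumed latency target
SCALE_IN_UTIL = 0.3  # assumed low-utilization threshold

def autoscale(site: EdgeSite) -> EdgeSite:
    """One control step: add a VM on SLA violation, remove one when idle."""
    if site.p95_latency_ms > SLA_MS:
        site.vms += 1                      # alleviate queueing delay
    elif site.utilization < SCALE_IN_UTIL and site.vms > 1:
        site.vms -= 1                      # save operational cost
    return site

print(autoscale(EdgeSite(vms=2, p95_latency_ms=35.0, utilization=0.9)))
```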
7

Tasiopoulos, A. "On the deployment of low latency network applications over third-party in-network computing resources." Thesis, University College London (University of London), 2018. http://discovery.ucl.ac.uk/10049954/.

Abstract:
An increasing number of Low Latency Applications (LLAs) in the entertainment (Virtual/Augmented Reality), Internet-of-Things (IoT), and automotive domains require response times that challenge the traditional process of provisioning applications in distant data centres. At the same time, there is a trend towards deploying In-Network Computing Resources (INCRs) closer to end users, either as network equipment capable of performing general-purpose computations and/or as commercial off-the-shelf "data centres in a box", i.e., cloudlets, placed at different locations of Internet Service Providers (ISPs). That is, INCRs extend cloud computing to the edge and middle-tier locations of the network, providing significantly smaller response times than those achieved by the current "client-to-cloud" network model. In this thesis, we argue for the necessity of exploiting INCRs for application provisioning with the purpose of improving LLAs' Quality of Service (QoS) by essentially deploying applications closer to end users. To this end, this thesis investigates the deployment of LLAs over INCRs under fixed, mobile, and disrupted user connectivity environments. In order to fully reap the benefits of INCRs, we develop for each connectivity scenario algorithmic frameworks centred around the concept of a market where LLAs lease existing INCRs. The proposed frameworks take into account the particular characteristics of INCRs, such as their limited capacity for hosting application instances, and of LLAs, by addressing how many instances each application should deploy at each computing resource over time. Furthermore, since the smooth operation of network applications is typically supported by network functions such as load balancers and firewalls, we consider the deployment of complementary Virtual Network Functions to back LLA provisioning over INCRs. Overall, the key goal of this thesis is the investigation of using an Internet enhanced through INCRs as the communication platform for LLAs.
8

Schuh, Fabian [author], and Johannes B. [academic supervisor] Huber. "Digital Communications for Low Latency and Applications for Constant Envelope Signalling / Fabian Schuh. Gutachter: Johannes B. Huber." Erlangen : Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), 2016. http://d-nb.info/1083259539/34.

9

Masoumiyan, Farzaneh. "Low-latency communications for wide area control of energy systems." Thesis, Queensland University of Technology, 2020. https://eprints.qut.edu.au/135660/1/Farzaneh_Masoumiyan_Thesis.pdf.

Abstract:
This project provides reliable and low-latency communications for wide-area control in smart grids. For this purpose, a priority differentiation approach is presented, embedded with an application-layer acknowledgment mechanism for the reliable transmission of time-critical, high-priority data.
10

Brunello, Davide. "L4S in 5G networks." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-284554.

Abstract:
Low Latency Low Loss Scalable Throughput (L4S) is a technology which aims to provide high throughput and low latency for IP traffic, while also lowering the probability of packet loss. To reach this goal, it relies on Explicit Congestion Notification (ECN), a mechanism to signal congestion in the network without dropping packets. The congestion signals are then handled at the sender and receiver by scalable congestion control algorithms. This work first analyzes the challenges of implementing L4S in a 5G network. Using a proprietary state-of-the-art network simulator, L4S was implemented at the Packet Data Convergence Protocol (PDCP) layer of a 5G network, in a scenario where the physical layer has a 600 MHz carrier frequency and a 9 MHz transmission bandwidth, and the protocol stack follows the New Radio (NR) specifications. L4S was used to support Augmented Reality (AR) video gaming traffic, with the IETF experimental standard Self-Clocked Rate Adaptation for Multimedia (SCReAM) for congestion control. The results showed that video gaming traffic experiences lower delay with L4S support than without it. The latency improvement comes with an intrinsic throughput/latency trade-off; in all cases analyzed, L4S still yielded average application-layer throughput above the minimum requirements of high-rate latency-critical applications, even at high system load. Furthermore, the packet loss rate was significantly reduced by introducing L4S, and in combination with a Delay Based Scheduler (DBS) a packet loss rate very close to zero was reached.
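The core L4S mechanism is easy to sketch: instead of dropping packets from a deep queue, a shallow delay threshold triggers ECN marks that a scalable sender reacts to. The toy queue below assumes a 1 ms step threshold, in the spirit of the dual-queue AQMs used with L4S; the class name and threshold are invented, and this is not the simulator implementation from the thesis.

```python
from collections import deque

# Minimal sketch of shallow-threshold ECN marking (assumed 1 ms step).
MARK_THRESHOLD_S = 0.001  # queuing delay above which packets are CE-marked

class L4SQueue:
    def __init__(self):
        self.q = deque()  # (enqueue_time, packet)

    def enqueue(self, now, packet):
        self.q.append((now, packet))

    def dequeue(self, now):
        t_in, packet = self.q.popleft()
        sojourn = now - t_in
        # Mark instead of dropping: the scalable sender reacts to the
        # per-packet congestion signal by slightly reducing its rate.
        packet["ce"] = sojourn > MARK_THRESHOLD_S
        return packet

q = L4SQueue()
q.enqueue(0.000, {"seq": 1})
print(q.dequeue(0.002))  # sojourn 2 ms -> {'seq': 1, 'ce': True}
```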
11

Hunter, Timothy Jason. "Large-Scale, Low-Latency State Estimation Of Cyberphysical Systems With An Application To Traffic Estimation." Thesis, University of California, Berkeley, 2015. http://pqdtopen.proquest.com/#viewpdf?dispub=3686329.

Abstract:

Large physical systems are increasingly prevalent, and designing estimation strategies for them has become both a practical necessity and a complicated problem. Their sensing infrastructure is usually ad-hoc, and the estimate of interest is often a complex function of the data. At the same time, computing power is rapidly becoming a commodity. We show with the study of two estimation tasks in urban transportation how the proper design of algorithms can lead to significant gains in scalability compared to existing solutions.

A common problem in trip planning is to meet a given deadline, such as arriving at the airport within an hour. Existing routing services optimize for the expected time of arrival but do not provide the most reliable route, which accounts for the variability in travel times. Providing statistical information is even harder for trips in cities, which undergo a lot of variability. This thesis aims at building scalable algorithms for inferring statistical distributions of travel time over very large road networks, using GPS points from vehicles in real time. We consider two complementary algorithms that differ in the characteristics of the GPS data input and in the complexity of the model: a simpler streaming Expectation-Maximization algorithm that leverages very large volumes of extremely noisy data, and a novel Markov Model-Gaussian Markov Random Field that extracts global statistical correlations from high-frequency, privacy-preserving trajectories.

These two algorithms have been implemented and deployed in a pipeline that takes streams of GPS data as input, and produces distributions of travel times accessible as output. This pipeline is shown to scale on a large cluster of machines and can process tens of millions of GPS observations from an area that comprises hundreds of thousands of road segments. This is to our knowledge the first research framework that considers in an integrated fashion the problem of statistical estimation of traffic at a very large scale from streams of GPS data.
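The streaming Expectation-Maximization idea can be illustrated on a toy version of the problem. The sketch below runs an online EM update for a two-component Gaussian mixture of travel times (the components standing in for free-flow versus congested conditions); the thesis's models are far richer, and all constants here are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

w = np.array([0.5, 0.5])        # mixture weights
mu = np.array([30.0, 90.0])     # mean travel times (s), initial guesses
var = np.array([100.0, 100.0])  # variances

# Running sufficient statistics, updated with step size rho.
s0, s1, s2 = w.copy(), w * mu, w * (var + mu**2)

for t in range(1, 5001):
    x = rng.choice([rng.normal(40, 8), rng.normal(100, 15)])  # GPS-derived sample
    # E-step: responsibilities of each component for this observation.
    p = w * np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
    r = p / p.sum()
    # Stochastic approximation of the sufficient statistics (online EM).
    rho = 1.0 / (t + 10)
    s0 = (1 - rho) * s0 + rho * r
    s1 = (1 - rho) * s1 + rho * r * x
    s2 = (1 - rho) * s2 + rho * r * x * x
    # M-step: re-estimate parameters from the running statistics.
    w = s0 / s0.sum()
    mu = s1 / s0
    var = np.maximum(s2 / s0 - mu**2, 1e-6)

print(np.round(mu, 1), np.round(np.sqrt(var), 1))  # roughly [40, 100] and [8, 15]
```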

12

Harmassi, Mariem. "Thing-to-thing context-awareness at the edge." Thesis, La Rochelle, 2019. http://www.theses.fr/2019LAROS037.

Abstract:
The Internet of Things (IoT) today comprises a plethora of sensors and diverse connected objects, constantly collecting and sharing heterogeneous sensory data from their environment. This enables new applications that exploit the collected data to ease citizens' lives. These IoT applications are made context-aware thanks to data collected about the user's context, so that they can adapt their behavior autonomously, without human intervention. In this thesis, we propose a novel paradigm for Machine-to-Machine (M2M)/Thing-to-Thing (T2T) interactions, named "T2T context-awareness at the edge", which brings conventional context-awareness from the application front end to the application back end. More precisely, we propose to empower IoT devices with intelligence, allowing them to understand their environment and to adapt their behavior based on, and even act upon, information captured by neighboring devices, thus creating a collective intelligence. The first challenge in making IoT devices context-aware is (i) how to extract such information without deploying any dedicated resources for this task. We propose a context reasoner [1] based on cooperation among IoT devices located in the same surroundings, which mutually exchange data about each other's context. To let IoT devices see, hear, and smell the physical world for themselves, we first need to connect them so they can share their observations. For a mobile, energy-constrained device, the second challenge is (ii) how to discover as many neighbors as possible in its vicinity while preserving its energy resources. We propose WELCOME [2], a low-latency and energy-efficient neighbor discovery scheme based on a single-delegate election method. Finally, a Publish-Subscribe system that takes the edge context of IoT devices into account can greatly reduce overhead and save energy by avoiding transmissions of data that do not match application requirements; however, if not designed carefully, building such T2T context-awareness could imply an overload of subscriptions to meet context-estimation needs. Our third contribution therefore addresses (iii) how to make IoT devices context-aware while saving energy: we propose an energy-efficient and context-aware Publish-Subscribe scheme [3] that strikes a balance between the energy consumed by context estimation and the energy saved by context-based filtering near data sources.
13

Huang, Chia-jui, and 黃家瑞. "A Low Latency/Low Power Memory Controller for Multimedia/DSP Applications." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/66151810656115052838.

Abstract:
Master's thesis, National Chung Cheng University, Graduate Institute of Computer Science and Information Engineering, ROC academic year 94 (2005/06).
This thesis proposes a low-latency, low-power SDRAM memory controller based on AMBA AHB for multimedia and DSP applications. The proposed memory controller exploits the mechanisms of Burst Terminates Burst (BTB) and Anticipative Row Activation (ARA) for low-latency memory access. In addition, a priority-queue-based arbitration policy is proposed to achieve low-latency, fair, fixed-priority multi-channel scheduling. Experimental results show that the proposed memory controller reduces the cycle count by 44% to 63% when accessing an 8x8 2-D block. The controller was also applied to an MPEG-4 video decoding system for system-level verification, where it improves the performance of the MPEG-4 decoding system by 16% to 37%.
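A toy timing model makes the benefit of Anticipative Row Activation concrete: activating the next row in another bank during the current row's bursts hides the activation latency for block accesses. The cycle costs below are invented, not the thesis's measured figures.

```python
# Toy SDRAM timing model (cycle costs are illustrative, not the thesis's):
T_ACT, T_CAS, T_BURST = 3, 3, 4   # activate, CAS latency, 4-beat burst

def block_access_cycles(rows: int, bursts_per_row: int, anticipative: bool) -> int:
    cycles = 0
    for r in range(rows):
        if r == 0 or not anticipative:
            cycles += T_ACT               # row activation on the critical path
        # With ARA the next row (in another bank) is activated during the
        # current row's bursts, so its T_ACT is hidden.
        cycles += bursts_per_row * (T_CAS + T_BURST)
    return cycles

# An 8x8 block spanning 2 DRAM rows, 4 bursts per row:
print(block_access_cycles(2, 4, anticipative=False))  # 62
print(block_access_cycles(2, 4, anticipative=True))   # 59
```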
14

Lai, Hsu-Te, and 賴旭德. "Low Latency and Efficient Packet Scheduling for Streaming Applications." Thesis, 2003. http://ndltd.ncl.edu.tw/handle/30454995901658632809.

Abstract:
Master's thesis, National Central University, Graduate Institute of Computer Science and Information Engineering, ROC academic year 91 (2002/03).
Adequate bandwidth allocation and strict delay requirements are critical for real-time applications. Packet scheduling algorithms like Class Based Queueing (CBQ) and Nested Deficit Round Robin (Nested-DRR) are designed to provide the bandwidth reservation function, but they can cause unsteady packet latencies and introduce extra application handling overhead, such as allocating a large buffer for playing a media stream. High and unstable packet latency can jeopardize quality of service, since real-time applications prefer low playback latency. Existing scheduling algorithms that keep packet latency stable require knowing the details of individual flows, and GPS (Generalized Processor Sharing)-like algorithms do not consider the real behavior of a stream: a real stream is not perfectly smooth after being forwarded by routers, so GPS-like algorithms introduce extra delay on such streams. This thesis presents LLEPS, an algorithm that provides a low-latency and efficient packet scheduling service for streaming applications.
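As background for the round-robin family the abstract mentions, here is a textbook Deficit Round Robin sketch (plain DRR, not the Nested-DRR variant and not the proposed LLEPS); the quanta and packet sizes are invented.

```python
from collections import deque

# Textbook Deficit Round Robin: each flow gets a quantum of bytes per round;
# unused credit carries over, so flows share bandwidth in proportion to quantum.
flows = {
    "audio": {"queue": deque([160, 160, 160]), "quantum": 200, "deficit": 0},
    "video": {"queue": deque([1200, 1200]),    "quantum": 600, "deficit": 0},
}

def drr_round(flows):
    sent = []
    for name, f in flows.items():
        if not f["queue"]:
            f["deficit"] = 0          # idle flows accumulate no credit
            continue
        f["deficit"] += f["quantum"]
        while f["queue"] and f["queue"][0] <= f["deficit"]:
            size = f["queue"].popleft()
            f["deficit"] -= size
            sent.append((name, size))
    return sent

print(drr_round(flows))  # audio sends one 160 B packet, video none yet
```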
15

Liberatore, Marc D. "Low -latency anonymity systems: Statistical attacks and new applications." 2008. https://scholarworks.umass.edu/dissertations/AAI3315526.

Abstract:
In this dissertation, we study low-latency anonymity protocols and systems. Such systems enable anonymous communication where latency is not tolerated well, such as browsing the web, but also introduce new vulnerabilities not present in systems that hide timing information. We examine one such vulnerability, the profiling attack, as well as possible defenses to such an attack. We also examine the feasibility of using low-latency anonymity techniques to support a new application, Voice over IP (VoIP). First, we show that profiling attacks on low-latency anonymity systems are feasible. The attack we study is based upon pre-constructing profiles of communication and identifying the sender of encrypted, anonymized traffic on the basis of these profiles. Second, we present results from a large-scale measurement study and the application of this attack to the measured data. These results indicate that profiling is practical across sets of thousands of possible initiators and that such profiles remain valid for weeks at a time. Third, we evaluate defenses against the profiling attack and their effects upon system performance. We then demonstrate the feasibility of supporting anonymous VoIP; specifically, we show supporting measurement data and outline the changes current anonymity systems would require to carry such traffic. We also show how such systems are potentially more vulnerable to known attacks and start to examine the tradeoffs between VoIP performance and anonymity inherent in such systems.
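The profiling attack can be pictured with a minimal sketch: learn a packet-length histogram per candidate sender, then attribute an anonymized trace to the closest profile. The toy data and cosine-similarity matching below are illustrative stand-ins for the dissertation's actual features and classifier.

```python
import numpy as np

rng = np.random.default_rng(2)
BINS = np.arange(0, 1501, 100)  # packet-length bins in bytes

def profile(lengths):
    h, _ = np.histogram(lengths, bins=BINS)
    return h / max(h.sum(), 1)

# Training: observed (unanonymized) traffic from two candidate senders.
profiles = {
    "alice": profile(rng.normal(300, 60, 1000)),   # chatty small packets
    "bob": profile(rng.normal(1100, 120, 1000)),   # bulk transfers
}

def cosine(a, b):
    return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

# Attack: a new anonymized trace is attributed to the closest profile.
trace = profile(rng.normal(310, 60, 200))
guess = max(profiles, key=lambda name: cosine(profiles[name], trace))
print("attributed to:", guess)  # -> alice
```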
16

Hong, Hua-yi, and 洪華憶. "Implementation of Variable-Latency Floating-Point Multipliers for Low-Power Applications." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/69334154350259237827.

Abstract:
Master's thesis, National Sun Yat-sen University, Department of Computer Science and Engineering, ROC academic year 96 (2007/08).
Floating-point multipliers are typically power-hungry, which is undesirable in many embedded applications. This thesis proposes a variable-latency floating-point multiplier architecture suitable for low-power, high-performance, and high-accuracy applications. The architecture splits the significand multiplier into upper and lower parts and predicts the required significand product and sticky bit from the upper part. When the prediction is correct, the computation of the lower part is disabled and the rounding operation is significantly simplified, so the floating-point multiplication can complete early. A detailed design and simulation of the floating-point multiplier is presented, together with an evaluation comparing its power consumption with fast and conventional floating-point multipliers. Experimental results demonstrate that the proposed double-precision multiplier consumes up to 26.41% less power and 24.97% less energy than the fast floating-point multiplier, at the expense of only a small area and delay overhead, while its performance closely approaches that of fast floating-point multipliers.
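The early-completion condition can be modeled at the integer level: sum only a few guard columns of the lower half of the partial-product array, and take the fast path when the unseen columns provably cannot carry into the kept bits. The sketch below is a simplified model with invented parameters; it omits the sticky-bit prediction that the thesis's design also performs.

```python
import random

# Simplified column-split model: the product's low L columns only influence
# the kept upper bits through a carry, which G guard columns let us bound.
N, L, G = 24, 24, 6          # significand width, split column, guard columns

def predict_carry(a: int, b: int):
    mask_g = ((1 << L) - 1) ^ ((1 << (L - G)) - 1)   # columns L-G .. L-1
    s_full, s_guard = 0, 0
    for i in range(N):
        if (b >> i) & 1:
            pp = (a << i) & ((1 << L) - 1)           # pp bits below column L
            s_full += pp
            s_guard += pp & mask_g
    slack = N << (L - G)                  # max total of the unseen columns
    certain = (s_guard % (1 << L)) + slack < (1 << L)
    return s_full >> L, s_guard >> L, certain

random.seed(0)
hits = total = 0
for _ in range(2000):
    a = random.getrandbits(N) | 1 << (N - 1)   # normalized significands
    b = random.getrandbits(N) | 1 << (N - 1)
    true_c, pred_c, certain = predict_carry(a, b)
    if certain:                # fast path: lower half never summed fully
        total += 1
        hits += (pred_c == true_c)
print(hits, "/", total)        # certain predictions are always correct
```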
17

Chen, Lyu-han, and 陳律翰. "Credit-based Low Latency Packet Scheduling Algorithm for Real-time Applications." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/10927106907531708792.

Abstract:
Master's thesis, National Central University, Graduate Institute of Computer Science and Information Engineering, ROC academic year 95 (2006/07).
Real-time traffic flows, such as streaming audio and video, cannot tolerate high or unsteady packet latencies. Unfortunately, well-known scheduling algorithms such as Weighted Fair Queueing (WFQ) and Start-Time Fair Queueing (SFQ) are, in real networks, subject to high and unsteady latencies due to the unsteady queueing delay problem, the 'buffer underrun problem', explained later in this thesis. A few scheduling algorithms have addressed this problem in recent years, such as Low Latency Queueing (LLQ), which may suffer from starvation of low-priority traffic: the QoS guarantee satisfies only the flow with the highest priority. Another is the Low Latency and Efficient Packet Scheduling algorithm (LLEPS), which requires accurate support for an additional parameter, the time slot; how to determine the time-slot value precisely enough for LLEPS to work well is a further problem that LLEPS does not address, and even with an accurate time-slot value, the queueing delay of LLEPS is not stable enough when packets arrive in bursts in real networks. In this thesis, we therefore propose a packet scheduling algorithm, Credit-Based Low Latency Packet Scheduling (CBLLPS), which uses an adaptive credit function to ensure low latency for streaming applications. Simulation results are also presented.
18

Chen, Po-Yun, and 晉伯芸. "Development of DVB-MHP Tool for Authoring Low-Latency Interactive TV Applications." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/zpyyz2.

Abstract:
Master's thesis, National Taipei University of Technology, Department of Electrical Engineering, ROC academic year 94 (2005/06).
Thanks to the rapid growth of digital television (DTV) technology, TV operators can now deliver applications for interactive services via the television broadcasting system. The DVB-MHP standard defines the delivery of interactive applications using the DSM-CC carousel for periodical transmission. For a transport stream carrying multiple services, the bandwidth allocated to an application becomes limited, so application delivery time and start-up time in the receiver can be painfully slow when large applications are carried in the DSM-CC carousel, especially for low-end receivers with limited cache space. To reduce this latency from the head end to the MHP receiver, an MHP multi-shot application framework is proposed to use the receiver's cache more efficiently. This architecture divides application resources into shot units, optimizes application start-up time by repeating the very first shot in the carousel, and introduces a pre-loading method to cache the next shot candidates in advance. Furthermore, the architecture is integrated into our self-developed MHP authoring tool for rapid graphical user interface (GUI) design, and is successfully verified.
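The start-up latency trade-off behind the multi-shot framework can be estimated with a back-of-the-envelope carousel model: repeating the initial shot more often shortens the expected wait for it, at the cost of a longer carousel cycle. All sizes and bandwidths below are invented, and repeats are assumed evenly spaced.

```python
# Toy DSM-CC carousel model: average wait until the initial "shot" arrives,
# as a function of how often it is repeated in one carousel cycle.
BANDWIDTH = 250_000                # bytes/s allotted to the app carousel
INITIAL, OTHER = 50_000, 400_000   # initial shot vs. remaining resources

def mean_startup_wait(repeats: int) -> float:
    cycle = repeats * INITIAL + OTHER        # bytes in one carousel cycle
    gap = cycle / repeats / BANDWIDTH        # average spacing of the shot
    return gap / 2 + INITIAL / BANDWIDTH     # mean wait + download time

for r in (1, 2, 4):
    print(r, round(mean_startup_wait(r), 2), "s")  # 1.1 s, 0.7 s, 0.5 s
```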
19

Beuschel, Ralf Michael [author]. "Video compression systems for low-latency applications / von Ralf Michael Beuschel, geb. Schreier." 2010. http://d-nb.info/1001992385/34.

20

Wang, Chih-Wei, and 王志偉. "An Optimal Frame Data Mapping with Low Latency Memory Controller for Multimedia Applications." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/04452187380187187400.

Abstract:
Master's thesis, National Chung Cheng University, Graduate Institute of Computer Science and Information Engineering, ROC academic year 95 (2006/07).
This thesis proposes an optimal frame-data mapping with a low-latency memory controller, based on the AMBA AHB interface, for multimedia applications. The proposed memory controller exploits the optimized frame-data mapping together with the BTB and ARA mechanisms for low-latency memory access, with the goal of reducing the access cycle count and improving total system performance. The optimal frame-data mapping allocates frame data to memory addresses in a block-based layout according to the access patterns of video operations, placing frame data at suitable addresses for different block sizes and reducing the latency of frame-data accesses. As for the low-latency mechanisms, BTB overcomes the AHB interface's limitation on non-sequential data access, increasing bus utilization and reducing the latency of non-sequential accesses, while ARA uses bank-level parallelism to hide the overhead of crossing memory rows. Simulation results show that the proposed memory controller improves the system performance of MPEG-4 and MPEG-2 decoder systems by 40%.
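The block-based mapping idea can be sketched directly: tile the frame so that the pixels of one block are contiguous in memory, keeping a block fetch within one DRAM row. The sketch below compares raster-order and tiled addressing for an assumed 16x16 tile.

```python
# Sketch of block-based frame-data mapping (tiling): pixels of the same
# macroblock become contiguous in memory, so a block fetch stays within one
# DRAM row instead of touching one row per image line (sizes are assumed).
WIDTH, BLOCK = 1920, 16            # frame width and tile edge in pixels
BLOCKS_PER_ROW = WIDTH // BLOCK

def linear_addr(x: int, y: int) -> int:
    return y * WIDTH + x           # raster order: rows of a block far apart

def tiled_addr(x: int, y: int) -> int:
    bx, by = x // BLOCK, y // BLOCK
    ox, oy = x % BLOCK, y % BLOCK
    return (by * BLOCKS_PER_ROW + bx) * BLOCK * BLOCK + oy * BLOCK + ox

# Two vertically adjacent pixels of one block: ~WIDTH apart vs. BLOCK apart.
print(linear_addr(5, 3) - linear_addr(5, 2))  # 1920
print(tiled_addr(5, 3) - tiled_addr(5, 2))    # 16
```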
21

Cheng, Hung-Yi, and 鄭宏毅. "Scattering Channel Estimation and Multi-Connectivity Technology for 5G Low Latency Applications Under indoor mmWave Small Cells." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/fnettr.

Abstract:
Doctoral dissertation, National Taiwan University, Graduate Institute of Electronics Engineering, ROC academic year 106 (2017/18).
The next-generation (5G) communication systems have three directions: enhanced mobile broadband (eMBB), massive machine-type communications (mMTC), and ultra-reliable and low-latency communications (URLLC). Low-latency service delivery is probably one of the most challenging 5G goals and may imply costly investments. The introduction of low-latency applications presents substantial technical challenges but foresees major changes in the way business is done: the upcoming 5G standard's support for low latency creates new indoor business demands in, for example, the healthcare, transport, entertainment, and manufacturing industries. Along with the evolution of wireless technologies, users are expected to run indoor real-time applications. Under tight timing constraints, some applications require ultra-reliable communication, for instance mission-critical control, while others involve high-throughput transmission, for instance augmented reality (AR). It is very challenging to fulfill the stringent timing requirements of such resource-hungry services, and many solutions have been proposed in ultra-dense millimeter-wave (mmWave) networks (UDNs). Furthermore, to improve signal quality and reliability in UDNs, multi-connectivity techniques have emerged, in which one device can be simultaneously connected to several small cells (SCs). Hybrid beamforming for mmWave devices, which can transmit several data streams simultaneously, is considered attractive for highly reliable or high-throughput communications; under low-latency criteria, however, this hybrid design requires instantaneous multi-path channel state information (MP-CSI), i.e., an effective multi-beam steering and estimation algorithm. This dissertation concentrates on such indoor low-latency mmWave scenarios and develops fast mmWave channel estimation under a limited number of training steps, in two main scenarios. First, for high-throughput, low-latency devices, we develop a novel idea of progressive channel estimation at a single transceiver: our algorithms make multiple coarse beams emerge within a few training steps, so that multiplexing gain appears immediately. Second, for ultra-reliable and low-latency communications, our method measures as many links as possible in a limited training time, instead of all possible links in an mmWave UDN, so that multi-connectivity is rapidly established. The work comprises three main topics. First, for high-throughput devices, we propose Progressive Multi-Beam Estimation (PMBE), which probes multiple channel gains concurrently instead of sequentially and provides preliminary concurrent multi-beam steering; since it is based on a DFT (Discrete Fourier Transform) codebook, the method is also called DFT-based PMBE. The PMBE needs only 3% of the training steps of an exhaustive search to achieve 85% of its spectral efficiency. Second, we further enhance the multi-beam probing technique by combining a proposed FFT-based (Fast Fourier Transform) codebook with the PMBE, together with an FFT-based hybrid beamforming design using a single-connected, hardware-efficient, and energy-efficient architecture; the FFT-based codebook can be viewed as a DFT codebook with a built-in bit-reversal scrambling mechanism that improves the multiplexing gain. Third, we focus on ultra-reliable, low-latency devices that achieve high reliability through multi-connectivity strategies; for low-latency requirements, efficient multi-connectivity estimation under strictly limited training steps is needed, and we propose down-/uplink multi-connectivity measurement using the FFT-based codebook to estimate multiple links from different SCs. Simulation results show that our method needs fewer training steps to access more SCs while acquiring high multi-link quality. In summary, the proposed algorithms efficiently address the low-latency estimation problems of both high-throughput and highly reliable devices.
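The codebooks at the heart of these estimation schemes are easy to construct. The sketch below builds a unit-norm DFT beam codebook for a 16-antenna uniform linear array and applies a bit-reversal column permutation, the scrambling idea the FFT-based codebook builds on; this is a generic construction, not the dissertation's PMBE training procedure.

```python
import numpy as np

N = 16
n, k = np.arange(N)[:, None], np.arange(N)[None, :]
dft_codebook = np.exp(-2j * np.pi * n * k / N) / np.sqrt(N)  # column k = beam k

def bit_reverse(i: int, bits: int) -> int:
    return int(format(i, f"0{bits}b")[::-1], 2)

order = [bit_reverse(i, 4) for i in range(N)]   # 0, 8, 4, 12, 2, 10, ...
fft_codebook = dft_codebook[:, order]           # bit-reversal-scrambled beams

# Each column is a unit-norm steering vector; beam k points where the array
# response aligns with it.
print(np.round(np.linalg.norm(fft_codebook, axis=0), 6))  # all ones
```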
22

Kala, S. "ASIC Implementation of A High Throughput, Low Latency, Memory Optimized FFT Processor." Thesis, 2012. https://etd.iisc.ac.in/handle/2005/2557.

Abstract:
The rapid advancements in semiconductor technology have led to constant shrinking of transistor sizes as per Moore's Law. Wireless communications is one field which has seen explosive growth, thanks to the cramming of more transistors into a single chip. The design of these systems involves trade-offs between performance, area, and power. The Fast Fourier Transform is an important component in most wireless communication systems, widely used in applications like OFDM transceivers, spectrum sensing in cognitive radio, image processing, and radar signal processing; it is the most compute-intensive and time-consuming operation in most of these applications. It is always a challenge to develop an architecture which gives high throughput while reducing latency without much area overhead. Next-generation wireless systems demand high transmission efficiency, and hence the FFT processor should be capable of computing much faster; architectures based on smaller radices for computing longer FFTs are inefficient. In this thesis, a fully parallel unrolled FFT architecture based on a novel radix-4 engine is proposed, catering to a wide range of applications. The radix-4 butterfly unit takes all four inputs in parallel and can selectively produce one of the four outputs. The proposed architecture uses Radix-4^3 and Radix-4^4 algorithms for the computation of various FFTs. The Radix-4^4 block can take all 256 inputs in parallel and use select control signals to generate one of the 256 outputs. In existing Cooley-Tukey architectures, the output from each stage has to be reordered before the next stage can start computation, which requires intermediate storage after each stage; in our architecture, each stage can directly generate the reordered outputs and hence reduce these buffers. A solution for the output-reordering problem in Radix-4^3 and Radix-4^4 FFT architectures is also discussed. Although the hardware complexity in terms of adders and multipliers is increased in our architecture, a significant reduction in intermediate memory requirements is achieved. FFTs of sizes from 64-point to 64K-point have been implemented in an ASIC using UMC 130nm CMOS technology. The data representation is 16-bit fixed-point, chosen to maximize the Signal to Quantization Noise Ratio (SQNR). The architecture is found to be most suitable for computing large FFTs: for 4096-point and 64K-point FFTs, this design gives comparable throughput with a considerable reduction in area and latency compared to state-of-the-art implementations. The 64K-point FFT architecture achieves a throughput of 1332 mega-samples per second with an area of 171.78 mm^2 and a total power of 10.7 W at 333 MHz.
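The radix-4 butterfly at the core of the proposed engine is simply a 4-point DFT; each stage of a radix-4 FFT applies it N/4 times. Here is a minimal check against NumPy's FFT.

```python
import numpy as np

# The radix-4 butterfly: a 4-point DFT computed with additions and
# multiplications by +/-1 and +/-j only (no twiddle multipliers needed).
def radix4_butterfly(a, b, c, d):
    return (
        a + b + c + d,              # X[0]
        a - 1j * b - c + 1j * d,    # X[1]
        a - b + c - d,              # X[2]
        a + 1j * b - c - 1j * d,    # X[3]
    )

x = np.array([1.0, 2.0, 3.0, 4.0])
print(np.allclose(radix4_butterfly(*x), np.fft.fft(x)))  # True
```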
23

Hu, Junhao. "Directed connectivity analysis and its application on LEO satellite backbone." Thesis, 2021. http://hdl.handle.net/1828/13369.

Abstract:
Network connectivity is a fundamental property affecting network performance. Given the reliability of each link, network connectivity determines the probability that a message can be delivered from the source to the destination. In this thesis, we study directed network connectivity, where a message is forwarded toward the destination hop by hop, as long as the neighbor(s) are closer to the destination. Directed connectivity, closely related to directed percolation, is very complicated to calculate; the existing state of the art can only calculate directed connectivity for a lattice network up to size 10 × 10. We devise a new approach that is simpler and more scalable and can handle general network topologies and heterogeneous links. The proposed approach uses an unambiguous hop count to divide the network into hops, applies a two-step pre-processing to transform hop-count-ambiguous networks into unambiguous ones, and then derives the end-to-end connectivity by using the Markov property to obtain the state-transition probability hop by hop. Second, with tens of thousands of Low Earth Orbit (LEO) satellites covering the Earth, LEO satellite networks can provide coverage and services that are otherwise not possible with terrestrial communication systems, and their regular, dense constellations provide new opportunities and challenges for network protocol design. We apply the directed-connectivity analytical model to LEO satellite backbone networks to ensure ultra-reliable and low-latency (URLL) services, and propose a directed percolation routing (DPR) algorithm that lowers the cost of transmission without sacrificing speed. Using the Starlink constellation (with 1,584 satellites) as an example, the proposed DPR achieves latency reductions of a few to tens of milliseconds for inter-continental transmissions compared to the Internet backbone, while maintaining high reliability without link-layer retransmissions. Finally, considering the link-redundancy overhead and the delay/reliability trade-off, DPR can control the size of the percolation; in other words, a subset of links can be chosen as active links according to the reliability and cost trade-off.
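The directed-connectivity quantity being analyzed can be approximated by simulation. The Monte Carlo sketch below estimates end-to-end delivery probability on a grid where links point only toward the destination (right or down) and each succeeds independently with probability p; it is a toy stand-in for the exact hop-by-hop Markov analysis in the thesis.

```python
import random

def connected_once(h, w, p):
    """One trial: propagate reachability from (0,0) over forward links."""
    reach = [[False] * w for _ in range(h)]
    reach[0][0] = True
    for i in range(h):          # row-major order visits predecessors first
        for j in range(w):
            if not reach[i][j]:
                continue
            if i + 1 < h and random.random() < p:
                reach[i + 1][j] = True   # downward link succeeded
            if j + 1 < w and random.random() < p:
                reach[i][j + 1] = True   # rightward link succeeded
    return reach[h - 1][w - 1]

def connectivity(h=10, w=10, p=0.9, trials=20000):
    return sum(connected_once(h, w, p) for _ in range(trials)) / trials

print(round(connectivity(), 3))  # empirical source-to-destination delivery probability
```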