Dissertations / Theses on the topic 'Encoder'

Consult the top 50 dissertations / theses for your research on the topic 'Encoder.'


1

Hudgins, Hayden. "Human Path Prediction using Auto Encoder LSTMs and Single Temporal Encoders." DigitalCommons@CalPoly, 2020. https://digitalcommons.calpoly.edu/theses/2119.

Full text
Abstract:
Due to automation, the world is changing at a rapid pace. Autonomous agents have become more common over the last several years and, as a result, have created a need for improved software to back them up. The most important aspect of this software is path prediction, as robots need to be able to decide where to move in the future. In order to accomplish this, a robot must know how to avoid humans, putting frame prediction at the core of many modern solutions. A popular way to solve the complex problem of frame prediction is the Auto Encoder LSTM. Though there are many implementations, at its core it is a neural network composed of a series of time-sensitive processing blocks that shrink and then grow the data's dimensions to make a prediction. The idea of using Auto Encoder-style networks for frame prediction has also been adapted by others to make Temporal Encoders. These neural networks work much like traditional Auto Encoders, in which the data is reduced and then expanded back up; they attempt to tease out a series of frames, including a predictive frame of the future. The problem with many of these networks is that they take an immense amount of computational power and time to reach an acceptable level of performance. This thesis presents possible ways of pre-processing the input frames to these networks in order to gain performance, in the best case seeing a 360x improvement in accuracy compared to the original models. This thesis also extends the work done with Temporal Encoders to create more precise prediction models, which showed consistent improvements of at least 50% on some metrics. All of the generated models were compared using a simulated data set collected from recordings of ground-level viewpoints in Cities: Skylines. The predicted frames were then analyzed using a common perceptual distance metric, Minkowski distance, as well as a custom metric that tracked distinct areas in frames. All experiments were run on a constrained system in order to see the effects of the changes as they pertain to systems with limited hardware access.
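For orientation, a minimal sketch of an Auto Encoder LSTM frame predictor follows, assuming PyTorch; the frames are flattened vectors and the layer sizes are illustrative, whereas the thesis's models are convolutional and far larger.

```python
# A minimal sketch of an Auto Encoder LSTM frame predictor, assuming PyTorch.
# Frame and latent sizes are invented for illustration.
import torch
import torch.nn as nn

class AutoEncoderLSTM(nn.Module):
    def __init__(self, frame_dim=1024, latent_dim=128):
        super().__init__()
        self.encoder = nn.Linear(frame_dim, latent_dim)  # shrink dimensions
        self.lstm = nn.LSTM(latent_dim, latent_dim, batch_first=True)
        self.decoder = nn.Linear(latent_dim, frame_dim)  # grow them back

    def forward(self, frames):                 # (batch, time, frame_dim)
        z = torch.relu(self.encoder(frames))   # per-frame bottleneck
        out, _ = self.lstm(z)                  # time-sensitive processing
        return self.decoder(out[:, -1])        # predicted next frame

clip = torch.randn(2, 8, 1024)                 # two clips of 8 flattened frames
next_frame = AutoEncoderLSTM()(clip)           # shape: (2, 1024)
```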
2

Bondurant, Philip D., and Andrew Driesman. "Smart PCM Encoder." International Foundation for Telemetering, 1995. http://hdl.handle.net/10150/611601.

Full text
Abstract:
International Telemetering Conference Proceedings / October 30-November 02, 1995 / Riviera Hotel, Las Vegas, Nevada
In this paper, a new concept in PCM telemetry encoding equipment is described. Existing "programmable" PCM encoders allow only simple changes in the functionality of the hardware, such as input gain, offset, and word formatting. More importantly, these encoders do not provide capability for "in-flight" processing of signals and in general have not taken advantage of existing hardware and software digital signal processing technology. In-flight processing of signals can provide a significant reduction in the required transmission bandwidth, allowing additional data that may not have otherwise been transmitted to be sent on the telemetry channel. A modular digital signal processor (DSP) based PCM encoder architecture is described that has a set of on-board processing algorithms configurable via a simple-to-use graphical user interface. Algorithms included are compression (lossy and lossless), Fourier transforms of various resolutions (typically followed by peak detection to provide a data rate reduction), extreme values (max, min, rms), time filtering, regression, trajectory prediction, and serial data stream processing. Custom algorithms can be developed and included as part of the suite of processing algorithms. The preprocessing algorithms exist as firmware on the DSPs and can accommodate as many different signals as the processing bandwidth of the DSP can handle. Typically one DSP can handle many input signals and different algorithms. The encoder is programmable via a standard RS-232 serial interface allowing the signal input configuration, telemetry frame layout, and on-board processing algorithms to be changed quickly.
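One concrete instance of the on-board processing the paper lists is a Fourier transform followed by peak detection; the sketch below (assuming NumPy; the signal, sample rate, and peak count are invented) shows how a few (frequency, magnitude) pairs can replace a whole block of raw samples on the telemetry channel.

```python
# FFT followed by peak detection, one of the data-rate-reduction algorithms
# named above; all numbers are illustrative.
import numpy as np

def fft_peaks(samples, fs, n_peaks=4):
    windowed = samples * np.hanning(len(samples))
    spectrum = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(samples), 1.0 / fs)
    idx = np.argsort(spectrum)[-n_peaks:]           # strongest bins
    return sorted(zip(freqs[idx], spectrum[idx]))   # (Hz, magnitude) pairs

fs = 8000.0
t = np.arange(1024) / fs
vib = np.sin(2 * np.pi * 440 * t) + 0.3 * np.sin(2 * np.pi * 1200 * t)
print(fft_peaks(vib, fs))   # 4 pairs instead of 1024 raw words
```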
3

Carr, John Peter. "Integrated optical encoder." Thesis, Heriot-Watt University, 2010. http://hdl.handle.net/10399/2520.

Full text
Abstract:
The three state contact process is the modification of the contact process at rate λ in which first infections occur at rate λ₁ instead. Chapters 2 and 3 consider the three state contact process on (graphs that have as set of sites) the integers with nearest neighbours interaction (that is, edges are placed among sites at Euclidean distance one apart). Results in Chapter 2 are meant to illustrate regularity of the growth of the process under the assumption that λ₁ ≥ λ, that is, reverse immunization. In Chapter 3, two results regarding the convergence rates of the process are given. Chapter 4 is concerned with the i.i.d. behaviour of the right endpoint of contact processes on the integers with symmetric, translation invariant interaction. Finally, Chapter 5 is concerned with two monotonicity properties of the three state contact process.
4

Chan, Ming-Yan. "Video encoder complexity reduction /." View abstract or full-text, 2005. http://library.ust.hk/cgi/db/thesis.pl?ELEC%202005%20CHANM.

Full text
5

Toutain, Philippe. "CCSDS PACKET TELECOMMAND ENCODER." International Foundation for Telemetering, 1992. http://hdl.handle.net/10150/608885.

Full text
Abstract:
International Telemetering Conference Proceedings / October 26-29, 1992 / Town and Country Hotel and Convention Center, San Diego, California
The European Space Agency (ESA) decided in March 1991 to phase out the existing telecommand standard (PSS-45) and replace it with the new CCSDS (Consultative Committee for Space Data Systems) compatible standard, the packet telecommand standard PSS-04-107. SCHLUMBERGER Industries has developed a telecommand encoder, the TC 3900, which complies with the packet telecommand standards. It belongs to a new family of modular products using new technologies and incorporates, in one single housing 7 units high and 19" wide, the telecommand encoder, a PSK/FSK sub-carrier modem, and WAN (Wide Area Network) and LAN (Local Area Network) interfaces. The CCSDS recommendations require the implementation of new functions that were not used with previous standards: we describe the new services provided by packet telecommanding and how they have been implemented in the TC 3900 encoder.
6

CONN, RAYMOND, and PHILLIP BREEDLOVE. "A MISSILE INSTRUMENTATION ENCODER." International Foundation for Telemetering, 1986. http://hdl.handle.net/10150/615423.

Full text
Abstract:
International Telemetering Conference Proceedings / October 13-16, 1986 / Riviera Hotel, Las Vegas, Nevada
The modern Pulse Code Modulation (PCM) telemetry system faces many unique challenges in terms of data acquisition diversity, and must specifically satisfy demanding missile requirements. The engineering considerations and hardware implementation are presented in this paper.
7

Kalchbrenner, Nal. "Encoder-decoder neural networks." Thesis, University of Oxford, 2017. http://ora.ox.ac.uk/objects/uuid:d56e48db-008b-4814-bd82-a5d612000de9.

Full text
Abstract:
This thesis introduces the concept of an encoder-decoder neural network and develops architectures for the construction of such networks. Encoder-decoder neural networks are probabilistic conditional generative models of high-dimensional structured items such as natural language utterances and natural images. Encoder-decoder neural networks estimate a probability distribution over structured items belonging to a target set conditioned on structured items belonging to a source set. The distribution over structured items is factorized into a product of tractable conditional distributions over individual elements that compose the items. The networks estimate these conditional factors explicitly. We develop encoder-decoder neural networks for core tasks in natural language processing and natural image and video modelling. In Part I, we tackle the problem of sentence modelling and develop deep convolutional encoders to classify sentences; we extend these encoders to models of discourse. In Part II, we go beyond encoders to study the longstanding problem of translating from one human language to another. We lay the foundations of neural machine translation, a novel approach that views the entire translation process as a single encoder-decoder neural network. We propose a beam search procedure to search over the outputs of the decoder to produce a likely translation in the target language. Besides known recurrent decoders, we also propose a decoder architecture based solely on convolutional layers. Since the publication of these new foundations for machine translation in 2013, encoder-decoder translation models have been richly developed and have displaced traditional translation systems both in academic research and in large-scale industrial deployment. In services such as Google Translate these models process in the order of a billion translation queries a day. In Part III, we shift from the linguistic domain to the visual one to study distributions over natural images and videos. We describe two- and three-dimensional recurrent and convolutional decoder architectures and address the longstanding problem of learning a tractable distribution over high-dimensional natural images and videos, where the likely samples from the distribution are visually coherent. The empirical validation of encoder-decoder neural networks as state-of-the-art models of tasks ranging from machine translation to video prediction has a two-fold significance. On the one hand, it validates the notions of assigning probabilities to sentences or images and of learning a distribution over a natural language or a domain of natural images; it shows that a probabilistic principle of compositionality, whereby a high-dimensional item is composed from individual elements at the encoder side and whereby a corresponding item is decomposed into conditional factors over individual elements at the decoder side, is a general method for modelling cognition involving high-dimensional items; and it suggests that the relations between the elements are best learnt in an end-to-end fashion as non-linear functions in distributed space. On the other hand, the empirical success of the networks on the tasks characterizes the underlying cognitive processes themselves: a cognitive process as complex as translating from one language to another that takes a human a few seconds to perform correctly can be accurately modelled via a learnt non-linear deterministic function of distributed vectors in high-dimensional space.
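The factorization described in this abstract is conventionally written as follows (notation assumed here, not quoted from the thesis), with x the source item, y = (y_1, ..., y_T) the target item, and enc_θ(x) the encoder's representation:

```latex
p_\theta(y \mid x) \;=\; \prod_{t=1}^{T} p_\theta\bigl(y_t \,\bigm|\, y_{<t},\ \mathrm{enc}_\theta(x)\bigr)
```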
8

Boyd, Phillip L. "Recovery of unknown constraint length and encoder polynomials for rate 1/2 linear convolutional encoders." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 1999. http://handle.dtic.mil/100.2/ADA375935.

Full text
Abstract:
Thesis (M.S. in Electrical Engineering)--Naval Postgraduate School, December 1999. Thesis advisor(s): Clark Robertson, Tri Ha, Ray Ramey. Includes bibliographical references (p. 79). Also available online.
9

Wilson, Brian George. "5-tone ZVEI encoder analyser." Thesis, Cape Technikon, 1993. http://hdl.handle.net/20.500.11838/1141.

Full text
Abstract:
Thesis (M.Diploma in Technology)--Cape Technikon, 1993
This thesis describes the development of a 5-Tone Zentralverband Elektrotechnische Industrie (ZVEI) Encoder Analyser. The 5-Tone ZVEI Encoder Analyser is used by the Radio Section of the Test and Metering Branch, which falls under the Electricity Department of the Cape Town City Council. It assists the Quality Assurance Technician in determining whether the 5-tone ZVEI encoder of the radio under test is operating within the manufacturer's specifications. Various manufacturers of radio equipment tender for the supply of mobile radios fitted with ZVEI tone encoders. The Radio Section is now capable of testing all the various radios and comparing the analysed ZVEI specifications of each manufacturer's radio. The results can be used to assist management in deciding which radio would be the most suitable for purchasing. The development of the 5-Tone ZVEI Encoder Analyser involved the design and development of hardware and software. It was designed to be housed in a compact enclosure and to interface to a Motorola Communications System Analyser Model R-2001C. The RF output from the radio under test connects to the RF input of the Communications System Analyser, and the demodulated output of the Communications System Analyser connects to the input of the 5-Tone ZVEI Encoder Analyser. The software was designed using the PLM-51 high-level language to provide real-time analysis of various selective calls (selcalls) received from the demodulated output of the Communications System Analyser. Once all 5 tones of the ZVEI selcall have been analysed, the software background task is flagged and the analysed results are displayed in various display modes on a 16-character by 4-line dot matrix display. The following parameters of the ZVEI selcall are analysed: i) digits; ii) frequency for each of the 5 tones; iii) tone duration for each of the 5 tones; iv) frequency error for the 5 tones; v) tone duration error for the 5 tones. The design and development of the 5-Tone ZVEI Encoder Analyser was conducted at the Computer Section of the Electricity Department, Cape Town City Council.
10

Zhou, Chong. "Robust Auto-encoders." Digital WPI, 2016. https://digitalcommons.wpi.edu/etd-theses/393.

Full text
Abstract:
In this thesis, our aim is to improve deep auto-encoders, an important topic in the deep learning area, which has shown connections to latent feature discovery models in the literature. Our model is inspired by robust principal component analysis, and we build an outlier filter on top of a basic deep auto-encoder. By adding this filter, we can split the input data X into two parts X = L + S, where L can be better reconstructed by a deep auto-encoder and S contains the anomalous parts of the original data X. Filtering out the anomalies increases the robustness of the standard auto-encoder, and thus we name our model 'Robust Auto-encoder'. We also propose a novel solver for the robust auto-encoder which alternately optimizes the reconstruction cost of the deep auto-encoder and the sparsity of the outlier filter in pursuit of the optimal solution. This solver is inspired by the Alternating Direction Method of Multipliers, back-propagation and the Alternating Projection method, and we demonstrate the convergence properties of this algorithm and its superior performance in standard image recognition tasks. Last but not least, we apply our model to multiple domains, especially cyber-data analysis, where deep models are seldom currently used.
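A toy sketch of the X = L + S alternation follows, with the deep auto-encoder replaced by a truncated-SVD stand-in so the example stays self-contained; the shrinkage constant, rank, and sizes are invented, and this is not the thesis code.

```python
# Alternating X = L + S split: a low-rank "auto-encoder" stand-in explains L,
# and an l1 shrinkage step absorbs the anomalies into S.
import numpy as np

def soft_threshold(M, lam):                      # proximal step for ||S||_1
    return np.sign(M) * np.maximum(np.abs(M) - lam, 0.0)

def low_rank(M, k=2):                            # linear stand-in for the AE
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k]

def robust_split(X, lam=0.5, iters=20):
    S = np.zeros_like(X)
    for _ in range(iters):
        L = low_rank(X - S)                      # reconstruct the clean part
        S = soft_threshold(X - L, lam)           # absorb anomalies sparsely
    return L, S                                  # X is approximately L + S

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 20))
X[3, 7] += 25.0                                  # plant one outlier
L, S = robust_split(X)
print(np.unravel_index(np.abs(S).argmax(), S.shape))  # -> (3, 7)
```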
11

Johansson, Robin. "Easier Encoder Installation with Signal Modulation." Thesis, Linköpings universitet, Fysik och elektroteknik, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-121129.

Full text
Abstract:
Superimposed communication is the ability to communicate through an electrical conductor that simultaneously serves another function. Maintaining the communication requires introducing some form of signal modulation that can merge information at one end of the conductor and extract the same information at the other end. In this report, superimposed communication is studied for a supply line carrying a DC voltage; the supply powers an encoder through a cable containing several other conductors. Measurements have been performed on the current system, both of how a typical encoder operates and of how its installation cable may affect the end result. A pre-study presents relevant candidate solutions for maintaining communication over the supply line, and the proposals with the greatest potential are carried forward through simulations and measurements. To identify the relevant solutions, basic encoder information is reviewed together with a survey of the current market. Finally, the advantages and disadvantages of three different communication examples are presented and compared against the alternative of running an extra cable at installation; the price difference and the board space the circuits occupy are of particular interest. Since computationally powerful FPGAs were an available resource, the foundation of the communication was built on them, although no FPGA programming is described in the report. The final solution includes the coupling between an FPGA and the installation cable. The result is a robust and seemingly reliable FSK communication link that has been verified in simulations and physical set-ups. Arbitrary data can be generated in the FPGAs, sent as half-duplex, and read at the other end of the cable.
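A minimal sketch of the kind of FSK signalling the thesis converges on is shown below, assuming NumPy; the tone frequencies, bit time, and 24 V supply level are invented for illustration.

```python
# Two tones keyed by the bit value, superimposed (AC-coupled) on a DC supply.
import numpy as np

def fsk_modulate(bits, f0=50e3, f1=100e3, fs=1e6, bit_time=1e-4, vdc=24.0):
    t = np.arange(int(fs * bit_time)) / fs
    tones = {0: np.sin(2 * np.pi * f0 * t), 1: np.sin(2 * np.pi * f1 * t)}
    return vdc + 0.5 * np.concatenate([tones[b] for b in bits])

line = fsk_modulate([1, 0, 1, 1])   # data rides on the supply voltage
print(line.mean())                  # stays at the 24 V supply level
```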
12

Claman, Lawrence N. (Lawrence Nathan). "A two channel spatio-temporal encoder." Thesis, Massachusetts Institute of Technology, 1988. http://hdl.handle.net/1721.1/33807.

Full text
13

Rodriguez, Harry. "SPACEBORNE VME BASED PCM ENCODER (VPE)." International Foundation for Telemetering, 1993. http://hdl.handle.net/10150/608848.

Full text
Abstract:
International Telemetering Conference Proceedings / October 25-28, 1993 / Riviera Hotel and Convention Center, Las Vegas, Nevada
The VME bus is used in a wide variety of airborne applications. The particular application of the VPE is for use in the MSTI satellite to provide spacecraft telemetry. The VME based PCM encoder can provide telemetry from any stand alone data acquisition system. This paper describes the VME based PCM encoder. Since this design is ruggedized to meet the launch and environmental requirements for space, it can be used in any airborne VME system.
14

Esteki, Abolghasem. "Analisi dello stato dell'arte nello sviluppo di encoder ottici e magnetici." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021.

Find full text
Abstract:
This thesis work analyses the characteristics of position sensors, examining the theory and application of the technologies used in sensors for measuring linear and angular/rotary position, and providing information on sensor design in general and on the latest technological developments.
15

Padinjare, Sainath. "VLSI implementation of a turbo encoder/decoder /." Internet access available to MUN users only, 2003. http://collections.mun.ca/u?/theses,162832.

Full text
16

Erdogan, Baran. "Real-time Video Encoder on TMS320C6000 Platform." Master's thesis, METU, 2004. http://etd.lib.metu.edu.tr/upload/12605594/index.pdf.

Full text
Abstract:
Technology is integrated into daily life more than ever as it evolves through the communications area. In the past, it started with audio devices that let us communicate between the two far ends of a communication line; nowadays, visual communication comes to the fore. This became possible with improvements in the compression techniques for visual data and the increasing speed and optimized architecture of a new family of processors named Digital Signal Processors (DSPs). The Texas Instruments TMS320C6000 Digital Signal Processor family offers one of the fastest DSP cores on the market. The TMS320C64x sub-family processors are newly developed under the TMS320C6000 family to overcome the disadvantages of the predecessor TMS320C62x family. The TMS320C64x family has an optimized architecture for packed data processing, improved data paths and functional units, an improved memory architecture, and increased speed. These capabilities make this family of processors a good candidate for real-time video processing applications. The advantages of this core are used to implement the newly established H.264 Recommendation. The highly optimizing C compiler of the TMS320C64x enabled a fast-running implementation of the encoder blocks that place a heavy computational load on the encoder, making fast implementations of Motion Estimation, Transform, and Entropy Coding possible. The Simplified Densely Centered Uniform-P Search algorithm is used for fast estimation of motion vectors, and time-consuming parts were enhanced to improve the performance of the encoder.
17

Olyniec, Lee. "DESIGN OF A DIGITAL VOICE ENCODER CIRCUIT." International Foundation for Telemetering, 1995. http://hdl.handle.net/10150/608404.

Full text
Abstract:
International Telemetering Conference Proceedings / October 30-November 02, 1995 / Riviera Hotel, Las Vegas, Nevada
This paper describes the design and characteristics of a digital voice encoding circuit that uses the continuously variable slope delta (CVSD) modulation/demodulation method. With digital voice encoding, the audio signal can be placed into the pulse code modulation (PCM) data stream. Some methods of digitizing voice can require a large amount of bandwidth. Using the CVSD method, an acceptable quality of audio signal is obtained with a minimum of bandwidth. Presently, there is a CVSD microchip commercially available; however, this paper will describe the design of a circuit based on individual components that apply the CVSD method. With the advances in data acquisition technology, increased bit rates, and introduction of a corresponding MIL-STD, CVSD modulated voice will become more utilized in the flight test programs and a good knowledge of CVSD will become increasingly important. This paper will present CVSD theory, supported by graphical investigations of a working circuit under different conditions. Finally, several subjects for further study into CVSD will be addressed.
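For orientation, a software sketch of CVSD encoding follows, assuming the common three-bit run-of-equal-bits rule for slope adaptation; the step sizes and rates are invented, and the paper's circuit realizes this logic in hardware rather than code.

```python
# Continuously variable slope delta (CVSD) encoding: a comparator bit stream
# whose integrator step grows on slope overload and decays otherwise.
import numpy as np

def cvsd_encode(x, step_min=0.01, step_max=0.5, decay=0.98):
    est, step, history, bits = 0.0, step_min, [], []
    for sample in x:
        bit = 1 if sample >= est else 0     # comparator: input vs estimate
        bits.append(bit)
        history = (history + [bit])[-3:]    # last three output bits
        if len(history) == 3 and len(set(history)) == 1:
            step = min(step_max, step * 1.5)    # slope overload: grow step
        else:
            step = max(step_min, step * decay)  # otherwise shrink step
        est += step if bit else -step       # integrator tracks the input
    return bits

tone = np.sin(2 * np.pi * 800 * np.arange(2000) / 16000)
print(cvsd_encode(tone)[:16])
```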
18

Milles, George T. "Simple Digital Encoder for NTSC Composite Video." International Foundation for Telemetering, 1988. http://hdl.handle.net/10150/615042.

Full text
Abstract:
International Telemetering Conference Proceedings / October 17-20, 1988 / Riviera Hotel, Las Vegas, Nevada
The need exists to encode NTSC composite video into a serial digital bit stream for encryption prior to transmission. Further, this need exists in places where power and volume are at a premium. This paper describes a simple solution using the Continuously Variable Slope Delta Modulation technique of encoding all lines and fields in real time and is usable with clock rates from 5 to 25 MHz. The circuits presented use only a 5-volt power supply and two active devices: a comparator and either a dual flip-flop or serial shift register.
19

Abbas, Naeem. "Runtime Parallelisation Switching for MPEG4 Encoder on MPSoC." Thesis, KTH, Elektronik- och datorsystem, ECS, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-144038.

Full text
Abstract:
The recent development of multimedia applications on mobile terminals has raised the need for flexible and scalable computing platforms that are capable of providing considerable (application specific) computational performance within a low cost and a low energy budget. The MPSoC, with a multi-disciplinary approach resolving application mapping, platform architecture, and runtime management issues, provides such multiple heterogeneous, flexible processing elements. In an MPSoC, the run-time manager takes the design-time exploration information as an input and selects an active Pareto point based on the quality requirement and the available platform resources, where a Pareto point corresponds to a particular parallelization possibility of the target application. To make the best use of the system's scalability and take the application's flexibility a step further, the resource management and Pareto point selection decisions need to be adjustable at run-time. This thesis work experiments with run-time Pareto point switching for the MPEG-4 encoder. The work involves design-time exploration, embedding two parallelization possibilities of the MPEG-4 encoder into one single component, and enabling run-time switching between parallelizations, to give run-time control over the performance-cost trade-off and the allocation/de-allocation of hardware resources. The new system has the capability to encode each video frame with a different parallelization. The obtained results offer a number of operating points on the Pareto curve in between the previous ones at the sequence encoding level. The run-time manager can improve application performance by up to 50% or save memory bandwidth by up to 15%, according to the quality request.
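A toy sketch of the run-time manager's Pareto point selection follows; the operating points and the quality metric are invented, not taken from the thesis.

```python
# Pick the cheapest design-time operating point that still meets the
# run-time quality request; fall back to the fastest point otherwise.
POINTS = [(18, 70), (24, 85), (30, 100)]   # (frames/s, memory bandwidth)

def select_point(required_fps):
    feasible = [p for p in POINTS if p[0] >= required_fps]
    return min(feasible, key=lambda p: p[1]) if feasible else POINTS[-1]

print(select_point(22))   # -> (24, 85): meets the request at least cost
```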
20

Oh, Han, and Yookyung Kim. "Low-Complexity Perceptual JPEG2000 Encoder for Aerial Images." International Foundation for Telemetering, 2011. http://hdl.handle.net/10150/595684.

Full text
Abstract:
ITC/USA 2011 Conference Proceedings / The Forty-Seventh Annual International Telemetering Conference and Technical Exhibition / October 24-27, 2011 / Bally's Las Vegas, Las Vegas, Nevada
A highly compressed image inevitably has visible compression artifacts. To minimize these artifacts, many compression algorithms exploit the varying sensitivity of the human visual system (HVS) to different frequencies. However, this sensitivity has typically been measured at the near-threshold level where distortion is just noticeable. Thus, it is unclear that the same sensitivity applies at the supra-threshold level where distortion is highly visible. In this paper, we measure the sensitivity of the HVS for several supra-threshold distortion levels based on our JPEG2000 distortion model. Then, a low-complexity JPEG2000 encoder using the measured sensitivity is described. For aerial images, the proposed encoder significantly reduces encoding time while maintaining superior visual quality compared with a conventional JPEG2000 encoder.
21

Kim, Jung Sup, and Myung Jin Jang. "Implementation of A 30-Channel PCM Telemetry Encoder." International Foundation for Telemetering, 2004. http://hdl.handle.net/10150/604960.

Full text
Abstract:
International Telemetering Conference Proceedings / October 18-21, 2004 / Town & Country Resort, San Diego, California
The function of a PCM telemetry encoder, installed in moving vehicles such as automobiles, aircraft, missiles, and artillery projectiles, is to transform many physical variables, such as velocity, shock, temperature, vibration and pressure, into digital data. Also, the encoder is required to make a data frame composed of digital input signals and frame synchronous data. The framed data is supplied to the input of a transmitter. There are three critical considerations in developing a PCM telemetry encoder to be installed in an artillery projectile. The first is the performance consideration, such as sampling rate, data receiving rate and data transmission rate. The second is the size consideration due to the severely limited installation space in an artillery projectile and the last is the power consumption consideration due to limitations of the munition’s power supply. To meet these three considerations, the best alternative is a one-chip solution. Using a commercially available TMS320F2812 DSP chip, we have implemented a 30-channel PCM telemetry encoder to process randomized data frames, composed of 16-channel analog data, 14-channel digital data and 2 frame synchronization data per data frame, at 10Mbps transmission baud rate. This paper describes the structure of the 30-channel PCM telemetry encoder and its performance.
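A sketch of the minor-frame layout described above (2 sync words, 16 analog words, 14 digital words) is shown below; the word width and sync pattern are placeholders, not the paper's values.

```python
# Pack one 32-word minor frame: sync + 16 analog + 14 digital words.
import struct

SYNC = (0xEB90, 0x1234)                    # placeholder frame-sync words

def build_frame(analog16, digital14):
    assert len(analog16) == 16 and len(digital14) == 14
    words = list(SYNC) + list(analog16) + list(digital14)  # 32 words total
    return struct.pack(">32H", *words)     # big-endian 16-bit words

frame = build_frame(range(16), range(14))
print(len(frame), "bytes per minor frame")  # 64
```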
22

Weitzman, Jonathan M. "SELECTABLE PERMUTATION ENCODER/DECODER FOR A QPSK MODEM." International Foundation for Telemetering, 2003. http://hdl.handle.net/10150/605817.

Full text
Abstract:
International Telemetering Conference Proceedings / October 20-23, 2003 / Riviera Hotel and Convention Center, Las Vegas, Nevada
An artifact of QPSK modems is ambiguity of the recovered data. There are four variations of the output data for a given input data stream. All are equally probable. To resolve this ambiguity, the QPSK data streams can be differentially encoded before modulation and differentially decoded after demodulation. The encoder maps each input data pair to a phase angle change of the QPSK carrier. In the demodulator, the inverse is performed - each phase change of the input QPSK carrier is mapped to an output data pair. This paper discusses a very simple and unique differential encoder/decoder that handles all possible data pair/phase change permutations.
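The principle can be sketched as follows, assuming one of the 90-degree-spaced dibit-to-phase-change mappings (the paper's permutation table is selectable, so this particular mapping is illustrative); note how a constant receiver phase offset corrupts at most the first decoded pair.

```python
# Differential encoding over QPSK phase changes: information rides on the
# change of carrier phase, so a constant phase ambiguity cancels out.
PHASE = {(0, 0): 0, (0, 1): 90, (1, 1): 180, (1, 0): 270}  # dibit -> delta
INV = {v: k for k, v in PHASE.items()}

def diff_encode(dibits, start=0):
    phases, cur = [], start
    for d in dibits:
        cur = (cur + PHASE[d]) % 360       # transmit the phase *change*
        phases.append(cur)
    return phases

def diff_decode(phases, start=0):
    out, prev = [], start
    for p in phases:
        out.append(INV[(p - prev) % 360])  # recover the change, not the phase
        prev = p
    return out

data = [(0, 1), (1, 1), (1, 0), (0, 0)]
rx = [(p + 90) % 360 for p in diff_encode(data)]  # carrier locked 90 deg off
assert diff_decode(rx)[1:] == data[1:]            # only the first pair is lost
```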
23

Rivera, Alan. "Telemetry Data Encoder with an Embedded GPS Receiver." International Foundation for Telemetering, 2003. http://hdl.handle.net/10150/606736.

Full text
Abstract:
International Telemetering Conference Proceedings / October 20-23, 2003 / Riviera Hotel and Convention Center, Las Vegas, Nevada
This paper outlines the GPS data acquisition of two PCM encoders. The design of the first PCM encoder uses an embedded GPS receiver module, the Thales G12-HDMA receiver. The G12 receiver has been integrated into the electronics of the PCM encoder to provide a seamless tool for the telemetry engineer to acquire GPS position and time data along with the sensor data acquired from the PCM encoder. The second telemetry encoder discussed in this paper adds the GPS Interface Module for the Time Space Position Information (TSPI) unit currently under development at Herley Industries. The TSPI unit will also be integrated with the PCM encoder tools to create a seamless user interface. The TSPI unit is available in both "Low Dynamic" (JTU-I) and "High Dynamic" (JTU-II) versions.
24

Mejdi, Sami. "Encoder-Decoder Networks for Cloud Resource Consumption Forecasting." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-291546.

Full text
Abstract:
Excessive resource allocation in telecommunications networks can be prevented by forecasting the resource demand when dimensioning the networks and then allocating the necessary resources accordingly, which is an ongoing effort to achieve a more sustainable development. In this work, traffic data from cloud environments that host deployed virtualized network functions (VNFs) of an IP Multimedia Subsystem (IMS) has been collected along with the computational resource consumption of the VNFs. A supervised learning approach was adopted to address the forecasting problem by considering encoder-decoder networks. These networks were applied to forecast future resource consumption of the VNFs by regarding the problem as a time series forecasting problem, and recasting it as a sequence-to-sequence (seq2seq) problem. Different encoder-decoder network architectures were then utilized to forecast the resource consumption. The encoder-decoder networks were compared against a widely deployed classical time series forecasting model that served as a baseline model. The results show that while the considered encoder-decoder models failed to outperform the baseline model in overall Root Mean Squared Error (RMSE) and Mean Absolute Error (MAE), the forecasting capabilities were more resilient to degradation over time. This suggests that the encoder-decoder networks are more appropriate for long-term forecasting, which is in agreement with related literature. Furthermore, the encoder-decoder models achieved competitive performance when compared to the baseline, despite being treated with limited hyperparameter-tuning and the absence of more sophisticated functionality such as attention. This work has shown that there is indeed potential for deep learning applications in forecasting of cloud resource consumption.
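A minimal sketch of the seq2seq recasting described above follows, assuming PyTorch and GRU layers; the window, horizon, and layer sizes are illustrative, not the thesis configuration.

```python
# Encoder summarizes a history window; decoder rolls the forecast forward
# autoregressively, one step at a time.
import torch
import torch.nn as nn

class Seq2SeqForecaster(nn.Module):
    def __init__(self, n_features=3, hidden=64, horizon=12):
        super().__init__()
        self.horizon = horizon
        self.encoder = nn.GRU(n_features, hidden, batch_first=True)
        self.decoder = nn.GRU(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_features)

    def forward(self, history):                  # (batch, window, features)
        _, state = self.encoder(history)         # summary of the past
        step, outputs = history[:, -1:], []
        for _ in range(self.horizon):            # autoregressive roll-out
            out, state = self.decoder(step, state)
            step = self.head(out)                # next-step forecast
            outputs.append(step)
        return torch.cat(outputs, dim=1)         # (batch, horizon, features)

forecast = Seq2SeqForecaster()(torch.randn(4, 48, 3))  # 48 past -> 12 future
```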
25

Mejdi, Sami. "Encoder-Decoder Networks for Cloud Resource Consumption Forecasting." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-294066.

Full text
Abstract:
Excessive resource allocation in telecommunications networks can be prevented by forecasting the resource demand when dimensioning the networks and then allocating the necessary resources accordingly, which is an ongoing effort to achieve a more sustainable development. In this work, traffic data from cloud environments that host deployed virtualized network functions (VNFs) of an IP Multimedia Subsystem (IMS) has been collected along with the computational resource consumption of the VNFs. A supervised learning approach was adopted to address the forecasting problem by considering encoder-decoder networks. These networks were applied to forecast future resource consumption of the VNFs by regarding the problem as a time series forecasting problem, and recasting it as a sequence-to-sequence (seq2seq) problem. Different encoder-decoder network architectures were then utilized to forecast the resource consumption. The encoder-decoder networks were compared against a widely deployed classical time series forecasting model that served as a baseline model. The results show that while the considered encoder-decoder models failed to outperform the baseline model in overall Root Mean Squared Error (RMSE) and Mean Absolute Error (MAE), the forecasting capabilities were more resilient to degradation over time. This suggests that the encoder-decoder networks are more appropriate for long-term forecasting, which is in agreement with related literature. Furthermore, the encoder-decoder models achieved competitive performance when compared to the baseline, despite being treated with limited hyperparameter-tuning and the absence of more sophisticated functionality such as attention. This work has shown that there is indeed potential for deep learning applications in forecasting of cloud resource consumption.
26

Luthra, Nikhil. "Finite State Machine Implementation of a Turbo Encoder." Ohio University / OhioLINK, 2005. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1134416479.

Full text
27

Correia, Tiago Miguel Pina. "FPGA implementation of Alamouti encoder/decoder for LTE." Master's thesis, Universidade de Aveiro, 2013. http://hdl.handle.net/10773/12679.

Full text
Abstract:
Master's in Electronics and Telecommunications Engineering
Motivated by faster transmissions and a more reliable wireless channel, future 4G systems should provide faster data processing at low complexity, high data rates, and robustness in performance, while also reducing latency and operating costs. LTE presents in its physical layer technologies such as OFDM and MIMO that promise to achieve high data rates and increase spectral efficiency. Specifically, the physical layer of LTE employs OFDMA on the downlink and SC-FDMA on the uplink. MIMO technology also significantly improves the performance of OFDM systems, with the advantages of multiplexing and spatial diversity decreasing the effect of multipath fading in the channel. In this thesis we implemented an encoder and a decoder based on the Alamouti algorithm in a MISO system, namely to be added to an OFDM transceiver that follows closely the LTE physical layer specifications. Alamouti coding/decoding is performed in frequency and space, and the blocks were designed and simulated in Matlab using the Simulink environment through the Xilinx blocks in the System Generator for DSP. One can conclude that the blocks based on the Alamouti algorithm were successfully implemented in hardware.
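For reference, the Alamouti mapping can be sketched as follows, assuming NumPy; the thesis applies it across space and frequency (adjacent subcarriers), here shown over symbol pairs in time.

```python
# Alamouti encoding over symbol pairs: antenna 1 sends (s1, -s2*) while
# antenna 2 sends (s2, s1*), enabling simple linear combining at the receiver.
import numpy as np

def alamouti_encode(symbols):                      # even number of symbols
    s1, s2 = symbols[0::2], symbols[1::2]          # symbol pairs
    ant1 = np.empty(len(symbols), dtype=complex)
    ant2 = np.empty(len(symbols), dtype=complex)
    ant1[0::2], ant1[1::2] = s1, -np.conj(s2)      # slot 1: s1, slot 2: -s2*
    ant2[0::2], ant2[1::2] = s2, np.conj(s1)       # slot 1: s2, slot 2: s1*
    return ant1, ant2

qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
a1, a2 = alamouti_encode(qpsk)
print(a1, a2, sep="\n")
```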
28

Acevedo-Hueso, Luis-Francisco. "Optical simulation and testing of an optical encoder." Thesis, Heriot-Watt University, 2015. http://hdl.handle.net/10399/2977.

Full text
29

Rele, Bhushan. "Simulation of VSELP speech encoder for mobile channels." Thesis, This resource online, 1993. http://scholar.lib.vt.edu/theses/available/etd-12052009-020230/.

Full text
30

Grozman, Vladimir. "Evaluating the CU-tree algorithm in an HEVC encoder." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-176054.

Full text
Abstract:
CU-tree (Coding Unit tree) is an algorithm for adaptive QP (quantization parameter). It runs in the lookahead and decreases the QP of blocks that are heavily referenced by future blocks, taking into account the quality of the prediction and the complexity of the future blocks, approximated by the inter and intra residual. In this study, CU-tree is implemented in c65, an experimental HEVC encoder used internally by Ericsson. The effects of CU-tree are evaluated on the video clips in the HEVC Common test conditions and the performance is compared across c65, x265 and x264. The results are similar across all encoders, with average PSNR (peak signal-to-noise ratio) improvements of 3-10% depending on the fixed QP offsets that are replaced. The runtime is not impaired and improvements to visual quality are expected to be even greater. The algorithm works better at slow speed modes, low bitrates and with source material that is well suited for inter prediction.
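A toy sketch of the CU-tree idea follows: walk the lookahead backwards, accumulate how much each block is referenced by future blocks (weighted by how well inter prediction works), and lower QP where the accumulated cost is high. The cost model and constants are illustrative, not c65's or x265's.

```python
# Back-propagate "propagate cost" through the lookahead and convert it
# into negative QP offsets for heavily referenced blocks.
import math

def cutree_qp_offsets(intra_cost, inter_cost, refs, strength=2.0):
    n = len(intra_cost)
    propagate = [0.0] * n
    for b in reversed(range(n)):                        # future -> past
        total = intra_cost[b] + propagate[b]
        fraction = 1.0 - inter_cost[b] / intra_cost[b]  # predictability of b
        for r in refs[b]:                               # blocks b predicts from
            propagate[r] += total * fraction
    return [-strength * math.log2(1.0 + propagate[b] / intra_cost[b])
            for b in range(n)]                          # negative = lower QP

# block 0 is referenced by blocks 1 and 2, so it gets the largest QP drop
print(cutree_qp_offsets([100, 100, 100], [20, 30, 90], [[], [0], [0]]))
```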
31

von Wowern, Per. "Design of an encoder converter for automated non-destructive testing." Thesis, KTH, Maskinkonstruktion (Inst.), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-226316.

Full text
Abstract:
WesDyne Sweden AB is a Non-Destructive Testing (NDT) company specialized in the examination methods ultrasonic testing, eddy current testing, and visual inspection. To verify an examination procedure before the actual inspection at site, a test rig consisting of a three- or four-axis motion system is used. WesDyne saw a need to be able to modify the position signals from the position encoders in order to increase the flexibility and in some cases the accuracy when scanning objects with ultrasonic or eddy current probes. This thesis thus regards the design and evaluation of an encoder converter. Its main task is to transform from Cartesian to polar coordinates and to calculate the shortest distance between two points in space. These calculations will, however, introduce a delay, so it is also of interest to look into how delays affect the NDT measurements. The selection of a microcontroller for the encoder converter was an important part of the thesis project. Initial tests were done with the Arduino Mega, and it was concluded that more processing power was needed than the Arduino Mega could provide. The choice finally fell on the xCORE-200 eXplorerKIT from Xmos. The main tasks of the firmware developed for the xCORE-200 eXplorerKIT were to sample position signals, modify the signals, and then output the modified signals. A printed circuit board was designed to act as an adapter card between the motor controller, the measurement instrument, and the xCORE-200 eXplorerKIT. The encoder converter consisted of these two cards encased with supplementary components. A Windows graphical user interface application was developed to enable changing the settings of the encoder converter and viewing positions. Three tests with eddy current testing were done on a test block with emulated cracks in order to evaluate the performance of the encoder converter. The delay test showed that the encoder converter had a maximal delay of 303 μs, which corresponded to an average position error of up to 0.12 mm. Two more tests with the test block were performed using the modified signals, polar coordinates and distance, from the encoder converter. The maximum average position error in these two tests was 0.19 mm. The required accuracy depends on the circumstances, but for most applications an error lower than 0.12 mm is acceptable. From the test results it can be concluded that conversion of position signals can improve accuracy in some cases of eddy current testing.
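The converter's two core computations named in the abstract can be sketched in a few lines using only the standard library; the function names are ours, not the thesis firmware's.

```python
# Cartesian-to-polar conversion and shortest (straight-line) distance,
# the two transformations the encoder converter applies to position signals.
import math

def to_polar(x, y):
    return math.hypot(x, y), math.atan2(y, x)  # (radius, angle in radians)

def shortest_distance(p, q):
    return math.dist(p, q)                     # Euclidean distance

print(to_polar(3.0, 4.0))                       # (5.0, 0.927...)
print(shortest_distance((0, 0, 0), (1, 2, 2)))  # 3.0
```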
32

Mallikarachchi, Thanuja. "HEVC encoder optimization and decoding complexity-aware video encoding." Thesis, University of Surrey, 2017. http://epubs.surrey.ac.uk/841841/.

Full text
Abstract:
The increased demand for high quality video evidently elevates the bandwidth requirements of the communication channels being used, which in return demands for more efficient video coding algorithms within the media distribution tool chain. As such, High Efficiency Video Coding (HEVC) video coding standard is a potential solution that demonstrates a significant coding efficiency improvement over its predecessors. HEVC constitutes an assortment of novel coding tools and features that contribute towards its superior coding performance, yet at the same time demand more computational, processing and energy resources; a crucial bottleneck, especially in the case of resource constrained Consumer Electronic (CE) devices. In this context, the first contribution in this thesis presents a novel content adaptive Coding Unit (CU) size prediction algorithm for HEVC-based low-delay video encoding. In this case, two independent content adaptive CU size selection models are introduced while adopting a moving window-based feature selection process to ensure that the framework remains robust and dynamically adapts to any varying video content. The experimental results demonstrate a consistent average encoding time reduction ranging from 55% - 58% and 57% - 61% with average Bjøntegaard Delta Bit Rate (BDBR) increases of 1.93% - 2.26% and 2.14% - 2.33% compared to the HEVC 16.0 reference software for the low delay P and low delay B configurations, respectively, across a wide range of content types and bit rates. The video decoding complexity and the associated energy consumption are tightly coupled with the complexity of the codec as well as the content being decoded. Hence, video content adaptation is extensively considered as an application layer solution to reduce the decoding complexity and thereby the associated energy consumption. In this context, the second contribution in this thesis introduces a decoding complexity-aware video encoding algorithm for HEVC using a novel decoding complexity-rate-distortion model. The proposed algorithm demonstrates on average a 29.43% and 13.22% decoding complexity reductions for the same quality with only a 6.47% BDBR increase when using the HM 16.0 and openHEVC decoders, respectively. Moreover, decoder energy consumption analysis reveals an overall energy reduction of up to 20% for the same video quality. Adaptive video streaming is considered as a potential solution in the state-of-the-art to cope with the uncertain fluctuations in the network bandwidth. Yet, the simultaneous consideration of both bit rate and decoding complexity for content adaptation with minimal quality impact is extremely challenging due to the dynamics of the video content. In response, the final contribution in this thesis introduces a content adaptive decoding complexity and rate controlled encoding framework for HEVC. The experimental results reveal that the proposed algorithm achieves a stable rate and decoding complexity controlling performance with an average error of only 0.4% and 1.78%, respectively. Moreover, the proposed algorithm is capable of generating HEVC bit streams that exhibit up to 20.03 %/dB decoding complexity reduction which result in up to 7.02 %/dB decoder energy reduction per 1dB Peak Signal-to-Noise Ratio (PSNR) quality loss.
33

Huang, Hung-lin, and 黃鴻麟. "Development of Optical Encoder with Large Encoder Gap." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/36583320640272766333.

Full text
Abstract:
Master's thesis
Feng Chia University
Graduate Institute of Electro-Optics
ROC academic year 97 (2008/09)
This study proposes a new optical encoder with a large gap between the index grating and main grating. Compared with conventional optical encoders, the large-gap encoder is easily assembled and provides displacement signals with good quality. Thus the proposed encoders are well suited for the accurate positioning of high-speed linear stages. Two different index gratings, one- (1D) and two-dimensional (2D) phase gratings, are designed according to the theory of image formation using Talbot effect. The profile and self-imaging quality of the phase gratings are investigated by a scanning white-light microscope and transmission microscope, respectively. Experimental results show that an optical encoder with the 1D or 2D phase grating can work successfully at an encoder gap of one- or three-quarter Talbot distance. However, the optical encoder with the 2D phase grating has superior capabilities to reject the common-mode noise and to eliminate the DC offset in the displacement signals. In addition, a high-precision gap measurement technology based on low-coherence interferometry is also presented and verified experimentally in this study.
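The encoder gaps quoted above follow from the classic self-imaging relation z_T = 2p²/λ; the pitch and wavelength in the sketch below are assumed values for illustration, not the thesis parameters.

```python
# Talbot (self-imaging) distance and the quarter/three-quarter gaps.
pitch = 20e-6          # grating pitch: 20 um (assumed)
wavelength = 650e-9    # red laser diode (assumed)
z_T = 2 * pitch**2 / wavelength
print(f"Talbot distance {z_T * 1e3:.2f} mm; "
      f"one-quarter {z_T / 4 * 1e3:.2f} mm, "
      f"three-quarter {3 * z_T / 4 * 1e3:.2f} mm")
```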
34

Yang, Zi-Yi, and 楊子毅. "Common-path Laser Encoder." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/43346913658281488435.

Full text
Abstract:
Master's thesis
Tamkang University
Department of Mechanical and Electro-Mechanical Engineering
ROC academic year 98 (2009/10)
Diffractive laser encoders overcome the diffraction limit of optical waves. They can provide nano-scale displacement resolution and have great potential for nano-metrology applications. The optical configurations of laser encoders in current use are of Michelson's type, with different paths for the measurement beam and reference beam, so environmental disturbance can directly enter the measured signals and cannot be essentially suppressed; the accuracy of such laser encoders therefore becomes dramatically worse. This thesis presents the construction of a new common-path diffractive laser encoder, which can effectively suppress the effect of environmental disturbance and enhance the stability of the measured signals. The work describes the measurement principle of the common-path diffractive laser encoder, whose basic theory rests on the Doppler effect and grating interferometry, and designs a facility for displacement measurement. According to this principle, a displacement of the grating introduces a phase variation in each diffraction order, and making the different diffraction orders overlap produces interference. In addition, LabVIEW is applied to process the signals and transform the phase difference into displacement. The measurement errors, including the systematic error and the random error, are also discussed; these errors can be further reduced in future research. Experimental analyses demonstrate a sensitivity of 0.225 /nm and a theoretical displacement resolution of 0.0244 nm. The encoder has promising potential for nanotechnology applications.
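The grating-interferometry relation underlying this principle is commonly written as follows (a standard result, not quoted from the thesis), with x the grating displacement, p the grating pitch, and m the diffraction order; interfering the ±1 orders doubles the sensitivity, giving one interference fringe per half pitch of motion:

```latex
\phi_m = \frac{2\pi m x}{p}
\quad\Longrightarrow\quad
\Delta\phi = \phi_{+1} - \phi_{-1} = \frac{4\pi x}{p}
```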
35

Lin, Chien-sheng, and 林建勝. "A Perceptually Optimized JPEG2000 Encoder." Thesis, 2004. http://ndltd.ncl.edu.tw/handle/56289061073509369148.

Full text
Abstract:
Master's thesis
Tatung University
Department of Electrical Engineering
ROC academic year 93 (2004/05)
Driven by a growing demand for transmission of visual data over media with limited capacity, increasing efforts have been made to strengthen compression techniques and maintain good visual quality of the compressed image by human visual model. JPEG2000 is the new ISO/ITU standard for still image compression. The multi-resolution wavelet decomposition and the two-tier coding structure of JPEG2000 make it suitable for incorporating the human visual model into the coding algorithm, but the JPEG2000 coder is intrinsically a rate-based distortion minimization algorithm, by which different images coded at the same bit rate always result in different visual qualities. The research will focus on enhancing the performance of the JPEG2000 coder by effectively excluding the perceptually redundant signals from the coding process such that color images encoded at low bit rates have consistent visual quality. By considering the varying sensitivities of the human visual perception to luminance and chrominance signals of different spatial frequencies, the full-band JND profile for each color channel will be decomposed into component JND profiles for different wavelet subbands. With error visibility thresholds provided by the JND profile of each subband, the perceptually insignificant wavelet coefficients in three color channels will be first removed. Without altering the format of the compressed bit stream, the encoder is modified in a way that the bit rate is inversely correlated with the perceptible distortion rather than the distortion of mean square errors. As compared to the JPEG2000 standard, the proposed algorithm can remove more perceptual redundancy from the original image, and the visual quality of the reconstructed image is much more acceptable at low rates.
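A sketch of the perceptual pre-filter described above follows: wavelet coefficients whose magnitude falls below the subband's error-visibility (JND) threshold are zeroed before coding. The thresholds and band keys are placeholders, not measured JND profiles.

```python
# Zero perceptually insignificant wavelet coefficients per subband.
import numpy as np

def jnd_filter(subbands, jnd):
    """subbands, jnd: dicts keyed by (level, orientation)."""
    return {key: np.where(np.abs(c) >= jnd[key], c, 0.0)  # keep visible detail
            for key, c in subbands.items()}

rng = np.random.default_rng(1)
bands = {(1, "HH"): rng.standard_normal((8, 8)) * 4.0}
filtered = jnd_filter(bands, {(1, "HH"): 3.0})            # placeholder JND
print((filtered[(1, "HH")] == 0).mean())  # fraction judged imperceptible
```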
APA, Harvard, Vancouver, ISO, and other styles
36

Lien, Guan-Kai, and 連冠凱. "Miniaturized Common-path Laser Encoder." Thesis, 2015. http://ndltd.ncl.edu.tw/handle/8f9pub.

Full text
Abstract:
Master's thesis
Tamkang University
Master's Program, Department of Mechanical and Electro-Mechanical Engineering
103
Commonly used laser encoders are of non-common-path configuration. The non-common-path arrangement of the measurement and reference beams is susceptible to environmental disturbances and thus produces additional error, usually more than tens of nanometers, which under normal circumstances greatly affects the resolution and accuracy of precision measurement, positioning, and displacement sensing. This study proposes a miniaturized common-path laser encoder (CPLE). It has fewer components, is easy to assemble, possesses high immunity to environmental disturbance, and is capable of high-resolution, high-accuracy measurement. The CPLE shifts phase through a two-slit phase-shifting technique, by which the interference signals can be adjusted into a pair of signals with a 90° phase difference. This technique reduces the number of optical elements, so the effect of optical element errors can be greatly reduced. For the purpose of the study, a miniaturized CPLE was designed and built, and its long-displacement performance was evaluated against an HP5529A interferometer for offset, as shown in the analysis and experimental results. The time-dependent drift of the CPLE, measured over a period of one hour, was found to be 17.7 ± 4.7 nm, with a resolution of 1.5 ± 0.5 nm. The measurement error for displacement sensing is therefore at the nanometer level. The technique finds varied applications in ultra-precision mechanics and has enormous potential for future development.
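As a sketch of how a pair of 90°-shifted interference signals is turned into displacement (the quantity the two-slit phase-shifting technique provides), the following assumes the fringe period is known; the names and the 800 nm period are hypothetical, and the actual scale factor depends on the optical configuration:

    import numpy as np

    def quadrature_to_displacement(i_sig, q_sig, fringe_period_nm):
        # Instantaneous optical phase from the quadrature pair, unwrapped
        # over time, then scaled: one fringe period = 2*pi of phase.
        phase = np.unwrap(np.arctan2(q_sig, i_sig))
        return (phase - phase[0]) * fringe_period_nm / (2.0 * np.pi)

    # Example: synthetic quadrature signals for a 300 nm displacement ramp
    x_true = np.linspace(0.0, 300.0, 1000)
    period = 800.0  # hypothetical fringe period in nm
    i_sig = np.cos(2 * np.pi * x_true / period)
    q_sig = np.sin(2 * np.pi * x_true / period)
    x_est = quadrature_to_displacement(i_sig, q_sig, period)  # ~ x_true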
APA, Harvard, Vancouver, ISO, and other styles
37

Wang, Tzu-Ya, and 王姿雅. "Prototype Verification of JPEG2000 Encoder." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/30897921800195638512.

Full text
Abstract:
Master's thesis
National Chung Cheng University
Graduate Institute of Electrical Engineering
97
In this thesis, we focus on the Tier-2 stage of the JPEG2000 coding procedure, realize its hardware architecture, and optimize memory accesses to reduce the coding complexity. We also set up an environment that simulates AMBA bus behavior to carry out functional verification of the JPEG2000 encoder prototype.
APA, Harvard, Vancouver, ISO, and other styles
38

Lin, Chien-Sheng, and 林建勝. "A Perceptually Optimized JPEG2000 Encoder." Thesis, 2004. http://ndltd.ncl.edu.tw/handle/55151734289221938158.

Full text
Abstract:
Master's thesis
Tatung University
Graduate Institute of Electrical Engineering
92
Driven by a growing demand for the transmission of visual data over media with limited capacity, increasing efforts have been made to strengthen compression techniques while maintaining good visual quality of the compressed image by means of a human visual model. JPEG2000 is the new ISO/ITU standard for still image compression. The multi-resolution wavelet decomposition and the two-tier coding structure of JPEG2000 make it suitable for incorporating a human visual model into the coding algorithm, but the JPEG2000 coder is intrinsically a rate-based distortion minimization algorithm, by which different images coded at the same bit rate always result in different visual qualities. This research focuses on enhancing the performance of the JPEG2000 coder by effectively excluding perceptually redundant signals from the coding process, so that color images encoded at low bit rates have consistent visual quality. By considering the varying sensitivities of human visual perception to luminance and chrominance signals of different spatial frequencies, the full-band JND profile for each color channel is decomposed into component JND profiles for the different wavelet subbands. With the error visibility thresholds provided by the JND profile of each subband, the perceptually insignificant wavelet coefficients in the three color channels are first removed. Without altering the format of the compressed bit stream, the encoder is modified in such a way that the bit rate is inversely correlated with the perceptible distortion rather than with the mean-square-error distortion. Compared with the JPEG2000 standard, the proposed algorithm removes more perceptual redundancy from the original image, and the visual quality of the reconstructed image is much more acceptable at low rates.
APA, Harvard, Vancouver, ISO, and other styles
39

FU, SHENG-ZONG, and 傅聖中. "Hardware Implementation of JPEG2000 Encoder." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/17910103004386463997.

Full text
Abstract:
Master's thesis
National United University
Master's Program, Department of Electronic Engineering
105
In 2000, the Joint Photographic Experts Group committee published the image compression standard JPEG2000, which is DWT-based and supports both lossy and lossless compression. Because JPEG2000 packetizes the compressed image data, it supports flexible transmission modes such as progressive and scalable transmission. The core of JPEG2000 consists of three schemes: the DWT, Embedded Block Coding with Optimal Truncation (EBCOT), and the MQ-coder. Most previous works on the JPEG2000 architecture focused on architectural changes and performance improvements of an individual scheme; a portion of studies investigated the relationship between EBCOT and the MQ-coder, but no study has investigated the overall core architecture. This work therefore investigates the overall JPEG2000 core architecture, combining architectural changes with performance improvements of the individual schemes. In hardware design, pipelining and parallel processing reduce the amount of memory and increase execution speed, so we integrated both techniques into our 2-D DWT design, which can directly process an entire image tile of size N*N. Comparison with other works shows that our design greatly reduces the memory requirement and logic component count. For EBCOT, we extended the pass-parallel method: because our EBCOT architecture can process an entire code block at once, it reduces the number of registers and the computing time. In previous works, MQ coding cost a lot of running time because a separate scheme performed each MQ coding pass; our design is fully pipelined and parallel, which ensures a higher execution speed. The proposed JPEG2000 encoder integrates our 2-D DWT architecture, the novel EBCOT coder, and the MQ-coder to process whole code blocks, and therefore achieves better performance than other works.
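For reference, the reversible 5/3 lifting DWT at the heart of lossless JPEG2000 reduces to two integer lifting steps; a minimal 1-D sketch (even-length signals only, simplified from the standard's general boundary handling) is:

    def dwt53_1d(x):
        # One level of the JPEG2000 reversible 5/3 lifting DWT.
        # Predict: d[k] = x[2k+1] - floor((x[2k] + x[2k+2]) / 2)
        # Update:  s[k] = x[2k]   + floor((d[k-1] + d[k] + 2) / 4)
        # with symmetric extension at the borders.
        x = [int(v) for v in x]
        n = len(x)
        assert n % 2 == 0, "sketch assumes an even-length signal"

        def X(i):  # symmetric border extension
            if i < 0:
                i = -i
            if i >= n:
                i = 2 * (n - 1) - i
            return x[i]

        half = n // 2
        d = [X(2*k + 1) - (X(2*k) + X(2*k + 2)) // 2 for k in range(half)]
        s = [X(2*k) + (d[k - 1 if k > 0 else 0] + d[k] + 2) // 4
             for k in range(half)]
        return s, d  # low-pass (approximation) and high-pass (detail) halves

    # A 2-D tile transform applies dwt53_1d along every row, then every column.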
APA, Harvard, Vancouver, ISO, and other styles
40

Lin, Hsin-Yi, and 林昕儀. "Design and Implementation of JPEG2000 Encoder." Thesis, 2004. http://ndltd.ncl.edu.tw/handle/23353494450614424282.

Full text
Abstract:
Master's thesis
National Chiao Tung University
Department of Electronics Engineering
92
The ability to scale in resolution as well as image quality is the main attraction of JPEG2000. DWT (Discrete Wavelet Transform) and EBCOT (Embedded Block Coding with Optimal Truncation), the two major technologies that enable it, are however also the parts that demand huge storage and computation. To reduce the memory requirement, we combine five different DWT computing orders with level-by-level or mixed-level processing, and find that the level-by-level optimal-z scan can reduce both the temporal buffer in the DWT and the buffer between the DWT and EBCOT. We also adopt a new stripe-based computation order for EBCOT, described below, to further reduce the buffer between the DWT and EBCOT by 93.8%. The total buffer for the JPEG2000 encoder can thereby be reduced to 66% of the original design. However, the stripe-based computing order increases computation time by 14%, so we propose a zero-stripe skipping technique that skips all-zero bitplanes; with this approach we eliminate the overhead and reduce computation time by a further 0.22%. To reduce computational complexity, we share the multipliers and adders of the two directional DWT kernels, saving 1/3 of the area of the DWT module. For EBCOT, pass-level parallelism is adopted to speed up processing by a factor of three over the traditional approach and to cut memory accesses by 2/3; the gate count of the proposed context formation is 6.8% of that of other designs. Finally, we propose a plan that uses one DWT module with three embedded block coders to integrate our JPEG2000 encoding system. It achieves a throughput of 55.6 Msamples/sec at a 100 MHz clock rate with lower cost and less memory requirement.
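The stripe-based order mentioned above follows EBCOT's scan pattern inside a code block: horizontal stripes of (nominally) four rows, visited column by column. A small sketch of that ordering (the generator name is hypothetical):

    def stripe_scan_order(height, width, stripe_height=4):
        # Yield (row, col) pairs in EBCOT's stripe-oriented scan order:
        # the block is cut into stripes of `stripe_height` rows; within a
        # stripe, samples are visited column by column, top to bottom.
        for top in range(0, height, stripe_height):
            for col in range(width):
                for row in range(top, min(top + stripe_height, height)):
                    yield row, col

    # First stripe of an 8x8 block: (0,0) (1,0) (2,0) (3,0) (0,1) (1,1) ...
    order = list(stripe_scan_order(8, 8))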
APA, Harvard, Vancouver, ISO, and other styles
41

Yen, Wen-Chi, and 顏文祺. "A Hardware/Software-Concurrent JPEG2000 Encoder." Thesis, 2004. http://ndltd.ncl.edu.tw/handle/24263295495852495855.

Full text
Abstract:
Master's thesis
National Tsing Hua University
Department of Computer Science
92
We implement a JPEG2000 encoder based on an internally developed hardware/software co-design methodology, emphasizing the concurrent execution of hardware accelerator IPs and software running on the CPU. On a programmable SoC platform, hardware acceleration of the DWT and EBCOT Tier-1, executed sequentially, gives a 70% reduction in total execution time; the proposed concurrent scheme achieves an additional 14% saving. We describe our experience in bringing up such a system.
APA, Harvard, Vancouver, ISO, and other styles
42

Tsai, Hsin Wei, and 蔡信威. "Research of Magnetic Encoder System Integration." Thesis, 2015. http://ndltd.ncl.edu.tw/handle/a3sak6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Wu, Yi-Hao, and 吳翊豪. "Non-focused Common-path Laser Encoder." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/msd42b.

Full text
Abstract:
Master's thesis
Tamkang University
Master's Program, Department of Mechanical and Electro-Mechanical Engineering
104
Commonly used common-path laser encoders (CPLE) have the advantages of high stability and high resolution, but nearly all CPLE designs focus the laser onto the grating, where the focused spot is only about 100 μm in diameter. Because of this focused-laser design, the optical scale is sensitive to grating manufacturing quality and grating contamination. This study proposes a non-focused common-path laser encoder (NFCPLE). Without the focused-laser design, the NFCPLE effectively overcomes these problems of the earlier CPLE, and its requirements on grating quality are also much lower than those of the CPLE. Moreover, an iC-Haus LSC photodetector reduces the number of optical elements, so the effect of optical element errors can be greatly reduced. In this study, the NFCPLE underwent performance tests and error analysis against an HP5529A interferometer for offset evaluation, as shown in the analysis and experimental results. Over a displacement of 10 mm, the average deviation of the NFCPLE from the HP5529A interferometer was 185.31 nm. The time-dependent drift of the NFCPLE, measured over a period of three hours, was found to be 3.8 ± 0.5 nm, with a resolution of 1.6 ± 0.5 nm. The NFCPLE therefore has enormous potential for future development.
APA, Harvard, Vancouver, ISO, and other styles
44

Chen, Yen-hsiang, and 陳言祥. "Encoder Implementation of SFT LDPC Codes." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/02203938905517886924.

Full text
Abstract:
Master's thesis
National Taiwan University of Science and Technology
Department of Electrical Engineering
101
Quasi-cyclic low-density parity-check (QC-LDPC) codes are a subclass of LDPC codes whose encoders are simpler to implement than those of other types of LDPC codes. The parity-check matrix of a QC-LDPC code is formed of circulant sub-matrices. By linear transformation, one can derive the generator matrix of a QC-LDPC code in systematic-circulant (SC) form from its parity-check matrix; based on the generator matrix in SC form, the encoder can be implemented with a simple shift-register circuit. In this thesis, the encoder circuit of an SFT code, which is a QC-LDPC code, is implemented in TSMC 0.18 µm technology at an operating frequency of 150 MHz. The total gate count is about 39.362k.
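A minimal sketch of shift-register-style QC-LDPC encoding from a generator in SC form G = [I | P], storing only the first row of each b x b circulant (the data layout is an assumption, and the toy circulant below is illustrative, not the SFT code):

    import numpy as np

    def qc_ldpc_encode(msg, first_rows):
        # msg:        message bits, length c*b
        # first_rows: shape (c, t, b); first_rows[i][j] is the first row of
        #             the circulant at block-row i, block-column j of P.
        # Row k of a circulant is its first row cyclically shifted by k,
        # which is why a shift-register circuit suffices in hardware.
        c, t, b = first_rows.shape
        parity = np.zeros(t * b, dtype=np.uint8)
        for i in range(c):
            for k in range(b):
                if msg[i * b + k]:
                    for j in range(t):
                        parity[j*b:(j+1)*b] ^= np.roll(first_rows[i, j], k)
        return np.concatenate([msg, parity])  # systematic codeword [msg | parity]

    # Toy example: a single 4x4 circulant block
    rows = np.array([[[1, 0, 1, 0]]], dtype=np.uint8)
    cw = qc_ldpc_encode(np.array([1, 0, 1, 1], dtype=np.uint8), rows)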
APA, Harvard, Vancouver, ISO, and other styles
45

Jan, Kai-Ruei, and 詹凱瑞. "DSP Implementation of H.264 Encoder." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/53733409884956732890.

Full text
Abstract:
Master's thesis
National Yunlin University of Science and Technology
Master's Program, Graduate Institute of Communication Engineering
99
The goal of this thesis is to port the H.264 reference C code "T264", developed by the Chinese Video Encoding Freeware Organization, to the Texas Instruments (TI) TMS320DM642 DSP development platform. Since the H.264 encoder has high computational complexity, the main challenge is to speed up the encoding process for real-time (30 frames per second, QCIF) video compression. To achieve this goal, we tune the option-level optimization parameters provided by TI's Code Composer Studio (CCS), rewrite some C functions in linear assembly, and exploit possible parallel processing. The experimental results show an average saving of about 71% in computational complexity.
APA, Harvard, Vancouver, ISO, and other styles
46

Pan, Shin-Fan, and 潘信汎. "MP3 Encoder Base on G4K VDSP." Thesis, 2005. http://ndltd.ncl.edu.tw/handle/67385252938940654411.

Full text
Abstract:
Master's thesis
National Kaohsiung First University of Science and Technology
Graduate Institute of Computer and Communication Engineering
93
Multimedia applications are highly popular and closely associated with our daily lives. They can be broadly divided into two categories, video and audio; at present, however, audio products are in much greater demand. It is not hard to see that almost everyone, teenager or middle-aged, holds an MP3 player in their palm, and this trend will continue. To serve this market, a product must be small in size, high in quality, and reasonable in price. Focusing on the commercial market and cost competitiveness, the goal of this thesis is to achieve MP3 encoding at a lower clock frequency. High-efficiency hardware does not necessarily have to operate at a high processing speed; rather, it needs a good architecture. One solution is the G4K VDSP, a DSP processor that supports both scalar and vector instructions (serial and parallel architectures). Taking advantage of this DSP leads to lower power, lower foundry cost, and therefore a lower commercial price.
APA, Harvard, Vancouver, ISO, and other styles
47

Chen, Lee-Ming, and 陳禮民. "Design of hybrid vector quantization encoder." Thesis, 1994. http://ndltd.ncl.edu.tw/handle/55622680859691434100.

Full text
Abstract:
Master's thesis
National Chiao Tung University
Institute of Electronics
82
Digital image coding using vector quantization (VQ) based techniques provides low bit rates and high-quality coded images at the expense of computational demands, and many approaches have been used to alleviate the encoding search process. A novel method combining DCT/VQ and BTC/VQ can achieve high-quality, low-bit-rate compression of images; since this method reduces the codebook size, it alleviates the encoding search load. We modified this method for VLSI implementation: it preserves the good image quality and compression ratio and allows an efficient architecture design. We designed an encoder for the modified method that uses no multiplier elements; the DCT part is approximated with adders and shifters and performs a partial discrete cosine transform (PDCT). With a 50 MHz clock, the circuit can encode 512x512 frames in real time at a rate of 30 frames/sec.
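For the BTC side of such a hybrid, the classic block truncation coder is compact enough to sketch in full: each block is thresholded at its mean, and the two reconstruction levels are chosen to preserve the block mean and variance (a generic Delp-Mitchell BTC, not necessarily the exact variant used in the thesis):

    import numpy as np

    def btc_encode_block(block):
        # Bitmap: 1 where the pixel is >= the block mean.
        # Levels a (low) and b (high) preserve the block mean and variance.
        x = block.astype(np.float64)
        m = x.size
        mean, sigma = x.mean(), x.std()
        bitmap = x >= mean
        q = int(bitmap.sum())
        if q in (0, m):              # flat block: one level suffices
            return bitmap, mean, mean
        a = mean - sigma * np.sqrt(q / (m - q))
        b = mean + sigma * np.sqrt((m - q) / q)
        return bitmap, a, b

    def btc_decode_block(bitmap, a, b):
        return np.where(bitmap, b, a)

    blk = np.random.randint(0, 256, (4, 4))
    rec = btc_decode_block(*btc_encode_block(blk))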
APA, Harvard, Vancouver, ISO, and other styles
48

Ming, Shiuh Shieh, and 謝明旭. "Design of an MPEG-1 Video Encoder." Thesis, 2001. http://ndltd.ncl.edu.tw/handle/84608607527864637235.

Full text
Abstract:
Master's thesis
National Taiwan University of Science and Technology
Department of Electrical Engineering
89
This thesis designs DCT (Discrete Cosine Transform), IDCT (Inverse Discrete Cosine Transform), quantization, inverse quantization, RGB-to-YCbCr conversion, and picture re-order circuits for real-time encoding of MPEG-1 video. To simplify circuit complexity, this research employs fixed-point arithmetic for the circuit design. Taking a picture size of 352*240 pixels as an example, the video stream contains at least 30 picture frames per second, which implies 352*240*30 = 2.5344M pixels of encoding data per second, each requiring 3 bytes for the R, G, and B colors; the operating frequency of these circuits must therefore be at least 2.5344*3 = 7.6032 MHz. In this research, we use VHDL (VHSIC Hardware Description Language) for the circuit design and Xilinx Foundation Series 3.1i for design verification.
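A minimal sketch of the fixed-point RGB-to-YCbCr conversion such a design performs, using BT.601 full-range coefficients scaled by 2^16 with rounding and clamping (the exact coefficients and word lengths of the thesis's circuit are not given, so these are assumptions):

    def rgb_to_ycbcr_fixed(r, g, b):
        # Coefficients are round(c * 65536); adding 2**15 before the
        # shift rounds to nearest, and results are clamped to 8 bits;
        # integer-only arithmetic, as a hardware pipeline would use.
        rnd = 1 << 15
        clamp = lambda v: max(0, min(255, v))
        y  = clamp((19595 * r + 38470 * g + 7471 * b + rnd) >> 16)
        cb = clamp(((-11059 * r - 21709 * g + 32768 * b + rnd) >> 16) + 128)
        cr = clamp(((32768 * r - 27439 * g - 5329 * b + rnd) >> 16) + 128)
        return y, cb, cr

    print(rgb_to_ycbcr_fixed(255, 0, 0))  # (76, 85, 255) for pure red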
APA, Harvard, Vancouver, ISO, and other styles
49

Wang, Jan-Zen, and 王建仁. "Implementation of the 3GPP-LTE Turbo Encoder." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/38976640765030474690.

Full text
Abstract:
Master's thesis
National Chung Hsing University
Department of Electrical Engineering
97
In 3GPP-LTE (Third Generation Partnership Project - Long Term Evolution), the channel coding scheme employs the turbo code, which excels at error correction. With the development of mobile communication systems, the uplink and downlink speeds are 50 Mbps and 100 Mbps, respectively, making the multimedia functions of wireless communication devices more flexible. The 3GPP-LTE turbo code defines 188 block sizes between 40 and 6144 bits, and the interleaver address for every block is computed on the fly by the interleaver address generator. Direct hardware implementation of the quadratic permutation polynomial interleaver may waste chip area and power. This thesis therefore calculates the interleaver addresses by recursive computation, in which only adders and multiplexers are needed, increasing the effectiveness of the hardware implementation. However, when x ≥ 2K, the recursive computation containing (x mod K) must carry out the subtraction more than once, which affects the hardware performance of the interleaver. In this thesis, the recursive computation is modified slightly so as to output one interleaver address per clock cycle and achieve high throughput.
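The recursion referred to above follows from the quadratic permutation polynomial pi(x) = (f1*x + f2*x^2) mod K: the first difference g(x) = pi(x+1) - pi(x) = (f1 + f2 + 2*f2*x) mod K is itself updated by a constant, so each new address needs only additions and conditional subtractions (adders and multiplexers). A sketch:

    def qpp_interleaver(K, f1, f2):
        # pi(x+1) = (pi(x) + g(x)) mod K,  g(x+1) = (g(x) + 2*f2) mod K,
        # with pi(0) = 0 and g(0) = (f1 + f2) mod K. Both operands stay
        # below K, so a single conditional subtraction replaces the mod.
        pi, g = 0, (f1 + f2) % K
        step = (2 * f2) % K
        for _ in range(K):
            yield pi
            pi += g
            if pi >= K:
                pi -= K
            g += step
            if g >= K:
                g -= K

    # K = 40 uses (f1, f2) = (3, 10) in the LTE interleaver table:
    addrs = list(qpp_interleaver(40, 3, 10))
    assert addrs == [(3*x + 10*x*x) % 40 for x in range(40)]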
APA, Harvard, Vancouver, ISO, and other styles
50

Su, Wei-kai, and 蘇暐凱. "Research of high resolution optical rotary encoder." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/59521040171224854449.

Full text
Abstract:
Master's thesis
National Central University
Graduate Institute of Optical Sciences
96
This paper employs radiometry and non-sequential ray-tracing simulation to analyze the optical properties of optical rotary encoders (ORE). A numerical integration solution based on radiometry is fitted to the ray-tracing simulation results, and both the numerical solution and the simulation are then verified against experiment. By root-mean-square-error (RMSE) analysis, the RMSE between the experimental and simulation results is 0.00171, and the RMSE between the experimental result and the numerical solution is 0.00168. We also perform a sensitivity analysis of the ORE and find that variation in the width p of the code fringe has the most significant effect, while the distance d between the disc and the mask is secondary. Finally, this paper presents a novel absolute addressing method for the ORE, a method that can also be employed in other kinds of encoders.
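The RMSE figures quoted above come from a comparison of sampled curves; as a trivial sketch of that check (the curves below are stand-ins, not the paper's data):

    import numpy as np

    def rmse(a, b):
        # Root-mean-square error between two equally sampled curves.
        a, b = np.asarray(a, float), np.asarray(b, float)
        return np.sqrt(np.mean((a - b) ** 2))

    theta = np.linspace(0, 2 * np.pi, 360)
    measured  = 0.5 * (1 + np.cos(theta))          # stand-in experiment
    simulated = 0.5 * (1 + np.cos(theta + 0.002))  # stand-in simulation
    print(f"RMSE = {rmse(measured, simulated):.5f}")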
APA, Harvard, Vancouver, ISO, and other styles