
Journal articles on the topic 'Encoder'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Encoder.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles across a wide variety of disciplines and organise your bibliography correctly.

1

Gurauskis, Donatas, Krzysztof Przystupa, Artūras Kilikevičius, Mikołaj Skowron, Jonas Matijošius, Jacek Caban, and Kristina Kilikevičienė. "Development and Experimental Research of Different Mechanical Designs of an Optical Linear Encoder’s Reading Head." Sensors 22, no. 8 (April 13, 2022): 2977. http://dx.doi.org/10.3390/s22082977.

Full text
Abstract:
Optical linear encoders are widely used in manufacturing. They are accurate and have a relatively high resolution and good repeatability. However, there are a lot of side effects, which have an inevitable impact on the performance of an encoder. In general, the majority of these effects could be minimized by the appropriate design of an encoder’s reading head. This paper discusses the working principle of and commonly occurring errors in optical linear encoders. Three different mechanical designs are developed and implemented in the experimental reading head of the linear encoder in order to evaluate how mechanical construction influences the displacement measurement accuracy and total performance of the encoder.
APA, Harvard, Vancouver, ISO, and other styles
2

Herrojo, Cristian, Ferran Paredes, and Ferran Martín. "3D-Printed All-Dielectric Electromagnetic Encoders with Synchronous Reading for Measuring Displacements and Velocities." Sensors 20, no. 17 (August 27, 2020): 4837. http://dx.doi.org/10.3390/s20174837.

Full text
Abstract:
In this paper, 3D-printed electromagnetic (or microwave) encoders with synchronous reading based on permittivity contrast, and devoted to the measurement of displacements and velocities, are reported for the first time. The considered encoders are based on two chains of linearly shaped apertures made on a 3D-printed high-permittivity dielectric material. One such aperture chain contains the identification (ID) code, whereas the other chain provides the clock signal. Synchronous reading is necessary in order to determine the absolute position if the velocity between the encoder and the sensitive part of the reader is not constant. Such absolute position can be determined as long as the whole encoder is encoded with the so-called de Bruijn sequence. For encoder reading, a splitter/combiner structure with each branch loaded with a series gap and a slot resonator (each one tuned to a different frequency) is considered. Such a structure is able to detect the presence of the apertures when the encoder is displaced, at short distance, over the slots. Thus, by injecting two harmonic signals, conveniently tuned, at the input port of the splitter/combiner structure, two amplitude modulated (AM) signals are generated by tag motion at the output port of the sensitive part of the reader. One of the AM envelope functions provides the absolute position, whereas the other one provides the clock signal and the velocity of the encoder. These synchronous 3D-printed all-dielectric encoders based on permittivity contrast are a good alternative to microwave encoders based on metallic inclusions in those applications where low cost as well as major robustness against mechanical wearing and aging effects are the main concerns.
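The absolute-position claim rests on the de Bruijn property mentioned above: every window of n consecutive code symbols occurs exactly once in the sequence, so reading any n consecutive apertures identifies a unique position. A minimal Python sketch using the standard de Bruijn construction (illustrative parameters, not the paper's encoder):

```python
def de_bruijn(k: int, n: int) -> str:
    """Standard construction of a de Bruijn sequence B(k, n) of length k**n."""
    a = [0] * k * n
    seq = []

    def db(t, p):
        if t > n:
            if n % p == 0:
                seq.extend(a[1:p + 1])
        else:
            a[t] = a[t - p]
            db(t + 1, p)
            for j in range(a[t - p] + 1, k):
                a[t] = j
                db(t + 1, t)

    db(1, 1)
    return "".join(str(s) for s in seq)

code = de_bruijn(2, 4)                       # 16-bit binary sequence
wrap = code + code[:3]                       # read cyclically
windows = [wrap[i:i + 4] for i in range(16)]
assert len(set(windows)) == 16               # every 4-bit window is unique
```

Reading any four consecutive apertures therefore pins down the absolute position along the tag, which is why synchronous clock recovery is enough to localize the reader.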
3

Geng, Liming, Guohua Cao, Chunmin Shang, and Hongchang Ding. "Absolute Photoelectric Encoder Based on Position-Sensitive Detector Sensor." Electronics 13, no. 8 (April 11, 2024): 1446. http://dx.doi.org/10.3390/electronics13081446.

Full text
Abstract:
In response to the engineering, miniaturization, and high measurement accuracy requirements of encoders, this paper proposes a new type of absolute photoelectric encoder based on a position-sensitive detector (PSD). It breaks the traditional encoder’s code track design and adopts a continuous and transparent code track design, which has the advantages of small volume, high angle measurement accuracy, and easy engineering. The research content of this article mainly includes the design of a new code disk, decoding circuit, linear light source, and calibration method. The experimental results show that the encoder designed in this article has achieved miniaturization, simple installation and adjustment, and easy engineering. The volume of the encoder is Φ50 mm × 30 mm; after calibration, the resolution is better than 18 bits, and the accuracy reaches 5.4″, which further demonstrates the feasibility of the encoder’s encoding and decoding scheme.
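For background, a one-dimensional PSD reports the light-spot position through the ratio of its two terminal photocurrents. A sketch of the standard centroid relation (generic formula with illustrative values; the paper's calibration procedure is more involved):

```python
def psd_position(i1: float, i2: float, length_mm: float) -> float:
    """Spot position on a 1-D position-sensitive detector, measured from the
    centre, given the photocurrents i1 and i2 at its two terminals."""
    return (length_mm / 2) * (i2 - i1) / (i1 + i2)

# Equal currents: spot at the centre; all current at one terminal: at the edge.
assert psd_position(1.0, 1.0, 10.0) == 0.0
assert psd_position(0.0, 2.0, 10.0) == 5.0
```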
4

Siva Kumar, M., S. Syed Shameem, M. N. V. Raghu Sai, Dheeraj Nikhil, P. Kartheek, and K. Hari Kishore. "Efficient and low latency turbo encoder design using Verilog-Hdl." International Journal of Engineering & Technology 7, no. 1.5 (December 31, 2017): 37. http://dx.doi.org/10.14419/ijet.v7i1.5.9119.

Full text
Abstract:
Low-complexity turbo-like codes built on simple trellis or graph structures allow encoding with low complexity. Among these, convolutional and turbo codes are widely used because of their excellent error-control performance. The most popular decoding algorithm, iterative decoding, requires an exponential growth in hardware complexity to achieve higher decoding accuracy. This paper makes use of a Log-MAP-based iterative decoding technique and focuses on the realization of the turbo encoder. Turbo codes are built from Recursive Systematic Convolutional (RSC) encoders separated by an interleaver (the component that rearranges the bit sequence), which plays an essential role in the encoding process. This paper presents the design of a parallel connection of RSC encoders and an interleaver to limit delay, forming a turbo encoder. The turbo encoder is designed in Verilog-HDL and synthesized with Xilinx ISE.
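The parallel-concatenated structure can be sketched behaviorally as two recursive systematic convolutional encoders sharing the systematic bits, one fed through an interleaver. The sketch below uses a generic memory-2 RSC with (7, 5) octal generators and a toy interleaver pattern, not the paper's exact design:

```python
def rsc_encode(bits):
    """Rate-1/2 recursive systematic convolutional encoder,
    feedback 1 + D + D^2 (7 octal), feedforward 1 + D^2 (5 octal)."""
    parity = []
    s0 = s1 = 0                 # shift-register state
    for b in bits:
        fb = b ^ s0 ^ s1        # recursive feedback term
        parity.append(fb ^ s1)  # feedforward output
        s1, s0 = s0, fb
    return parity

def turbo_encode(bits, perm):
    """Parallel concatenation: systematic bits plus two RSC parity streams."""
    p1 = rsc_encode(bits)
    p2 = rsc_encode([bits[i] for i in perm])  # interleaved copy
    return bits, p1, p2

bits = [1, 0, 1, 1, 0, 0, 1, 0]
perm = [3, 7, 0, 5, 2, 6, 1, 4]               # toy interleaver pattern
sys_bits, p1, p2 = turbo_encode(bits, perm)
assert sys_bits == bits and len(p1) == len(p2) == len(bits)
```

Running both RSC encoders on the same clock, as the paper's parallel connection does in hardware, is what keeps the encoding latency low.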
5

Wang, Bairui, Lin Ma, Wei Zhang, Wenhao Jiang, and Feng Zhang. "Hierarchical Photo-Scene Encoder for Album Storytelling." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 8909–16. http://dx.doi.org/10.1609/aaai.v33i01.33018909.

Full text
Abstract:
In this paper, we propose a novel model with a hierarchical photo-scene encoder and a reconstructor for the task of album storytelling. The photo-scene encoder contains two subencoders, namely the photo and scene encoders, which are stacked together and behave hierarchically to fully exploit the structure information of the photos within an album. Specifically, the photo encoder generates semantic representation for each photo while exploiting temporal relationships among them. The scene encoder, relying on the obtained photo representations, is responsible for detecting the scene changes and generating scene representations. Subsequently, the decoder dynamically and attentively summarizes the encoded photo and scene representations to generate a sequence of album representations, based on which a story consisting of multiple coherent sentences is generated. In order to fully extract the useful semantic information from an album, a reconstructor is employed to reproduce the summarized album representations based on the hidden states of the decoder. The proposed model can be trained in an end-to-end manner, which results in an improved performance over the state-of-the-arts on the public visual storytelling (VIST) dataset. Ablation studies further demonstrate the effectiveness of the proposed hierarchical photo-scene encoder and reconstructor.
6

Yang, Fan, Xinji Lu, Artūras Kilikevičius, and Donatas Gurauskis. "Methods for Reducing Subdivision Error within One Signal Period of Single-Field Scanning Absolute Linear Encoder." Sensors 23, no. 2 (January 12, 2023): 865. http://dx.doi.org/10.3390/s23020865.

Full text
Abstract:
Optical encoders are widely used in accurate displacement measurement and motion-control technologies. Based on the measurement method, optical encoders can be divided into absolute and incremental encoders. Absolute linear encoders are commonly used in advanced computer numerical control (CNC) machines. The subdivision error within one signal period (SDE) of the absolute linear encoder is vital to the positioning accuracy and low-velocity control of CNC machines. In this paper, we study the working principle of the absolute linear encoder and propose two methods for reducing its SDE: a single-field scanning method based on the shutter-shaped Moiré fringe, and a method for suppressing harmonics through a phase shift of the index grating. We established an SDE measuring device to determine the absolute linear encoder's SDE, which we measured using a constant-speed approach. With our proposed methods, the SDE was reduced from ±0.218 μm to ±0.135 μm, a decrease of 38.07%. Our fast Fourier transform (FFT) analysis of the collected Moiré fringe signals demonstrated that the third-, fifth-, and seventh-order harmonics were effectively suppressed.
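The FFT check described above can be reproduced on a synthetic fringe signal; here a small third-order harmonic is injected and then recovered from the spectrum (illustrative frequencies and amplitude, not measured data):

```python
import numpy as np

N = 1024
t = np.arange(N) / N
# Fundamental at 4 cycles per record plus a 5% third-order harmonic,
# the kind of distortion that produces subdivision error (SDE).
signal = np.sin(2 * np.pi * 4 * t) + 0.05 * np.sin(2 * np.pi * 12 * t)

spectrum = np.abs(np.fft.rfft(signal)) / (N / 2)  # normalize to amplitude
fundamental = spectrum[4]
third_harmonic = spectrum[12]
assert abs(third_harmonic / fundamental - 0.05) < 1e-6
```

The same spectrum inspection, applied before and after a harmonic-suppression fix, is a straightforward way to quantify how much of the distortion was removed.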
7

Ban, Jingxuan, Gang Chen, Lei Wang, and Yue Meng. "A calibration method for rotary optical encoder temperature error in a rotational inertial navigation system." Measurement Science and Technology 33, no. 6 (March 17, 2022): 065203. http://dx.doi.org/10.1088/1361-6501/ac4c67.

Full text
Abstract:
A rotary optical encoder is an important component in a rotational inertial navigation system (RINS). It is used to form a closed-loop motor control system and to calculate the system attitude. The system performance is affected by the encoder's error. In addition to installation errors, variations in working temperature can lead to encoder error. Therefore, in this paper we propose a method to calibrate and compensate the temperature errors of rotary optical encoders. First, an independent testing mechanism with position limitation and a rotatable platform is designed and produced to verify the temperature influence on encoders. Then, the temperature error of the rotary optical encoder used in the RINS is calculated by a gyroscope whose sensitive axis is parallel to the same motor axis. The method is verified on a self-developed single-axis RINS. According to the experimental results, the measurement accuracy is increased by more than 47.9% compared to the traditional method.
8

Jing, Yongcheng, Xiao Liu, Yukang Ding, Xinchao Wang, Errui Ding, Mingli Song, and Shilei Wen. "Dynamic Instance Normalization for Arbitrary Style Transfer." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 4369–76. http://dx.doi.org/10.1609/aaai.v34i04.5862.

Full text
Abstract:
Prior normalization methods rely on affine transformations to produce arbitrary image style transfers, of which the parameters are computed in a pre-defined way. Such manually-defined nature eventually results in the high-cost and shared encoders for both style and content encoding, making style transfer systems cumbersome to be deployed in resource-constrained environments like on the mobile-terminal side. In this paper, we propose a new and generalized normalization module, termed as Dynamic Instance Normalization (DIN), that allows for flexible and more efficient arbitrary style transfers. Comprising an instance normalization and a dynamic convolution, DIN encodes a style image into learnable convolution parameters, upon which the content image is stylized. Unlike conventional methods that use shared complex encoders to encode content and style, the proposed DIN introduces a sophisticated style encoder, yet comes with a compact and lightweight content encoder for fast inference. Experimental results demonstrate that the proposed approach yields very encouraging results on challenging style patterns and, to our best knowledge, for the first time enables an arbitrary style transfer using MobileNet-based lightweight architecture, leading to a reduction factor of more than twenty in computational cost as compared to existing approaches. Furthermore, the proposed DIN provides flexible support for state-of-the-art convolutional operations, and thus triggers novel functionalities, such as uniform-stroke placement for non-natural images and automatic spatial-stroke control.
9

Gutiérrez-Aguado, Juan, Raúl Peña-Ortiz, Miguel Garcia-Pineda, and Jose M. Claver. "A Cloud-Based Distributed Architecture to Accelerate Video Encoders." Applied Sciences 10, no. 15 (July 23, 2020): 5070. http://dx.doi.org/10.3390/app10155070.

Full text
Abstract:
Nowadays, video coding and transcoding are of great interest and have an important impact in areas such as high-definition video and entertainment, healthcare and elderly care, high-resolution video surveillance, self-driving cars, and e-learning. This growing demand for high-resolution video drives the proposal of new codecs and the development of encoders with high computational requirements, so new strategies are needed to accelerate them. Cloud infrastructures offer interesting features for video coding, such as on-demand resource allocation, multitenancy, elasticity, and resiliency. This paper proposes a cloud-based distributed architecture, in which the network and storage layers have been tuned, to accelerate video encoders over an elastic number of worker encoder nodes. Moreover, an application is developed and executed in the proposed architecture to allow the creation of encoding jobs, their dynamic assignment, their execution on the worker encoder nodes, and the reprogramming of failed ones. To validate the proposed architecture, the parallel execution of existing video encoders, x265 for H.265/HEVC and libvpx-vp9 for VP9, has been evaluated in terms of scalability, workload, and job distribution, varying the number of encoder nodes. The quality of the encoded videos has been analyzed for different bit rates and numbers of frames per job using the Peak Signal-to-Noise Ratio (PSNR). Results show that our proposal maintains video quality compared with sequential encoding while improving encoding time, which can decrease by nearly 90%, depending on the codec and the number of encoder nodes.
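The PSNR used for the quality comparison is a simple function of the mean squared error between reference and encoded frames. A sketch with a made-up 8×8 frame (illustrative values, not the paper's test sequences):

```python
import numpy as np

def psnr(reference: np.ndarray, encoded: np.ndarray, peak: float = 255.0) -> float:
    """Peak Signal-to-Noise Ratio in dB; higher means closer to the reference."""
    mse = np.mean((reference.astype(np.float64) - encoded.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")     # identical frames
    return 10 * np.log10(peak ** 2 / mse)

ref = np.full((8, 8), 100, dtype=np.uint8)
enc = ref.copy()
enc[0, 0] = 110                 # one distorted pixel
assert abs(psnr(ref, enc) - 46.19) < 0.01
```

Averaging this per-frame score over a sequence gives the kind of quality curve used to confirm that parallel encoding does not degrade the output.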
10

Soleimani, Mohammad, and Siroos Toofan. "Improvement of Gray ROM-Based Encoder for Flash ADCs." Journal of Circuits, Systems and Computers 28, no. 06 (June 12, 2019): 1950097. http://dx.doi.org/10.1142/s021812661950097x.

Full text
Abstract:
In this paper, a gray ROM-based encoder is proposed for the implementation of the flash ADC encoder block, based on converting the conventional 1-of-[Formula: see text] thermometer codes to 2-of-[Formula: see text] codes ([Formula: see text]). The proposed gray ROM-based encoder is composed of three stages. In the first stage, the thermometer codes are converted to 2-of-[Formula: see text] codes by the use of two-input AND and four-input merged AND-OR gates. In the second stage, the 2-of-[Formula: see text] codes are turned into [Formula: see text] gray codes and a binary code by a quasi-gray ROM encoder and a binary ROM encoder, respectively. Finally, in the third stage, the [Formula: see text] MSB bits and the LSB bit are determined by a quasi-gray-to-binary converter and a CMOS inverter, respectively. The advantages of the proposed encoder over the conventional encoder are the higher speed of the second stage, low power, low area, and low latency, with the same bubble- and metastability-error removal capability. To demonstrate these specifications, two 5-bit flash ADCs with the conventional and proposed encoders in their encoder blocks were analyzed and simulated at 2-GS/s and 3.2-GS/s sampling rates in a 0.18-μm CMOS process. Simulation results show that the ENOBs of the flash ADCs with the conventional and proposed encoders are equal. The proposed encoder outputs are determined approximately 30 ps faster than the conventional encoder at 2 GS/s. The power dissipations of the conventional and proposed encoders were 19.50 mW and 13.90 mW at a 3.2-GS/s sampling rate from a 1.8-V supply, and the latencies of the encoders were 4 ADC clocks and 3 ADC clocks, respectively. In this case, the number of D-FFs and logic gates of the proposed encoder is decreased by approximately 37% compared to the conventional encoder.
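A behavioral model of two of the stages above, decoding a thermometer code by counting ones and converting a Gray code back to binary, may clarify the pipeline (a software sketch only, not the CMOS gate-level design):

```python
def thermometer_decode(bits) -> int:
    """A thermometer code is a run of ones followed by zeros;
    its value is simply the number of ones (bubble-free case)."""
    return sum(bits)

def gray_to_binary(g: int) -> int:
    """Gray-to-binary conversion, as in the encoder's final converter stage."""
    b = g
    while g:
        g >>= 1
        b ^= g
    return b

assert thermometer_decode([1, 1, 1, 1, 1, 0, 0]) == 5  # 5 comparators high
assert gray_to_binary(0b111) == 0b101                  # Gray 111 -> binary 101 (5)
```

Gray intermediate codes are attractive in hardware because adjacent levels differ in a single bit, which limits the impact of metastability at the comparator boundary.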
11

Zhang, Shouhuan. "Research on the analog-to-digital hybrid square wave encoder signal error compensator." Applied and Computational Engineering 49, no. 1 (March 22, 2024): 220–28. http://dx.doi.org/10.54254/2755-2721/49/20241194.

Full text
Abstract:
Photoelectric encoders are important electronic components, widely used in motion control and position measurement systems, and can measure angular displacement or displacement with high accuracy. Encoders in wide use today output either analog sine waves or digital square waves. The encoder's resolution determines its measurement accuracy. Because the sine-wave encoder can be subdivided more easily, it is gradually replacing the square-wave encoder in many applications, but the square-wave encoder remains widely used in low-cost systems with modest accuracy requirements. The prerequisite for high subdivision of the square-wave encoder's output is good signal quality. Through a study of the signal output by an ideal encoder, this paper identifies the characteristics that distinguish the error signal from the ideal signal, divides the errors into two categories, designs a two-step filter to compensate each category separately, and uses Simulink to establish a simulation model for verification.
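The subdivision advantage of sine-wave encoders usually comes from arctangent interpolation of the quadrature sine/cosine pair; a generic sketch of that step (function name and subdivision count are illustrative, not from the paper):

```python
import math

def subdivide(sin_v: float, cos_v: float, period_count: int, steps: int = 1024):
    """Interpolate the position inside one signal period from the quadrature
    sine/cosine pair; returns (position in periods, subdivision step index)."""
    phase = math.atan2(sin_v, cos_v) % (2 * math.pi)
    fraction = phase / (2 * math.pi)          # 0 .. 1 within the period
    return period_count + fraction, round(fraction * steps)

# A quarter of the way through period 10:
pos, step = subdivide(math.sin(math.pi / 2), math.cos(math.pi / 2), 10)
assert abs(pos - 10.25) < 1e-9 and step == 256
```

Any amplitude, offset, or harmonic distortion in the sine/cosine pair bends this phase estimate, which is exactly why signal quality is the prerequisite for high subdivision.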
12

Ren, Zhe, Xizhong Qin, and Wensheng Ran. "SLNER: Chinese Few-Shot Named Entity Recognition with Enhanced Span and Label Semantics." Applied Sciences 13, no. 15 (July 26, 2023): 8609. http://dx.doi.org/10.3390/app13158609.

Full text
Abstract:
Few-shot named entity recognition requires sufficient prior knowledge to transfer valuable knowledge to the target domain with only a few labeled examples. Existing Chinese few-shot named entity recognition methods suffer from inadequate prior knowledge and limitations in feature representation. In this paper, we utilize enhanced Span and Label semantic representations for Chinese few-shot Named Entity Recognition (SLNER) to address the problem. Specifically, SLNER utilizes two encoders. One encoder is used to encode the text and its spans, and we employ the biaffine attention mechanism and self-attention to obtain enhanced span representations. This approach fully leverages the internal composition of entity mentions, leading to more accurate feature representations. The other encoder encodes the full label names to obtain label representations. Label names are broad representations of specific entity categories and share similar semantic meanings with entities. This similarity allows label names to offer valuable prior knowledge in few-shot scenarios. Finally, our model learns to match span representations with label representations. We conducted extensive experiments on three sampling benchmark Chinese datasets and a self-built food safety risk domain dataset. The experimental results show that our model outperforms previous state-of-the-art methods by 0.20–6.57% in F1 score in few-shot settings.
13

Theunissen, Carl Daniel, Steven Martin Bradshaw, Lidia Auret, and Tobias Muller Louw. "One-Dimensional Convolutional Auto-Encoder for Predicting Furnace Blowback Events from Multivariate Time Series Process Data—A Case Study." Minerals 11, no. 10 (October 9, 2021): 1106. http://dx.doi.org/10.3390/min11101106.

Full text
Abstract:
Modern industrial mining and mineral processing applications are characterized by large volumes of historical process data. Hazardous events occurring in these processes compromise process safety and therefore overall viability. These events are recorded in historical data and are often preceded by characteristic patterns. Reconstruction-based data-driven models are trained to reconstruct the characteristic patterns of hazardous event-preceding process data with minimal residuals, facilitating effective event prediction based on reconstruction residuals. This investigation evaluated one-dimensional convolutional auto-encoders as reconstruction-based data-driven models for predicting positive pressure events in industrial furnaces. A simple furnace model was used to generate dynamic multivariate process data with simulated positive pressure events to use as a case study. A one-dimensional convolutional auto-encoder was trained as a reconstruction-based model to recognize the data preceding the hazardous events, and its performance was evaluated by comparing it to a fully-connected auto-encoder as well as a principal component analysis reconstruction model. This investigation found that one-dimensional convolutional auto-encoders recognized event-preceding patterns with lower detection delays, higher specificities, and lower missed alarm rates, suggesting that the one-dimensional convolutional auto-encoder layout is superior to the fully connected auto-encoder layout for use as a reconstruction-based event prediction model. This investigation also found that the nonlinear auto-encoder models outperformed the linear principal component model investigated. While the one-dimensional auto-encoder was evaluated comparatively on a simulated furnace case study, the methodology used in this evaluation can be applied to industrial furnaces and other mineral processing applications. 
Further investigation using industrial data will allow for a view of the convolutional auto-encoder’s absolute performance as a reconstruction-based hazardous event prediction model.
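The reconstruction-residual idea can be demonstrated with the linear PCA baseline the study compares against: fit the model on normal data, then flag samples whose reconstruction error exceeds anything normal data ever produced (synthetic two-variable data, not the furnace simulation):

```python
import numpy as np

rng = np.random.default_rng(0)

# "Normal" process data: two strongly correlated variables plus small noise.
normal = rng.normal(size=(500, 1)) @ np.array([[1.0, 0.8]]) \
         + 0.05 * rng.normal(size=(500, 2))

mean = normal.mean(axis=0)
_, _, vt = np.linalg.svd(normal - mean, full_matrices=False)
pc = vt[:1]                                   # first principal direction

def residual(x):
    """Reconstruction error of one sample under the 1-component PCA model."""
    c = x - mean
    return np.linalg.norm(c - (c @ pc.T) @ pc)

threshold = max(residual(x) for x in normal)  # crude alarm limit
anomaly = np.array([3.0, -3.0])               # violates the learned correlation
assert residual(anomaly) > threshold          # event would be flagged
```

A convolutional auto-encoder plays the same role with a nonlinear, time-windowed reconstruction, which is what gives it the lower detection delays reported above.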
14

Miljkovic, Goran S., and Dragan B. Denic. "Redundant and Flexible Pseudorandom Optical Rotary Encoder." Elektronika ir Elektrotechnika 26, no. 6 (December 18, 2020): 10–16. http://dx.doi.org/10.5755/j01.eie.26.6.25476.

Full text
Abstract:
Optical encoders are mainly used in modern motion servo systems for high-resolution and reliable position and velocity feedback. Pseudorandom optical rotary encoders are single-track and use a serial pseudorandom binary code to measure absolute position. The realization and analysis of such a rotary encoder with advanced code scanning and error detection techniques, as well as an improved redundancy in operation, are presented. A presented serial code reading solution uses two phase-shifted code tracks and two optical encoder modules. So, the realized encoder, hybrid in nature, provides “output on demand” and more or less reliable position information using very efficient error checking. Compared to a standard absolute encoder, this encoder requires a smaller code disc, facilitates installation, has greater flexibility in operation, and is less sensitive to external influences.
15

Bai, Wenjun, Changqin Quan, and Zhi-Wei Luo. "Learning Flexible Latent Representations via Encapsulated Variational Encoders." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 9913–14. http://dx.doi.org/10.1609/aaai.v33i01.33019913.

Full text
Abstract:
Learning flexible latent representation of observed data is an important precursor for most downstream AI applications. To this end, we propose a novel form of variational encoder, i.e., encapsulated variational encoders (EVE) to exert direct control over encoded latent representations along with its learning algorithm, i.e., the EVE compatible automatic variational differentiation inference algorithm. Armed with this property, our derived EVE is capable of learning converged and diverged latent representations. Using CIFAR-10 as an example, we show that the learning of converged latent representations brings a considerable improvement on the discriminative performance of the semi-supervised EVE. Using MNIST as a demonstration, the generative modelling performance of the EVE induced variational auto-encoder (EVAE) can be largely enhanced with the help of learned diverged latent representations.
16

Chen, Shuangshuang, and Wei Guo. "Auto-Encoders in Deep Learning—A Review with New Perspectives." Mathematics 11, no. 8 (April 7, 2023): 1777. http://dx.doi.org/10.3390/math11081777.

Full text
Abstract:
Deep learning, which is a subfield of machine learning, has opened a new era for the development of neural networks. The auto-encoder is a key component of deep structure, which can be used to realize transfer learning and plays an important role in both unsupervised learning and non-linear feature extraction. By highlighting the contributions and challenges of recent research papers, this work aims to review state-of-the-art auto-encoder algorithms. Firstly, we introduce the basic auto-encoder as well as its basic concept and structure. Secondly, we present a comprehensive summarization of different variants of the auto-encoder. Thirdly, we analyze and study auto-encoders from three different perspectives. We also discuss the relationships between auto-encoders, shallow models and other deep learning models. The auto-encoder and its variants have successfully been applied in a wide range of fields, such as pattern recognition, computer vision, data generation, recommender systems, etc. Then, we focus on the available toolkits for auto-encoders. Finally, this paper summarizes the future trends and challenges in designing and training auto-encoders. We hope that this survey will provide a good reference when using and designing AE models.
17

Song, Zheng, and Qing Sheng Hu. "10Gb/s RS-BCH Concatenated Encoder with Pipelined Strategies for Fiber Communication." Advanced Materials Research 429 (January 2012): 154–58. http://dx.doi.org/10.4028/www.scientific.net/amr.429.154.

Full text
Abstract:
This paper presents a 10 Gb/s concatenated encoder compatible with the G.975 protocol. To achieve the high data rate, 8 RS encoders operate in a pipelined pattern. After interleaving, realized with 8 RAM blocks, the outputs of the RS encoders are sent to 64 BCH encoders working in parallel. The concatenated encoder has been implemented on a Xilinx Virtex-5 FPGA, and measurement results show that a data rate of 10 Gb/s is achieved at a working frequency of 156 MHz. About 9711 registers, 6984 LUTs, and 40 Block RAMs are utilized for the whole encoder.
18

Alharbi, Majed, Ahmed Stohy, Mohammed Elhenawy, Mahmoud Masoud, and Hamiden El-Wahed Khalifa. "Solving Traveling Salesman Problem with Time Windows Using Hybrid Pointer Networks with Time Features." Sustainability 13, no. 22 (November 22, 2021): 12906. http://dx.doi.org/10.3390/su132212906.

Full text
Abstract:
This paper introduces a time-efficient deep learning-based solution to the traveling salesman problem with time windows (TSPTW). Our goal is to reduce the total tour length traveled by the agent without violating any time limitations. This will aid in decreasing the time required to supply any type of service, as well as lowering the emissions produced by automobiles, allowing our planet to recover from air pollution. The proposed model is a variation of the pointer networks that has a better ability to encode TSPTW problems. The model proposed in this paper is inspired by our previous work that introduced a hybrid context encoder and a multi-attention decoder. The hybrid encoder primarily comprises a transformer encoder and a graph encoder; these encoders encode the feature vector before passing it to the attention decoder layer. The decoder likewise consists of a transformer context and a graph context. The output attentions from the two decoders are aggregated and used to select the following step in the trip. To the best of our knowledge, ours is the first neural model able to solve medium-size TSPTW problems. Moreover, we conducted a sensitivity analysis to explore how model performance changes as the time window width in the training and testing data changes. The experimental work shows that our proposed model outperforms the state-of-the-art model for TSPTW of sizes 20, 50 and 100 nodes/cities. We expect that our model will become the state-of-the-art methodology for solving TSPTW problems.
19

Xu, Li, and Dechun Zheng. "Data Acquisition and Performance Analysis of Image-Based Photonic Encoder Using Field-Programmable Gate Array (FPGA)." Journal of Nanoelectronics and Optoelectronics 18, no. 12 (December 1, 2023): 1475–83. http://dx.doi.org/10.1166/jno.2023.3542.

Full text
Abstract:
With the continuous advancement of numerical control technology, the requirements for the position detection resolution, precision, and size of photoelectric encoders in computer numerical control machine tools are increasingly stringent. In the pursuit of high resolution and precision, this work investigates the principles of electronic subdivision and embedded hardware. It designs a high-precision image-based photonic encoder using a Field-Programmable Gate Array (FPGA). This photonic encoder captures the pattern of a rotating code disk using a complementary metal-oxide-semiconductor (CMOS) image sensor. The encoder’s core is the XC6SLX25T chip from the Spartan-6 series, with peripheral circuits including only A/D sampling and low-pass signal processing circuits. The FPGA module handles the digital signal reception, waveform conversion, quadrature frequency coarse count calculation, fine count subdivision calculation, and final position calculation of the encoder. In experiments, the output signal of the photonic encoder contains many impurities. After processing by the signal processing module, the A and B phase signals are not affected by previous interference, with a phase difference of 90°, meeting the requirements for subsequent signal processing modules. After fine count subdivision processing, the waveform graph significantly increases within one cycle, and after quadrupling the frequency, 30 subdivisions are performed within each cycle. Noise is introduced into graphic positioning or graphics are positioned under different noise conditions. Experimental results show that utilizing an improved centroid algorithm helps further suppress noise and enhance measurement accuracy in the design of image-based photonic encoders.
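The A/B quadrature counting that "quadruples the frequency" can be modeled as a small transition table: each valid transition between the four (A, B) states moves the count by one, giving four counts per signal period (a generic software model, not the FPGA design):

```python
# States are encoded as A*2 + B; the forward cycle is 0 -> 1 -> 3 -> 2 -> 0.
STEP = {(0, 1): 1, (1, 3): 1, (3, 2): 1, (2, 0): 1,
        (1, 0): -1, (3, 1): -1, (2, 3): -1, (0, 2): -1}

def decode(samples):
    """samples: sequence of (A, B) logic levels; returns the signed count."""
    count = 0
    prev = samples[0][0] << 1 | samples[0][1]
    for a, b in samples[1:]:
        cur = a << 1 | b
        count += STEP.get((prev, cur), 0)   # illegal double-jumps are ignored
        prev = cur
    return count

forward = [(0, 0), (0, 1), (1, 1), (1, 0), (0, 0)]  # one full period
assert decode(forward) == 4                          # 4 counts per period
assert decode(forward[::-1]) == -4                   # reversed direction
```

The 90° phase difference between A and B is what makes every transition unambiguous in direction; fine-count subdivision then interpolates further within each quarter period.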
20

Sujatha, E., Dr C. Subhas, and Dr M. N. Giri Prasad. "High performance turbo encoder using mealy FSM state encoding technique." International Journal of Engineering & Technology 7, no. 3.3 (June 8, 2018): 255. http://dx.doi.org/10.14419/ijet.v7i2.33.14163.

Full text
Abstract:
Error-correction coding plays a vital role in obtaining efficient, high-quality data transmission in today's high-speed wireless communication systems. To meet the high data rates required by the Long Term Evolution (LTE) system, a parallel concatenation of two convolutional encoders is used to build the turbo encoder. In this work, a high-speed turbo encoder, a key component in the transmitter of a wireless communication system, with a memory-based interleaver has been designed and implemented on an FPGA for the 3rd Generation Partnership Project (3GPP) Long Term Evolution-Advanced (LTE-A) standard using a Finite State Machine (FSM) encoding technique. The memory-based quadratic permutation polynomial (QPP) interleaver shuffles a sequence of binary data and supports any of the 188 block sizes from N = 40 to N = 6144. The proposed turbo encoder is implemented using 28 nm CMOS technology and achieves a 300 Mbps data rate using 1% of the available hardware logic. With the proposed technique, encoded data can be released continuously with the help of two parallel memories that write/read the input using a pipelining concept.
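The QPP interleaver named here computes the permutation pi(i) = (f1*i + f2*i^2) mod K defined in 3GPP TS 36.212. A minimal sketch (the coefficients f1 = 3, f2 = 10 for K = 40 are taken, to the best of my knowledge, from the 3GPP block-size table; verify against the specification before relying on them):

```python
def qpp_interleave(symbols, f1, f2):
    """3GPP QPP interleaver: output position i reads input (f1*i + f2*i*i) mod K."""
    K = len(symbols)
    return [symbols[(f1 * i + f2 * i * i) % K] for i in range(K)]

# K = 40 with f1 = 3, f2 = 10; a valid QPP is always a permutation of 0..K-1
pi = qpp_interleave(list(range(40)), 3, 10)
assert sorted(pi) == list(range(40))
```

The hardware version in the paper realizes the same mapping with memories and pipelining rather than computing the polynomial per symbol.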
APA, Harvard, Vancouver, ISO, and other styles
21

Kang, Byungseok, and Youngjae Jo. "A Semantic Segment Encoder (SSE): Improving human face inversion quality through minimized learning space." PLOS ONE 18, no. 12 (December 5, 2023): e0295316. http://dx.doi.org/10.1371/journal.pone.0295316.

Full text
Abstract:
Recently, Generative Adversarial Networks (GANs) have developed greatly and are widely used in image synthesis. StyleGAN (A Style-Based Generator Architecture for Generative Adversarial Networks), the foremost among them, continues to advance the human face inversion domain. StyleGAN uses an insufficient vector space to express more than one million pixels, and the distortion-editability tradeoff in its latent space makes it difficult to apply in real business. To overcome this, we propose a novel semantic segment encoder (SSE) with improved face inversion quality, achieved by narrowing the size of the restoration latent space. The encoder's learning area is minimized to logical semantic-segment units that humans can recognize. The proposed encoder does not affect other segments because only one segment is edited at a time. To verify the face inversion quality, we compared the proposed encoder with two recent encoders, Pixel2style2Pixel and RestyleEncoder. Experimental results show that the proposed encoder improves distortion quality by around 20% while maintaining editing performance.
APA, Harvard, Vancouver, ISO, and other styles
22

Nguyen, Quoc Toan. "Defective sewing stitch semantic segmentation using DeeplabV3+ and EfficientNet." Inteligencia Artificial 25, no. 70 (November 24, 2022): 64–76. http://dx.doi.org/10.4114/intartif.vol25iss70pp64-76.

Full text
Abstract:
Defective stitch inspection is an essential part of garment manufacturing quality assurance. Traditional mechanical defect detection systems are effective, but they are usually customized with handcrafted features that must be operated by a human. Deep learning approaches have recently demonstrated exceptional performance in a wide range of computer vision applications. The requirement for precise detail evaluation, combined with the small size of the patterns, undoubtedly increases the difficulty of identification. Therefore, image segmentation (semantic segmentation) was employed for this task. It is identified as a vital research topic in the field of computer vision, being indispensable in a wide range of real-world applications. Semantic segmentation is a method of labeling each pixel in an image. This is in direct contrast to classification, which assigns a single label to the entire image; multiple objects of the same class are treated as a single entity. The proposed technique is the DeepLabV3+ architecture, which follows an encoder-decoder design. EfficientNet models (B0-B2) were applied as encoders in the experiments. The encoder is utilized to encode feature maps from the input image, and the decoder uses the encoder's information for upsampling and reconstruction of the output. Finally, the best model is DeepLabV3+ with EfficientNetB1, which classifies segmented defective sewing stitches with superior performance (MeanIoU: 94.14%).
APA, Harvard, Vancouver, ISO, and other styles
23

Bous, Frederik, and Axel Roebel. "A Bottleneck Auto-Encoder for F0 Transformations on Speech and Singing Voice." Information 13, no. 3 (February 23, 2022): 102. http://dx.doi.org/10.3390/info13030102.

Full text
Abstract:
In this publication, we present a deep learning-based method to transform the f0 in speech and singing voice recordings. f0 transformation is performed by training an auto-encoder on the voice signal’s mel-spectrogram and conditioning the auto-encoder on the f0. Inspired by AutoVC/F0, we apply an information bottleneck to it to disentangle the f0 from its latent code. The resulting model successfully applies the desired f0 to the input mel-spectrograms and adapts the speaker identity when necessary, e.g., if the requested f0 falls out of the range of the source speaker/singer. Using the mean f0 error in the transformed mel-spectrograms, we define a disentanglement measure and perform a study over the required bottleneck size. The study reveals that to remove the f0 from the auto-encoder’s latent code, the bottleneck size should be smaller than four for singing and smaller than nine for speech. Through a perceptive test, we compare the audio quality of the proposed auto-encoder to f0 transformations obtained with a classical vocoder. The perceptive test confirms that the audio quality is better for the auto-encoder than for the classical vocoder. Finally, a visual analysis of the latent code for the two-dimensional case is carried out. We observe that the auto-encoder encodes phonemes as repeated discontinuous temporal gestures within the latent code.
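The role of the bottleneck size studied in this abstract can be illustrated with a linear toy model: PCA is the optimal linear auto-encoder, and shrinking its code dimension forces information to be discarded. This is only a sketch of the principle, not the paper's neural architecture:

```python
import numpy as np

def linear_autoencode(X, k):
    """Optimal linear auto-encoder with a code of size k (PCA): encode, decode."""
    mean = X.mean(axis=0)
    Xc = X - mean
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    code = Xc @ Vt[:k].T               # k-dimensional bottleneck
    return code @ Vt[:k] + mean        # reconstruction from the bottleneck

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
err_small = np.mean((X - linear_autoencode(X, 2)) ** 2)
err_large = np.mean((X - linear_autoencode(X, 9)) ** 2)
assert err_small > err_large > 0       # tighter bottleneck, more information lost
```

In the paper, the discarded quantity of interest is the f0, which the conditioning input then re-supplies to the decoder.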
APA, Harvard, Vancouver, ISO, and other styles
24

Zhu, Yufei, Zuocheng Xing, Zerun Li, Yang Zhang, and Yifan Hu. "High Area-Efficient Parallel Encoder with Compatible Architecture for 5G LDPC Codes." Symmetry 13, no. 4 (April 16, 2021): 700. http://dx.doi.org/10.3390/sym13040700.

Full text
Abstract:
This paper presents a novel parallel quasi-cyclic low-density parity-check (QC-LDPC) encoding algorithm with low complexity, which is compatible with 5th generation (5G) new radio (NR). Based on the algorithm, we propose a highly area-efficient parallel encoder with a compatible architecture. The proposed encoder has the advantages of parallel encoding and pipelined operations. Furthermore, it is designed as a configurable encoding structure that is fully compatible with the different base graphs of 5G LDPC codes; the architecture therefore adapts flexibly to various 5G LDPC codes. The proposed encoder was synthesized in a 65 nm CMOS technology. Following the encoder architecture, we implemented nine encoders for the distributed lifting sizes of the two base graphs. The experimental results show that the encoder achieves high performance and significant area efficiency, surpassing related prior art. This work comprises a complete encoding algorithm and the compatible encoders, which are fully compatible with the different base graphs of 5G LDPC codes, and thus adapts flexibly to various 5G application scenarios.
APA, Harvard, Vancouver, ISO, and other styles
25

Wang, Jixuan, Kai Wei, Martin Radfar, Weiwei Zhang, and Clement Chung. "Encoding Syntactic Knowledge in Transformer Encoder for Intent Detection and Slot Filling." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 16 (May 18, 2021): 13943–51. http://dx.doi.org/10.1609/aaai.v35i16.17642.

Full text
Abstract:
We propose a novel Transformer encoder-based architecture with syntactic knowledge encoded for intent detection and slot filling. Specifically, we encode syntactic knowledge into the Transformer encoder by jointly training it to predict the syntactic parse ancestors and part-of-speech of each token via multi-task learning. Our model is based on self-attention and feed-forward layers and does not require external syntactic information to be available at inference time. Experiments show that on two benchmark datasets, our models with only two Transformer encoder layers achieve state-of-the-art results. Compared to the previous best-performing model without pre-training, our models achieve absolute F1 score and accuracy improvements of 1.59% and 0.85% for slot filling and intent detection on the SNIPS dataset, respectively. Our models also achieve absolute F1 score and accuracy improvements of 0.1% and 0.34% for slot filling and intent detection on the ATIS dataset, respectively, over the previous best-performing model. Furthermore, visualization of the self-attention weights illustrates the benefits of incorporating syntactic information during training.
APA, Harvard, Vancouver, ISO, and other styles
26

Du, S., and R. B. Randall. "Encoder error analysis in gear transmission error measurement." Proceedings of the Institution of Mechanical Engineers, Part C: Journal of Mechanical Engineering Science 212, no. 4 (April 1, 1998): 277–85. http://dx.doi.org/10.1243/0954406981521213.

Full text
Abstract:
This paper investigates encoder measurement error and introduces a method of reducing or cancelling the encoder error from gear transmission error signals. The time domain and frequency domain analysis of the combined encoder error signals has been carried out to show the validity of the method. This method could be useful where the error of lower cost encoders is larger and undocumented and where there is limited space and only a smaller and less accurate encoder will fit.
APA, Harvard, Vancouver, ISO, and other styles
27

Wang, Zhizhong, Lei Zhao, Zhiwen Zuo, Ailin Li, Haibo Chen, Wei Xing, and Dongming Lu. "MicroAST: Towards Super-fast Ultra-Resolution Arbitrary Style Transfer." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 3 (June 26, 2023): 2742–50. http://dx.doi.org/10.1609/aaai.v37i3.25374.

Full text
Abstract:
Arbitrary style transfer (AST) transfers arbitrary artistic styles onto content images. Despite the recent rapid progress, existing AST methods are either incapable or too slow to run at ultra-resolutions (e.g., 4K) with limited resources, which heavily hinders their further applications. In this paper, we tackle this dilemma by learning a straightforward and lightweight model, dubbed MicroAST. The key insight is to completely abandon the use of cumbersome pre-trained Deep Convolutional Neural Networks (e.g., VGG) at inference. Instead, we design two micro encoders (content and style encoders) and one micro decoder for style transfer. The content encoder aims at extracting the main structure of the content image. The style encoder, coupled with a modulator, encodes the style image into learnable dual-modulation signals that modulate both intermediate features and convolutional filters of the decoder, thus injecting more sophisticated and flexible style signals to guide the stylizations. In addition, to boost the ability of the style encoder to extract more distinct and representative style signals, we also introduce a new style signal contrastive loss in our model. Compared to the state of the art, our MicroAST not only produces visually superior results but also is 5-73 times smaller and 6-18 times faster, for the first time enabling super-fast (about 0.5 seconds) AST at 4K ultra-resolutions.
APA, Harvard, Vancouver, ISO, and other styles
28

Wang, Lei, Qimin Ren, Jingang Jiang, Hongxin Zhang, and Yongde Zhang. "Recent Patents on Magnetic Encoder and its use in Rotating Mechanism." Recent Patents on Engineering 13, no. 3 (September 19, 2019): 194–200. http://dx.doi.org/10.2174/1872212112666180628145856.

Full text
Abstract:
Background: The application of magnetic encoders relieves the problem of reliably applying servo systems in vibration environments. The magnetic encoder raises the efficiency and reliability of the system, and from structural considerations, the magnetic encoder is divided into two parts: signal conversion and structural support. Objective: To improve the accuracy of the magnetic encoder, its structure is constantly being improved. Accuracy is one factor by which a magnetic encoder is evaluated, and the structure of the magnetic encoder is one of the key factors that makes a difference in its accuracy. The purpose of this paper is to study the accuracy of different magnetic encoder structures. Methods: This paper reviews various representative patents related to magnetic encoders. Results: The differences among different types of magnetic encoders were compared and analyzed and their characteristics were summarized. The main problems in their development were analyzed, the development trend was forecast, and the current and future developments of patents on magnetic encoders were discussed. Conclusion: Optimization of the magnetic encoder structure improves its accuracy. In the future, for the wide popularization of magnetic encoders, modularization, generalization, and reliability are the factors that practitioners should pay attention to, and more patents on magnetic encoders should be invented.
APA, Harvard, Vancouver, ISO, and other styles
29

Zheng, Chuanpan, Xiaoliang Fan, Cheng Wang, and Jianzhong Qi. "GMAN: A Graph Multi-Attention Network for Traffic Prediction." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 01 (April 3, 2020): 1234–41. http://dx.doi.org/10.1609/aaai.v34i01.5477.

Full text
Abstract:
Long-term traffic prediction is highly challenging due to the complexity of traffic systems and the constantly changing nature of many impacting factors. In this paper, we focus on the spatio-temporal factors, and propose a graph multi-attention network (GMAN) to predict traffic conditions for time steps ahead at different locations on a road network graph. GMAN adopts an encoder-decoder architecture, where both the encoder and the decoder consist of multiple spatio-temporal attention blocks to model the impact of the spatio-temporal factors on traffic conditions. The encoder encodes the input traffic features and the decoder predicts the output sequence. Between the encoder and the decoder, a transform attention layer is applied to convert the encoded traffic features into the sequence representations of future time steps, which serve as the input of the decoder. The transform attention mechanism models the direct relationships between historical and future time steps, which helps to alleviate the error propagation problem among prediction time steps. Experimental results on two real-world traffic prediction tasks (i.e., traffic volume prediction and traffic speed prediction) demonstrate the superiority of GMAN. In particular, in 1-hour-ahead prediction, GMAN outperforms state-of-the-art methods by up to 4% in terms of MAE. The source code is available at https://github.com/zhengchuanpan/GMAN.
APA, Harvard, Vancouver, ISO, and other styles
30

Abbas, H. H., W. A. Mahmoud, and S. K. Omran. "THE EFFECT OF TRELLIS TERMINATION ON THE PERFORMANCE OF TURBO CODE." Journal of Engineering 9, no. 01 (March 1, 2003): 25–34. http://dx.doi.org/10.31026/j.eng.2003.01.03.

Full text
Abstract:
This paper introduces a new class of convolutional codes called Turbo Codes. Turbo Codes were shown to achieve Bit-Error-Rate (BER) performance near the Shannon limit. The Turbo Code encoder is built using a parallel concatenation of two Recursive Systematic Convolutional (RSC) codes. In this paper, two solutions to the trellis termination problem are presented. The first solution's encoder uses a terminated upper RSC encoder and an unterminated lower RSC encoder, while the second solution's encoder terminates both the upper and lower RSC encoders. The performance of the two solutions is tested under different circumstances and the results are interesting.
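As a sketch of the RSC building block and of trellis termination itself: in a recursive encoder the tail bits cannot simply be zeros; each tail input is chosen to cancel the feedback so the register returns to the all-zero state in m steps. The memory-2 code with octal generators 7/5 below is an illustrative choice, not necessarily the polynomials used in the paper:

```python
def rsc_encode(bits, terminate=True):
    """Rate-1/2 recursive systematic convolutional encoder, memory 2
    (feedback 1 + D + D^2, feedforward 1 + D^2, i.e. octal 7/5).
    With terminate=True, two tail bits drive the trellis back to state 00."""
    s1 = s2 = 0
    systematic, parity = [], []

    def step(u):
        nonlocal s1, s2
        a = u ^ s1 ^ s2        # recursive feedback bit entering the register
        systematic.append(u)
        parity.append(a ^ s2)  # feedforward taps 1 + D^2
        s1, s2 = a, s1

    for u in bits:
        step(u)
    if terminate:
        for _ in range(2):
            step(s1 ^ s2)      # tail bit chosen so the feedback bit a becomes 0
    return systematic, parity, (s1, s2)
```

The two solutions compared in the paper amount to calling this termination step on one or on both constituent encoders.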
APA, Harvard, Vancouver, ISO, and other styles
31

Shang, Zengqiang, Peiyang Shi, Pengyuan Zhang, Li Wang, and Guangying Zhao. "HierTTS: Expressive End-to-End Text-to-Waveform Using a Multi-Scale Hierarchical Variational Auto-Encoder." Applied Sciences 13, no. 2 (January 8, 2023): 868. http://dx.doi.org/10.3390/app13020868.

Full text
Abstract:
End-to-end text-to-speech (TTS) models that directly generate waveforms from text are gaining popularity. However, existing end-to-end models are still not natural enough in their prosodic expressiveness. Additionally, previous studies on improving the expressiveness of TTS have mainly focused on acoustic models. There is a lack of research on enhancing expressiveness in an end-to-end framework. Therefore, we propose HierTTS, a highly expressive end-to-end text-to-waveform generation model. It deeply couples the hierarchical properties of speech with hierarchical variational auto-encoders and models multi-scale latent variables, at the frame, phone, subword, word, and sentence levels. The hierarchical encoder encodes the speech signal from fine-grained features into coarse-grained latent variables. In contrast, the hierarchical decoder generates fine-grained features conditioned on the coarse-grained latent variables. We propose a staged KL-weighted annealing strategy to prevent hierarchical posterior collapse. Furthermore, we employ a hierarchical text encoder to extract linguistic information at different levels and act on both the encoder and the decoder. Experiments show that our model performs closer to natural speech in prosody expressiveness and has better generative diversity.
APA, Harvard, Vancouver, ISO, and other styles
32

Dligach, Dmitriy, Majid Afshar, and Timothy Miller. "Toward a clinical text encoder: pretraining for clinical natural language processing with applications to substance misuse." Journal of the American Medical Informatics Association 26, no. 11 (June 24, 2019): 1272–78. http://dx.doi.org/10.1093/jamia/ocz072.

Full text
Abstract:
Abstract Objective Our objective is to develop algorithms for encoding clinical text into representations that can be used for a variety of phenotyping tasks. Materials and Methods Obtaining large datasets to take advantage of highly expressive deep learning methods is difficult in clinical natural language processing (NLP). We address this difficulty by pretraining a clinical text encoder on billing code data, which is typically available in abundance. We explore several neural encoder architectures and deploy the text representations obtained from these encoders in the context of clinical text classification tasks. While our ultimate goal is learning a universal clinical text encoder, we also experiment with training a phenotype-specific encoder. A universal encoder would be more practical, but a phenotype-specific encoder could perform better for a specific task. Results We successfully train several clinical text encoders, establish a new state-of-the-art on comorbidity data, and observe good performance gains on substance misuse data. Discussion We find that pretraining using billing codes is a promising research direction. The representations generated by this type of pretraining have universal properties, as they are highly beneficial for many phenotyping tasks. Phenotype-specific pretraining is a viable route for trading the generality of the pretrained encoder for better performance on a specific phenotyping task. Conclusions We successfully applied our approach to many phenotyping tasks. We conclude by discussing potential limitations of our approach.
APA, Harvard, Vancouver, ISO, and other styles
33

Premananda, B. S., T. N. Dhanush, and Vaishnavi S. Parashar. "Area and Energy Efficient QCA Based Compact Serial Concatenated Convolutional Code Encoder." Journal of Physics: Conference Series 2161, no. 1 (January 1, 2022): 012025. http://dx.doi.org/10.1088/1742-6596/2161/1/012025.

Full text
Abstract:
Abstract Quantum-dot Cellular Automata (QCA) is a transistor-less technology known for its low power consumption and high clock rate. The Serial Concatenated Convolutional Coding (SCCC) encoder is a class of forward error correction. This paper presents the QCA implementation of the outer encoder as a (7, 4, 1) Bose-Chaudhuri-Hocquenghem encoder that serves the purpose of burst error correction, a pseudo-random interleaver used for permuting the systematic code words, and finally the inner encoder, which is used for the correction of random errors. Two different architectures of the SCCC encoder are proposed and discussed in this study: the first is based on external clock signals, whereas the second is based on internal clock generation. The sub-blocks of the SCCC encoder, namely the outer encoder, pseudo-random interleaver, and inner encoder, are optimized, implemented, and simulated using QCADesigner and then integrated to design a compact SCCC encoder. The energy dissipation is computed using QCADesigner-E. The proposed SCCC encoder reduces the total area by 46% and energy dissipation by 50% when compared to the reference SCCC encoder. The proposed encoders are more efficient in terms of cell count, energy dissipation, and area occupancy.
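The (7, 4, 1) BCH outer code named above is the classic Hamming code. A bit-level sketch of its systematic encoding, assuming the standard generator polynomial x^3 + x + 1 (the abstract does not spell out the polynomial, so treat this as the conventional construction rather than the authors' exact circuit):

```python
def bch_7_4_encode(data):
    """Systematic (7,4) BCH (Hamming) encoder, generator g(x) = x^3 + x + 1.
    data: 4 bits, MSB first; returns the 7-bit codeword data + 3 parity bits."""
    g = 0b1011                           # x^3 + x + 1
    reg = 0
    for bit in list(data) + [0, 0, 0]:   # divide data(x) * x^3 by g(x)
        reg = (reg << 1) | bit
        if reg & 0b1000:
            reg ^= g
    return list(data) + [(reg >> 2) & 1, (reg >> 1) & 1, reg & 1]
```

The parity bits are the remainder of the shifted message polynomial, which is exactly what a hardware LFSR computes cycle by cycle.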
APA, Harvard, Vancouver, ISO, and other styles
34

Chen, Yuan, Zikang Liu, and Juwei Zhang. "Neural Machine Translation of Electrical Engineering with Fusion of Memory Information." Applied Sciences 13, no. 18 (September 13, 2023): 10279. http://dx.doi.org/10.3390/app131810279.

Full text
Abstract:
This paper proposes a new neural machine translation model for electrical engineering that combines a transformer with gated recurrent unit (GRU) networks. By fusing global information and memory information, the model effectively improves the performance of low-resource neural machine translation. Unlike traditional transformers, our proposed model includes two different encoders: a global information encoder, which focuses on contextual information, and a memory encoder, which is responsible for capturing recurrent memory information. The model with these two types of attention can encode both global and memory information and learn richer semantic knowledge. Because transformers require a global attention calculation for each word position, the time and space complexity both grow quadratically with the length of the source language sequence; when the source sequence becomes too long, the transformer's performance declines sharply. Therefore, we propose a GRU-based memory information encoder to remedy this drawback. The model proposed in this paper achieves a maximum improvement of 2.04 BLEU points over the baseline model in the low-resource field of electrical engineering.
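The complexity argument above can be made concrete with a simple count (a back-of-the-envelope sketch, not a benchmark of the paper's model):

```python
def attention_cost(n):
    """Self-attention forms an n-by-n score matrix: quadratic in sequence length."""
    return n * n

def gru_cost(n):
    """A recurrent (GRU) pass touches each position once: linear in length."""
    return n

# Doubling the source length quadruples attention work but only doubles GRU work.
assert attention_cost(512) // attention_cost(256) == 4
assert gru_cost(512) // gru_cost(256) == 2
```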
APA, Harvard, Vancouver, ISO, and other styles
35

Kim, Bumsoo, Jinhyung Kim, Yeonsik Jo, and Seung Hwan Kim. "Expediting Contrastive Language-Image Pretraining via Self-Distilled Encoders." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 3 (March 24, 2024): 2732–40. http://dx.doi.org/10.1609/aaai.v38i3.28052.

Full text
Abstract:
Recent advances in vision language pretraining (VLP) have been largely attributed to the large-scale data collected from the web. However, uncurated datasets contain weakly correlated image-text pairs, causing data inefficiency. To address the issue, knowledge distillation has been explored at the expense of extra image and text momentum encoders to generate teaching signals for misaligned image-text pairs. In this paper, our goal is to resolve the misalignment problem with an efficient distillation framework. To this end, we propose ECLIPSE: Expediting Contrastive Language-Image Pretraining with Self-distilled Encoders. ECLIPSE features a distinctive distillation architecture wherein a shared text encoder is utilized between an online image encoder and a momentum image encoder. This strategic design choice enables the distillation to operate within a unified projected space of text embedding, resulting in better performance. Based on the unified text embedding space, ECLIPSE compensates for the additional computational cost of the momentum image encoder by expediting the online image encoder. Through our extensive experiments, we validate that there is a sweet spot between expedition and distillation where the partial view from the expedited online image encoder interacts complementarily with the momentum teacher. As a result, ECLIPSE outperforms its counterparts while achieving substantial acceleration in inference speed.
APA, Harvard, Vancouver, ISO, and other styles
36

Sam, D. S. Shylu, P. Sam Paul, Jennifer ,. Elizah, Nithyasri Nithyasri, Snehitha Snehitha, Akansha Singh, and Vijendra Vijendra. "A Novel low power 2-D to 3-D Array Priority Encoder using Split-Logic Technique for Data Path Applications." WSEAS TRANSACTIONS ON SYSTEMS AND CONTROL 17 (January 7, 2022): 42–49. http://dx.doi.org/10.37394/23203.2022.17.5.

Full text
Abstract:
In this work, a scalable low-power 64-bit priority encoder is designed using a two-dimensional-array to three-dimensional-array conversion and a split-logic technique, producing a 6-bit output. With this method, a high-performance priority encoder can be achieved. In a conventional priority encoder, the input is a single bit vector, but in the 3-D-array priority encoder, the inputs are specified in matrix form: the I-bit input is split into M x N bits, like a 2-D matrix. In the 3-D-array priority encoder, the output emerges in three directions, unlike a traditional priority encoder, where the output is received from one direction. The improvement is achieved by implementing the two-dimensional-array to three-dimensional-array technique. Simulation results show that the proposed 2-D and 3-D priority encoders consume 0.087039 mW and 0.184014 mW, respectively, which is less than the conventional priority encoder. The priority encoders are simulated and synthesized in VHDL using Xilinx Vivado version 2019.2 and the Oasys synthesis tool.
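A behavioral sketch of the matrix-decomposition idea: split the 64 inputs into an 8 x 8 array, pick the first non-empty row, then the first set bit within it, so the 6-bit result is 3 row bits plus 3 column bits. The bit ordering (index 0 as highest priority) and the function name are assumptions for illustration:

```python
def priority_encode_64(bits):
    """64-bit priority encoder via an 8x8 (2-D) decomposition.
    bits: 64 ints, index 0 treated as the highest priority.
    Returns (valid, 6-bit index): upper 3 bits select the row,
    lower 3 bits select the column within that row."""
    for r in range(8):
        row = bits[r * 8:(r + 1) * 8]
        if any(row):                 # row-level priority: first non-empty row
            c = row.index(1)         # column-level priority within that row
            return True, (r << 3) | c
    return False, 0
```

In hardware, the row-level "any" reduction and the per-row encoders run in parallel, which is where the speed and power benefit of the decomposition comes from.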
APA, Harvard, Vancouver, ISO, and other styles
37

Zhao, Changhai, Qiuhua Wan, Lihui Liang, and Ying Sun. "Full Digital Processing System of Photoelectric Encoder." Sensors 19, no. 22 (November 9, 2019): 4892. http://dx.doi.org/10.3390/s19224892.

Full text
Abstract:
A photoelectric signal, output by a photoelectric receiver, may detrimentally change after the photoelectric encoder is used for a period of time or when the environment changes; this will directly affect the accuracy of the encoder and lead to fatal errors in the encoder. To maintain its high accuracy, we propose an encoder that can work in a variety of environments and that adopts full digital processing. A signal current that travels from the receiver of a photoelectric encoder is converted into a voltage signal via current limiting resistance. All signals are directly processed in the data processor component of the system. The encoder converts all the signals into its normalized counterpart. Then, the angle of the encoder is calculated using the normalized value. The calculated encoder angle compensates for any error. The final encoder angle is obtained, and the encoder angle is output accordingly. Experiments show that this method can greatly reduce the encoder’s volume. This method also reduces the encoder error from 167 arcseconds to 53 arcseconds. The encoder can still maintain a high accuracy during environmental changes, especially in harsh environments where there are higher accuracy requirements.
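The abstract does not detail the angle computation after normalization; a common approach for a sine/cosine channel pair is offset and amplitude correction followed by atan2. The sketch below shows that conventional scheme, not necessarily the authors' exact compensation pipeline:

```python
import math

def subdivided_phase(sin_v, cos_v, sin_off, cos_off, sin_amp, cos_amp):
    """Recover the phase within one grating period from a sine/cosine pair:
    remove the channel offsets, equalize the amplitudes, then apply atan2.
    Returns the phase in [0, 2*pi)."""
    s = (sin_v - sin_off) / sin_amp   # normalized sine channel
    c = (cos_v - cos_off) / cos_amp   # normalized cosine channel
    return math.atan2(s, c) % (2 * math.pi)
```

Keeping the normalization parameters updated as the signals drift (with temperature or aging) is what lets a fully digital system hold its accuracy in harsh environments.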
APA, Harvard, Vancouver, ISO, and other styles
38

Gurauskis, Donatas, Artūras Kilikevičius, and Sergejus Borodinas. "Experimental Investigation of Linear Encoder’s Subdivisional Errors under Different Scanning Speeds." Applied Sciences 10, no. 5 (March 4, 2020): 1766. http://dx.doi.org/10.3390/app10051766.

Full text
Abstract:
Optical encoders are widely used in applications requiring precise displacement measurement and fluent motion control. To reach high positioning accuracy and repeatability, and to create a more stable speed-control loop, essential attention must be directed to the subdivisional error (SDE) of the used encoder. This error influences the interpolation process and restricts the ability to achieve a high resolution. The SDE could be caused by various factors, such as the particular design of the reading head and the optical scanning principle, quality of the measuring scale, any kind of relative orientation changes between the optical components caused by mechanical vibrations or deformations, or scanning speed. If the distorted analog signals are not corrected before interpolation, it is very important to know the limitations of the used encoder. The methodology described in this paper could be used to determine the magnitude of an SDE and its trend. This method is based on a constant-speed test and does not require high-accuracy reference. The performed experimental investigation of the standard optical linear encoder SDE under different scanning speeds revealed the linear relationship between the tested encoder’s traversing velocity and the error value. A more detailed investigation of the obtained results was done on the basis of fast Fourier transformation (FFT) to understand the physical nature of the SDE, and to consider how to improve the performance of the encoder.
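The constant-speed method described above can be sketched numerically: subtract the best-fit uniform motion from the sampled positions and inspect the residual's spectrum, where the SDE appears at the scanning frequency v / pitch and its harmonics. This is a generic sketch of the idea under those assumptions, not the authors' exact processing chain:

```python
import numpy as np

def sde_spectrum(positions, dt, pitch):
    """Estimate subdivisional error from a constant-speed run.
    positions: sampled encoder readouts; dt: sample period; pitch: signal period.
    Returns (freqs, amplitudes, expected SDE frequency v / pitch)."""
    t = np.arange(len(positions)) * dt
    v, x0 = np.polyfit(t, positions, 1)          # best-fit uniform motion
    residual = positions - (v * t + x0)          # position error signal
    amps = 2 * np.abs(np.fft.rfft(residual)) / len(residual)
    freqs = np.fft.rfftfreq(len(residual), dt)
    return freqs, amps, v / pitch
```

Because the reference is the fitted line itself, no high-accuracy external reference is needed, matching the paper's stated advantage of the constant-speed test.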
APA, Harvard, Vancouver, ISO, and other styles
39

Ramanna, Dasari, and V. Ganesan. "Low-Power VLSI Implementation of Novel Hybrid Adaptive Variable-Rate and Recursive Systematic Convolutional Encoder for Resource Constrained Wireless Communication Systems." International Journal of Electrical and Electronics Research 10, no. 3 (September 30, 2022): 523–28. http://dx.doi.org/10.37391/ijeer.100320.

Full text
Abstract:
In the modern wireless communication system, digital technology has seen tremendous growth, and all communication channels are slowly moving towards digital form. Wireless communication has to provide reliable and efficient transfer of information between transmitter and receiver over a wireless channel, and channel coding is the best practical approach to delivering reliable communication to end users. Many conventional encoder and decoder units are used as error detection and correction codes in digital communication systems to overcome multiple transient errors. The proposed convolutional encoder consists of both a Recursive Systematic Convolutional (RSC) encoder and an Adaptive Variable-Rate Convolutional (AVRC) encoder. The AVRC encoder improves the bit error rate performance and is well suited to transferring data in a power-constrained wireless system. The RSC encoder also reduces the bit error rate and improves throughput by employing a trellis termination strategy. Here, the AVRC encoder acquires the channel state information and feeds the data into a fixed-rate convolutional encoder and rate adaptor followed by a buffer device. A hybrid encoder combines the AVRC and RSC encoder outputs serially and in parallel, producing strongly encoded data for the modulator in the communication system. A modified turbo code is also obtained by placing an interleaver between the two encoder units, building a stronger code word for the system. Finally, the conventional encoder system is compared and analyzed against the proposed method in terms of the number of LUTs, gates, clock cycles, slices, area, power, bit error rate, and throughput.
APA, Harvard, Vancouver, ISO, and other styles
40

Gurauskis, Donatas, Krzysztof Przystupa, Artūras Kilikevičius, Mikołaj Skowron, Jonas Matijošius, Joanna Michałowska, and Kristina Kilikevičienė. "Performance Analysis of an Experimental Linear Encoder’s Reading Head under Different Mounting and Dynamic Conditions." Energies 15, no. 16 (August 22, 2022): 6088. http://dx.doi.org/10.3390/en15166088.

Full text
Abstract:
The performance of an optical linear encoder is described and evaluated by certain parameters such as its resolution, accuracy and repeatability. The best encoder for a particular application, just like any other sensor, is usually selected according to these parameters. There are, however, many side effects that have a direct influence on the optimal operation of an encoder. In order to understand how to minimize these harmful effects, a deeper knowledge of an encoder's performance and a method for determining these factors are necessary. The main aspects of an encoder's accuracy, resolution and repeatability are briefly reviewed in this paper. The experimental reading head for a Moiré effect-based optical linear encoder, discussed and developed in previous work, is used for the experimental analysis of the influence of different reading head designs on an encoder's performance under various mounting inaccuracies and dynamic conditions.
APA, Harvard, Vancouver, ISO, and other styles
41

Sandoval-Ruiz, Cecilia. "RS(n,k) Encoder based on LFCS." Revista Facultad de Ingeniería Universidad de Antioquia, no. 64 (October 3, 2012): 68–78. http://dx.doi.org/10.17533/udea.redin.13116.

Full text
Abstract:
This article presents the design of a Reed-Solomon encoder circuit based on a concurrent LFCS (Linear Structure Concurrent Feedback), allowing the generation of code redundancy symbols in parallel: provided that the k information symbols to be encoded are supplied simultaneously, the encoder delivers the corresponding redundancy symbols at its output. To achieve this, a generalized mathematical model describing the behavior of the encoder was developed, and a Reed-Solomon encoder was configured in the VHDL hardware description language, taking the RS(7,3) code as a case study. The design was simulated to validate the proposed operation, and finally the sequential version of the encoder implementation was compared with the LFCS-based version, obtaining a reduction in hardware components and an optimization of response speed and power consumption. In conclusion, the proposed encoder design validates the generalized concurrent model through its correspondence with the LFCS architecture.
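For context, here is a minimal sketch of systematic RS(7,3) encoding over GF(2³), using the primitive polynomial x³ + x + 1 and generator roots α⁰…α³ (root conventions vary between implementations). This shows the classic sequential polynomial division; the paper's LFCS computes the same redundancy symbols concurrently:

```python
# GF(8) log/antilog tables, primitive polynomial x^3 + x + 1
EXP, LOG = [0] * 14, [0] * 8
v = 1
for i in range(7):
    EXP[i], LOG[v] = v, i
    v <<= 1
    if v & 0b1000:
        v ^= 0b1011            # reduce modulo x^3 + x + 1
for i in range(7, 14):
    EXP[i] = EXP[i - 7]        # wrap so LOG sums need no modulo

def gf_mul(a, b):
    return 0 if 0 in (a, b) else EXP[LOG[a] + LOG[b]]

def poly_mul(p, q):
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] ^= gf_mul(a, b)   # + is XOR in GF(2^m)
    return r

def rs_encode(msg, nsym=4):
    """Systematic RS(7,3): append the remainder of msg(x)*x^nsym / g(x)."""
    g = [1]
    for i in range(nsym):
        g = poly_mul(g, [1, EXP[i]])   # g(x) = prod (x + alpha^i)
    rem = list(msg) + [0] * nsym
    for i in range(len(msg)):          # long division; g[0] == 1
        c = rem[i]
        if c:
            for j in range(1, len(g)):
                rem[i + j] ^= gf_mul(g[j], c)
    return list(msg) + rem[-nsym:]

def poly_eval(p, x):
    """Horner evaluation over GF(8); a valid codeword is 0 at each root."""
    y = 0
    for c in p:
        y = gf_mul(y, x) ^ c
    return y
```

Because the codeword is m(x)·x⁴ plus the division remainder, it is divisible by g(x), so it evaluates to zero at every generator root, which is how the test below checks it.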
APA, Harvard, Vancouver, ISO, and other styles
42

Yuan, Ruijia, Tianjiao Xie, and Jianhua Zhang. "Implementation of Rate-Compatible QC-LDPC Encoder Based on FPGA." Highlights in Science, Engineering and Technology 1 (June 14, 2022): 453–58. http://dx.doi.org/10.54097/hset.v1i.516.

Full text
Abstract:
A rate-compatible LDPC encoder based on a quasi-cyclic generator matrix is proposed. The encoder partitions and controls access to the ROM addresses, so it can be compatible with the generator-matrix cyclic shift vectors of a variety of LDPC codes. By adding routing options to the register cyclic-shift circuit, it is compatible with matrix blocks of different sizes. Through adjustment of the initial shift count and truncation of the check-bit output, virtual filling and shortening of the LDPC code are realized, which further extends the code lengths and code rates the encoder supports. An LDPC encoder compatible with four code rates is implemented on a Xilinx Virtex5 xc5vfx130t FPGA. Compared with the existing design, this encoder requires only slightly more hardware resources than a single encoder to achieve the same data throughput; compared with implementing all four encoders separately, more than 40% of the hardware resources can be saved.
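Quasi-cyclic encoding reduces to XOR-ing cyclically shifted message sub-blocks, which is why a shift-value table plus a rotation circuit suffices in hardware. A toy sketch (the shift table below is illustrative, not one of the paper's code matrices):

```python
def cyclic_shift(bits, s):
    """Right-rotate a bit-block by s positions."""
    s %= len(bits)
    if s == 0:
        return list(bits)
    return bits[-s:] + bits[:-s]

def qc_encode_parity(msg_blocks, shift_table):
    """Compute parity blocks of a quasi-cyclic code from circulant shifts.

    shift_table[i][j] is the shift of the circulant linking message
    block j to parity block i, or None for an all-zero block. A real
    QC-LDPC encoder derives this table from the parity-check matrix.
    """
    size = len(msg_blocks[0])
    parities = []
    for row in shift_table:
        p = [0] * size
        for blk, s in zip(msg_blocks, row):
            if s is None:
                continue
            p = [a ^ b for a, b in zip(p, cyclic_shift(blk, s))]
        parities.append(p)
    return parities
```

Changing the block size or the shift table is all that rate/length compatibility requires in this model, mirroring the ROM-partitioning idea in the abstract.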
APA, Harvard, Vancouver, ISO, and other styles
43

Sun, Jun, Junbo Zhang, Xuesong Gao, Mantao Wang, Dinghua Ou, Xiaobo Wu, and Dejun Zhang. "Fusing Spatial Attention with Spectral-Channel Attention Mechanism for Hyperspectral Image Classification via Encoder–Decoder Networks." Remote Sensing 14, no. 9 (April 19, 2022): 1968. http://dx.doi.org/10.3390/rs14091968.

Full text
Abstract:
In recent years, convolutional neural networks (CNNs) have been widely used in hyperspectral image (HSI) classification. However, feature extraction on hyperspectral data still faces numerous challenges. Existing methods cannot extract spatial and spectral-channel contextual information in a targeted manner. In this paper, we propose an encoder–decoder network that fuses spatial attention and spectral-channel attention for HSI classification, evaluated on three public HSI datasets, to tackle these issues. In terms of feature-information fusion, a multi-source attention mechanism including spatial and spectral-channel attention is proposed to encode the spatial and spectral multi-channel contextual information. Moreover, three fusion strategies are proposed to effectively utilize spatial and spectral-channel attention. They are direct aggregation, aggregation on the feature space, and the Hadamard product. In terms of network development, an encoder–decoder framework is employed for hyperspectral image classification. The encoder is a hierarchical transformer pipeline that can extract long-range context information. Both shallow local features and rich global semantic information are encoded through hierarchical feature expressions. The decoder consists of suitable upsampling, skip-connection, and convolution blocks, which fuse multi-scale features efficiently. Compared with other state-of-the-art methods, our approach achieves better performance in hyperspectral image classification.
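The three fusion strategies named above can be illustrated with a toy sketch over flat feature vectors. In the actual network these operate on attention-weighted tensors, and the concatenation path is followed by a learned projection, which is omitted here:

```python
def fuse(spatial, spectral, mode="sum"):
    """Fuse a spatial-attention feature with a spectral-channel-attention
    feature of the same length. Modes mirror the three strategies named
    in the abstract; vectors here stand in for full feature maps."""
    if mode == "sum":                  # direct aggregation
        return [a + b for a, b in zip(spatial, spectral)]
    if mode == "hadamard":             # element-wise (Hadamard) product
        return [a * b for a, b in zip(spatial, spectral)]
    if mode == "concat":               # aggregation on the feature space:
        return spatial + spectral      # concatenate; a real model then
                                       # projects back to the base width
    raise ValueError(f"unknown fusion mode: {mode}")
```

The Hadamard variant acts as a gate (each channel scales the other), whereas summation and concatenation preserve both signals additively.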
APA, Harvard, Vancouver, ISO, and other styles
44

Augustine, Jeena. "Emotion Recognition in Speech Using with SVM, DSVM and Auto-Encoder." International Journal for Research in Applied Science and Engineering Technology 9, no. 8 (August 31, 2021): 1021–26. http://dx.doi.org/10.22214/ijraset.2021.37545.

Full text
Abstract:
Abstract: Emotion recognition from speech is one of the most important subdomains in the field of signal processing. In this work, our system is a two-stage approach, namely feature extraction and a classification engine. Firstly, two sets of features are investigated: thirty-nine Mel-frequency cepstral coefficients (MFCC) and sixty-five MFCC features extracted based on the work of [20]. Secondly, we use the Support Vector Machine (SVM) as the main classifier engine, since it is the most common technique in the field of speech recognition. Besides that, we investigate the importance of recent advances in machine learning, including deep kernel learning, as well as various types of auto-encoders (the basic auto-encoder and the stacked auto-encoder). A large set of experiments is conducted on the SAVEE audio dataset. The experimental results show that the DSVM technique outperforms the standard SVM, with classification rates of 69.84% and 68.25% using thirty-nine MFCC, respectively. In addition, the auto-encoder technique outperforms the standard SVM, yielding a classification rate of 73.01%. Keywords: Emotion recognition, MFCC, SVM, Deep Support Vector Machine, Basic auto-encoder, Stacked auto-encoder
APA, Harvard, Vancouver, ISO, and other styles
45

Bashmal, Laila, Yakoub Bazi, Mohamad Mahmoud Al Rahhal, Mansour Zuair, and Farid Melgani. "CapERA: Captioning Events in Aerial Videos." Remote Sensing 15, no. 8 (April 18, 2023): 2139. http://dx.doi.org/10.3390/rs15082139.

Full text
Abstract:
In this paper, we introduce the CapERA dataset, which upgrades the Event Recognition in Aerial Videos (ERA) dataset to aerial video captioning. The newly proposed dataset aims to advance visual-language-understanding tasks for UAV videos by providing each video with diverse textual descriptions. To build the dataset, 2864 aerial videos are manually annotated with a caption that includes information such as the main event, object, place, action, numbers, and time. More captions are automatically generated from the manual annotation to take into account as much as possible the variation in describing the same video. Furthermore, we propose a captioning model for the CapERA dataset to provide benchmark results for UAV video captioning. The proposed model is based on the encoder–decoder paradigm with two configurations to encode the video. The first configuration encodes the video frames independently by an image encoder. Then, a temporal attention module is added on the top to consider the temporal dynamics between features derived from the video frames. In the second configuration, we directly encode the input video using a video encoder that employs factorized space–time attention to capture the dependencies within and between the frames. For generating captions, a language decoder is utilized to autoregressively produce the captions from the visual tokens. The experimental results under different evaluation criteria show the challenges of generating captions from aerial videos. We expect that the introduction of CapERA will open interesting new research avenues for integrating natural language processing (NLP) with UAV video understanding.
APA, Harvard, Vancouver, ISO, and other styles
46

Xu, Xiao, Chenfei Wu, Shachar Rosenman, Vasudev Lal, Wanxiang Che, and Nan Duan. "BridgeTower: Building Bridges between Encoders in Vision-Language Representation Learning." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 9 (June 26, 2023): 10637–47. http://dx.doi.org/10.1609/aaai.v37i9.26263.

Full text
Abstract:
Vision-Language (VL) models with the Two-Tower architecture have dominated visual-language representation learning in recent years. Current VL models either use lightweight uni-modal encoders and learn to extract, align and fuse both modalities simultaneously in a deep cross-modal encoder, or feed the last-layer uni-modal representations from the deep pre-trained uni-modal encoders into the top cross-modal encoder. Both approaches potentially restrict vision-language representation learning and limit model performance. In this paper, we propose BridgeTower, which introduces multiple bridge layers that build a connection between the top layers of uni-modal encoders and each layer of the cross-modal encoder. This enables effective bottom-up cross-modal alignment and fusion between visual and textual representations of different semantic levels of pre-trained uni-modal encoders in the cross-modal encoder. Pre-trained with only 4M images, BridgeTower achieves state-of-the-art performance on various downstream vision-language tasks. In particular, on the VQAv2 test-std set, BridgeTower achieves an accuracy of 78.73%, outperforming the previous state-of-the-art model METER by 1.09% with the same pre-training data and almost negligible additional parameters and computational costs. Notably, when further scaling the model, BridgeTower achieves an accuracy of 81.15%, surpassing models that are pre-trained on orders-of-magnitude larger datasets. Code and checkpoints are available at https://github.com/microsoft/BridgeTower.
APA, Harvard, Vancouver, ISO, and other styles
47

Sadiq, B. J. S., V. Yu Tsviatkou, and M. N. Bobov. "Combined coding of bit planes of images." «System analysis and applied information science», no. 4 (December 30, 2019): 32–37. http://dx.doi.org/10.21122/2309-4923-2019-4-32-37.

Full text
Abstract:
The aim of this work is to reduce the computational complexity of lossless compression in the spatial domain through combined coding (arithmetic and run-length encoding) of the bit runs in bit planes. Known effective compression encoders encode the bit planes of the image or of the transform coefficients separately, which increases computational complexity because each pixel is processed multiple times. The paper proposes rules for combined coding and combined encoders, with tunable and constant structures, for the bit planes of pixel differences of images; these have lower computational complexity and the same compression ratio as an arithmetic encoder of bit planes.
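The bit-plane and run representation that combined coding builds on can be sketched as follows. Only the plane extraction and run-length stage is shown; the arithmetic-coding stage and the paper's proposed combination rules are not reproduced here:

```python
def bit_plane(pixels, k):
    """Extract bit plane k (0 = least significant) from pixel values."""
    return [(p >> k) & 1 for p in pixels]

def run_lengths(bits):
    """Run-length encode a bit plane as (bit, run length) pairs.

    A combined coder would pass these runs on to an arithmetic coder
    instead of coding every bit of every plane individually.
    """
    runs = []
    for b in bits:
        if runs and runs[-1][0] == b:
            runs[-1][1] += 1          # extend the current run
        else:
            runs.append([b, 1])       # start a new run
    return [(b, n) for b, n in runs]
```

Planes of pixel differences tend to be dominated by long zero runs in the upper planes, which is what makes run-based coding cheap there.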
APA, Harvard, Vancouver, ISO, and other styles
48

Dakwale, Praveen, and Christof Monz. "Convolutional over Recurrent Encoder for Neural Machine Translation." Prague Bulletin of Mathematical Linguistics 108, no. 1 (June 1, 2017): 37–48. http://dx.doi.org/10.1515/pralin-2017-0007.

Full text
Abstract:
Neural machine translation is a recently proposed approach which has shown results competitive with traditional MT approaches. Standard neural MT is an end-to-end neural network where the source sentence is encoded by a recurrent neural network (RNN) called the encoder and the target words are predicted using another RNN known as the decoder. Recently, various models have been proposed which replace the RNN encoder with a convolutional neural network (CNN). In this paper, we propose to augment the standard RNN encoder in NMT with additional convolutional layers in order to capture wider context in the encoder output. Experiments on English-to-German translation demonstrate that our approach can achieve significant improvements over a standard RNN-based baseline.
APA, Harvard, Vancouver, ISO, and other styles
49

Paredes, Ferran, Cristian Herrojo, and Ferran Martín. "Position Sensors for Industrial Applications Based on Electromagnetic Encoders." Sensors 21, no. 8 (April 13, 2021): 2738. http://dx.doi.org/10.3390/s21082738.

Full text
Abstract:
Optical and magnetic linear/rotary encoders are well-known systems traditionally used in industry for the accurate measurement of linear/angular displacements and velocities. Recently, a different approach to the implementation of linear/rotary encoders has been proposed. This approach uses electromagnetic signals, and the working principle of these electromagnetic encoders is very similar to that of optical encoders, i.e., pulse counting. Specifically, a transmission-line-based structure fed by a harmonic signal tuned to a certain frequency, the stator, is perturbed by encoder motion. The encoder consists of a linear or circular chain (or chains) of inclusions (metallic, dielectric, or apertures) on a rigid or flexible dielectric substrate made of different materials, including plastics, organic materials, rubber, etc. The harmonic signal is amplitude modulated by the encoder chain, and the envelope function contains the information on position and velocity. The paper mainly focuses on linear encoders based on metallic and dielectric inclusions. Moreover, it is shown that synchronous electromagnetic encoders, able to provide the quasi-absolute position (plus the velocity and direction of motion in some cases), can be implemented. Several prototype examples are reviewed in the paper, including encoders implemented by means of additive processes, such as 3D-printed and screen-printed encoders.
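The pulse-counting principle described above can be sketched minimally: threshold the demodulated envelope, count rising edges, and convert the count to displacement and mean velocity. The threshold, pitch, and sampling interval below are illustrative parameters, not values from the paper:

```python
def count_pulses(envelope, threshold=0.5):
    """Count rising edges of a thresholded envelope signal."""
    count, prev = 0, envelope[0] > threshold
    for v in envelope[1:]:
        cur = v > threshold
        if cur and not prev:
            count += 1
        prev = cur
    return count

def position_and_velocity(envelope, pitch_mm, sample_dt_s):
    """Pulses -> displacement; displacement over time -> mean velocity.

    pitch_mm is the spatial period of the encoder's inclusion chain,
    so each pulse corresponds to one period of relative motion.
    """
    displacement = count_pulses(envelope) * pitch_mm
    duration = (len(envelope) - 1) * sample_dt_s
    return displacement, displacement / duration
```

A synchronous (quasi-absolute) encoder additionally modulates the chain so that each pulse carries an identifying code, rather than being counted anonymously as here.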
APA, Harvard, Vancouver, ISO, and other styles
50

Khursheed, Shahzad, Nasreen Badruddin, Varun Jeoti, Dejan Vukobratovic, and Manzoor Ahmed Hashmani. "Low Computational Coding-Efficient Distributed Video Coding: Adding a Decision Mode to Limit Channel Coding Load." Entropy 25, no. 2 (January 28, 2023): 241. http://dx.doi.org/10.3390/e25020241.

Full text
Abstract:
Distributed video coding (DVC) is based on distributed source coding (DSC) concepts in which video statistics are used partially or completely at the decoder rather than the encoder. The rate-distortion (RD) performance of distributed video codecs substantially lags behind that of conventional predictive video coding. Several techniques and methods are employed in DVC to close this performance gap and achieve high coding efficiency while maintaining low encoder computational complexity. However, it is still challenging to achieve coding efficiency while limiting the computational complexity of the encoding and decoding processes. The deployment of distributed residual video coding (DRVC) improves coding efficiency, but significant enhancements are still required to reduce these gaps. This paper proposes the QUAntized Transform ResIdual Decision (QUATRID) scheme, which improves coding efficiency by deploying a Quantized Transform Decision Mode (QUAM) at the encoder. The main contribution of the proposed QUATRID scheme is the design and integration into DRVC of a novel QUAM method that effectively skips the zero quantized transform (QT) blocks, thus limiting the number of input bit planes to be channel encoded and consequently reducing both the channel encoding and decoding computational complexity. Moreover, an online correlation noise model (CNM) is specifically designed for the QUATRID scheme and implemented at its decoder. This online CNM improves the channel decoding process and contributes to the bit rate reduction. Finally, a methodology for the reconstruction of the residual frame (R^) is developed that utilizes the decision-mode information passed by the encoder, the decoded quantized bins, and the transformed estimated residual frame. The Bjøntegaard delta analysis of the experimental results shows that QUATRID performs better than DISCOVER, attaining a PSNR gain between 0.06 dB and 0.32 dB and a coding-efficiency improvement varying from 5.4 to 10.48 percent.
In addition, the results show that, for all types of motion videos, the proposed QUATRID scheme outperforms DISCOVER in reducing both the number of input bit planes to be channel encoded and the entire encoder's computational complexity. The reduction in the number of bit planes exceeds 97%, while the computational complexity of the entire Wyner-Ziv encoder and of the channel coding is reduced more than nine-fold and 34-fold, respectively.
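The core decision-mode idea of skipping all-zero quantized transform blocks can be sketched with a toy example. The block lists below are illustrative; the actual QUATRID signalling, correlation noise model, and channel codec are far richer:

```python
def quam_decision(qt_blocks):
    """Split quantized-transform blocks into channel-coded vs. skipped.

    Blocks whose quantized coefficients are all zero carry no residual
    information, so only a skip flag needs to reach the decoder; the
    remaining blocks feed the (expensive) channel encoder.
    """
    coded, skipped = [], []
    for i, blk in enumerate(qt_blocks):
        (skipped if all(c == 0 for c in blk) else coded).append(i)
    return coded, skipped
```

Since channel coding cost grows with the number of input bit planes, dropping the zero blocks up front is what yields the complexity reductions reported in the abstract.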
APA, Harvard, Vancouver, ISO, and other styles
