Journal articles on the topic 'Character Error Rate (CER)'

Consult the top 50 journal articles for your research on the topic 'Character Error Rate (CER).'

1

Abdallah, Abdelrahman, Mohamed Hamada, and Daniyar Nurseitov. "Attention-Based Fully Gated CNN-BGRU for Russian Handwritten Text." Journal of Imaging 6, no. 12 (December 18, 2020): 141. http://dx.doi.org/10.3390/jimaging6120141.

Abstract:
This article considers the task of handwritten text recognition using attention-based encoder–decoder networks trained on the Kazakh and Russian languages. We have developed a novel deep neural network model based on a fully gated CNN, supported by multiple bidirectional gated recurrent unit (BGRU) layers and attention mechanisms to manipulate sophisticated features, achieving a 0.045 Character Error Rate (CER), 0.192 Word Error Rate (WER), and 0.253 Sequence Error Rate (SER) for the first test dataset and 0.064 CER, 0.24 WER, and 0.361 SER for the second test dataset. Our proposed model is the first work to handle handwriting recognition in the Kazakh and Russian languages. Our results confirm the importance of our proposed Attention-Gated-CNN-BGRU approach for handwritten text recognition and indicate that it can lead to statistically significant improvements (p-value < 0.05) in sensitivity (recall) over the test dataset. The proposed method’s performance was evaluated using handwritten text databases of three languages: English, Russian, and Kazakh. It demonstrates better results on the Handwritten Kazakh and Russian (HKR) dataset than other well-known models.
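The CER, WER, and SER figures quoted throughout this list are edit-distance ratios. As a minimal illustrative sketch (not code from any of the cited papers), CER and WER can be computed from a standard Levenshtein distance over characters and words, respectively:

```python
def levenshtein(ref, hyp):
    """Edit distance between two sequences (insertions, deletions, substitutions)."""
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        curr = [i]
        for j, h in enumerate(hyp, 1):
            curr.append(min(prev[j] + 1,              # deletion
                            curr[j - 1] + 1,          # insertion
                            prev[j - 1] + (r != h)))  # substitution (free if equal)
        prev = curr
    return prev[-1]

def cer(ref, hyp):
    """Character Error Rate: character edits divided by reference length."""
    return levenshtein(ref, hyp) / len(ref)

def wer(ref, hyp):
    """Word Error Rate: the same computation over word tokens."""
    return levenshtein(ref.split(), hyp.split()) / len(ref.split())
```

A CER of 0.045, as reported above, therefore means 4.5 character edits per 100 reference characters.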
2

Drobac, Senka, and Krister Lindén. "Optical character recognition with neural networks and post-correction with finite state methods." International Journal on Document Analysis and Recognition (IJDAR) 23, no. 4 (August 20, 2020): 279–95. http://dx.doi.org/10.1007/s10032-020-00359-9.

Abstract:
The optical character recognition (OCR) quality of the historical part of the Finnish newspaper and journal corpus is rather low for reliable search and scientific research on the OCRed data. The estimated character error rate (CER) of the corpus, achieved with commercial software, is between 8 and 13%. There have been earlier attempts to train high-quality OCR models with open-source software, like Ocropy (https://github.com/tmbdev/ocropy) and Tesseract (https://github.com/tesseract-ocr/tesseract), but so far, none of the methods have managed to successfully train a mixed model that recognizes all of the data in the corpus, which would be essential for an efficient re-OCRing of the corpus. The difficulty lies in the fact that the corpus is printed in the two main languages of Finland (Finnish and Swedish) and in two font families (Blackletter and Antiqua). In this paper, we explore the training of a variety of OCR models with deep neural networks (DNN). First, we find an optimal DNN for our data and, with additional training data, successfully train high-quality mixed-language models. Furthermore, we revisit the effect of confidence voting on the OCR results with different model combinations. Finally, we perform post-correction on the new OCR results and perform error analysis. The results show a significant boost in accuracy, resulting in 1.7% CER on the Finnish and 2.7% CER on the Swedish test set. The greatest accomplishment of the study is the successful training of one mixed-language model for the entire corpus and finding a voting setup that further improves the results.
3

Jeong, Jiho, S. I. M. M. Raton Mondol, Yeon Wook Kim, and Sangmin Lee. "An Effective Learning Method for Automatic Speech Recognition in Korean CI Patients’ Speech." Electronics 10, no. 7 (March 29, 2021): 807. http://dx.doi.org/10.3390/electronics10070807.

Abstract:
An automatic speech recognition (ASR) model usually requires a large amount of training data to provide better results than ASR models trained with a small amount of data. It is difficult to apply an ASR model to non-standard speech such as that of cochlear implant (CI) patients, owing to privacy concerns or difficulty of access. In this paper, an effective fine-tuning and augmentation method for ASR is proposed. Experiments compare the character error rate (CER) after training the ASR model with the basic and the proposed method. The proposed method achieved a CER of 36.03% on the CI patients’ speech test dataset using only 2 h and 30 min of training data, which is a 62% improvement over the basic method.
4

Kubiak, Ireneusz. "Font Design—Shape Processing of Text Information Structures in the Process of Non-Invasive Data Acquisition." Computers 8, no. 4 (September 23, 2019): 70. http://dx.doi.org/10.3390/computers8040070.

Abstract:
Computer fonts can be a solution that supports the protection of information against electromagnetic penetration; however, not every font has features that counteract this process. The distinctive features of a font’s characters define the font. This article presents two new sets of computer fonts. These fonts are fully usable in everyday work. Additionally, they make it impossible to obtain information using non-invasive methods. The names of these fonts are directly related to the shapes of their characters. Each character in these fonts is built using only vertical and horizontal lines. The differences between the fonts lie in the widths of the vertical lines. The Safe Symmetrical font is built from vertical lines with the same width. The Safe Asymmetrical font is built from vertical lines with two different line widths. However, the appropriate proportions of the widths of the lines and clearances of each character need to be met for the safe fonts. The structures of the characters of the safe fonts ensure a high level of similarity between the characters. Additionally, these fonts do not make it difficult to read text in its primary form. However, sensitive transmissions are free from distinctive features, and the recognition of each character in reconstructed images is very difficult, in contrast to traditional fonts, such as the Sang Mun font and Null Pointer font, which have many distinctive features. The usefulness of the computer fonts was assessed by the character error rate (CER); an analysis of this parameter was conducted in this work. The CER obtained very high values for the safe fonts; the values for traditional fonts were much lower. This article aims to present a new solution in the area of protecting information against electromagnetic penetration. This new approach could replace older solutions based on heavy shielding, power and signal filters, and electromagnetic gaskets. Additionally, the application of these new fonts is very easy, as a user only needs to ensure that either the Safe Asymmetrical font or the Safe Symmetrical font is installed on the computer station that processes the text data.
5

Silber Varod, Vered, Ingo Siegert, Oliver Jokisch, Yamini Sinha, and Nitza Geri. "A cross-language study of speech recognition systems for English, German, and Hebrew." Online Journal of Applied Knowledge Management 9, no. 1 (July 26, 2021): 1–15. http://dx.doi.org/10.36965/ojakm.2021.9(1)1-15.

Abstract:
Despite the growing importance of Automatic Speech Recognition (ASR), its application is still challenging, limited, language-dependent, and requires considerable resources. The resources required for ASR are not only technical; they also need to reflect technological trends and cultural diversity. The purpose of this research is to explore ASR performance gaps through a comparative study of American English, German, and Hebrew. Apart from different languages, we also investigate different speaking styles: utterances from spontaneous dialogues and utterances from frontal lectures (a TED-like genre). The analysis includes a comparison of the performance of four ASR engines (Google Cloud, Google Search, IBM Watson, and WIT.ai) using four commonly used metrics: Word Error Rate (WER); Character Error Rate (CER); Word Information Lost (WIL); and Match Error Rate (MER). As expected, the findings suggest that English ASR systems provide the best results. Contrary to our hypothesis regarding ASR’s low performance for under-resourced languages, we found that the Hebrew and German ASR systems have similar performance. Overall, our findings suggest that ASR performance is language-dependent and system-dependent. Furthermore, ASR may be genre-sensitive, as our results showed for German. This research contributes valuable insight for improving the ubiquitous global consumption and management of knowledge, and calls for the corporate social responsibility of commercial companies to develop ASR under Fair, Reasonable, and Non-Discriminatory (FRAND) terms.
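The four metrics named above are all derived from one word-level alignment. As a hedged sketch following the commonly used definitions of WER, MER, and WIL (the engines' own scoring may differ in detail), the counts of hits (H), substitutions (S), deletions (D), and insertions (I) from a minimal edit alignment give all of them:

```python
def align_counts(ref, hyp):
    """Count hits, substitutions, deletions, insertions from a minimal alignment."""
    n, m = len(ref), len(hyp)
    # d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        d[i][0] = i
    for j in range(m + 1):
        d[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1,
                          d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1]))
    # Backtrace one optimal path to classify each alignment step.
    H = S = D = I = 0
    i, j = n, m
    while i > 0 or j > 0:
        if i > 0 and j > 0 and d[i][j] == d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1]):
            if ref[i - 1] == hyp[j - 1]:
                H += 1
            else:
                S += 1
            i, j = i - 1, j - 1
        elif i > 0 and d[i][j] == d[i - 1][j] + 1:
            D += 1
            i -= 1
        else:
            I += 1
            j -= 1
    return H, S, D, I

def metrics(ref, hyp):
    """WER, MER, and WIL for a reference/hypothesis sentence pair."""
    r, h = ref.split(), hyp.split()
    H, S, D, I = align_counts(r, h)
    wer = (S + D + I) / len(r)                       # errors over reference length
    mer = (S + D + I) / (H + S + D + I)              # errors over all alignment steps
    wil = 1 - (H * H) / (len(r) * len(h)) if h else 1.0  # word information lost
    return wer, mer, wil
```

Unlike WER, MER is bounded by 1 even when the hypothesis contains many insertions, which is why it is sometimes preferred for noisy engines.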
6

Fang, Fuming, Takahiro Shinozaki, Yasuo Horiuchi, Shingo Kuroiwa, Sadaoki Furui, and Toshimitsu Musha. "Improving Eye Motion Sequence Recognition Using Electrooculography Based on Context-Dependent HMM." Computational Intelligence and Neuroscience 2016 (2016): 1–9. http://dx.doi.org/10.1155/2016/6898031.

Abstract:
Eye motion-based human-machine interfaces are used to provide a means of communication for those who can move nothing but their eyes because of injury or disease. To detect eye motions, electrooculography (EOG) is used. For efficient communication, the input speed is critical. However, it is difficult for conventional EOG recognition methods to accurately recognize fast, sequentially input eye motions because adjacent eye motions influence each other. In this paper, we propose a context-dependent hidden Markov model- (HMM-) based EOG modeling approach that uses separate models for identical eye motions with different contexts. Because the influence of adjacent eye motions is explicitly modeled, higher recognition accuracy is achieved. Additionally, we propose a method of user adaptation based on a user-independent EOG model to investigate the trade-off between recognition accuracy and the amount of user-dependent data required for HMM training. Experimental results show that when the proposed context-dependent HMMs are used, the character error rate (CER) is significantly reduced compared with the conventional baseline under user-dependent conditions, from 36.0 to 1.3%. Although the CER increases again to 17.3% when the context-dependent but user-independent HMMs are used, it can be reduced to 7.3% by applying the proposed user adaptation method.
7

Zhang, Qian, Dong Wang, Run Zhao, Yinggang Yu, and JiaZhen Jing. "Write, Attend and Spell." Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 5, no. 3 (September 9, 2021): 1–25. http://dx.doi.org/10.1145/3478100.

Abstract:
Text entry on a smartwatch is challenging due to its small form factor. Handwriting recognition using the built-in sensors of the watch (motion sensors, microphones, etc.) provides an efficient and natural solution to this issue. However, prior works mainly focus on individual letter recognition rather than word recognition. Therefore, they require users to pause between adjacent letters for segmentation, which is counter-intuitive and significantly decreases the input speed. In this paper, we present 'Write, Attend and Spell' (WriteAS), a word-level text-entry system that enables free-style handwriting recognition using the motion signals of the smartwatch. First, we design a multimodal convolutional neural network (CNN) to abstract motion features across modalities. After that, a stacked dilated convolutional network with an encoder-decoder network is applied to get around letter segmentation and output words in an end-to-end way. More importantly, we leverage a multi-task sequence learning method to enable handwriting recognition in a streaming way. We construct the first sequence-to-sequence handwriting dataset collected with a smartwatch. WriteAS yields a 9.3% character error rate (CER) on 250 words for new users and a 3.8% CER for words unseen in the training set. In addition, WriteAS handles various writing conditions very well. Given the promising performance, we envision that WriteAS can be a fast and accurate input tool for smartwatches.
8

Laptev, Aleksandr, Andrei Andrusenko, Ivan Podluzhny, Anton Mitrofanov, Ivan Medennikov, and Yuri Matveev. "Dynamic Acoustic Unit Augmentation with BPE-Dropout for Low-Resource End-to-End Speech Recognition." Sensors 21, no. 9 (April 28, 2021): 3063. http://dx.doi.org/10.3390/s21093063.

Abstract:
With the rapid development of speech assistants, adapting server-intended automatic speech recognition (ASR) solutions to a direct device has become crucial. For on-device speech recognition tasks, researchers and industry prefer end-to-end ASR systems as they can be made resource-efficient while maintaining a higher quality compared to hybrid systems. However, building end-to-end models requires a significant amount of speech data. Personalization, which is mainly handling out-of-vocabulary (OOV) words, is another challenging task associated with speech assistants. In this work, we consider building an effective end-to-end ASR system in low-resource setups with a high OOV rate, embodied in Babel Turkish and Babel Georgian tasks. We propose a method of dynamic acoustic unit augmentation based on the Byte Pair Encoding with dropout (BPE-dropout) technique. The method non-deterministically tokenizes utterances to extend the token’s contexts and to regularize their distribution for the model’s recognition of unseen words. It also reduces the need for optimal subword vocabulary size search. The technique provides a steady improvement in regular and personalized (OOV-oriented) speech recognition tasks (at least 6% relative word error rate (WER) and 25% relative F-score) at no additional computational cost. Owing to the BPE-dropout use, our monolingual Turkish Conformer has achieved a competitive result with 22.2% character error rate (CER) and 38.9% WER, which is close to the best published multilingual system.
9

Masasi, Gianino, James Purnama, and Maulahikmah Galinium. "Development of an on-Premise Indonesian Handwriting Recognition Backend System Using Open Source Deep Learning Solution For Mobile User." Journal of Applied Information, Communication and Technology 7, no. 2 (March 17, 2021): 91–97. http://dx.doi.org/10.33555/jaict.v7i2.109.

Abstract:
Existing handwriting recognition solutions in mobile apps provide an off-premise service, which means the handwriting is processed on overseas servers. Data sent to servers abroad are not under our control and could be mishandled or misused. As recognizing handwriting is a complex problem, deep learning is needed. This research has the objective of developing an on-premise Indonesian handwriting recognition system using an open-source deep learning solution. Various deep learning solutions are compared for use in the development, and the chosen solution is used to build the architectures. Various database formats are also compared to decide which format is suitable for gathering an Indonesian handwriting database. The gathered Indonesian handwriting database and the built architectures are used for experiments covering the number of Convolutional Neural Network (CNN) layers, rotation and noise data augmentation, and Gated Recurrent Unit (GRU) versus Long Short-Term Memory (LSTM) layers. The experimental results show that rotation data augmentation is the parameter to change to improve word accuracy and Character Error Rate (CER): from 64.8% and 23.2% to 69.6% and 20.6%, respectively.
10

Kaur, Jagroop, and Jaswinder Singh. "Roman to Gurmukhi Social Media Text Normalization." International Journal of Intelligent Computing and Cybernetics 13, no. 4 (October 12, 2020): 407–35. http://dx.doi.org/10.1108/ijicc-08-2020-0096.

Abstract:
Purpose
Normalization is an important step in all natural language processing applications that handle social media text. The text from social media poses different kinds of problems that are not present in regular text. Recently, a considerable amount of work has been done in this direction, but mostly in the English language. People who do not speak English code-mix the text with their native language and post it on social media using the Roman script. This kind of text further aggravates the normalization problem. This paper discusses the concept of normalization with respect to code-mixed social media text, and a model is proposed to normalize such text.

Design/methodology/approach
The system is divided into two phases: candidate generation and most-probable-sentence selection. The candidate generation task is treated as a machine translation task, where the Roman text is the source language and Gurmukhi text is the target language. A character-based translation system is proposed to generate candidate tokens. Once candidates are generated, the second phase uses the beam search method to select the most probable sentence based on a hidden Markov model.

Findings
Character error rate (CER) and bilingual evaluation understudy (BLEU) score are reported. The proposed system has been compared with the Akhar software and the RB_R2G system, which are also capable of transliterating Roman text to Gurmukhi. The system outperforms the Akhar software. The CER and BLEU scores are 0.268121 and 0.6807939, respectively, for ill-formed text.

Research limitations/implications
It was observed that the system produces dialectal variations of a word, or the word with minor errors such as a missing diacritic. A spell checker can improve the output of the system by correcting these minor errors. Extensive experimentation is needed to optimize the language identifier, which will further help in improving the output. The language model also deserves further exploration. Inclusion of wider context, particularly from social media text, is an important area for further investigation.

Practical implications
The practical implications of this study are: (1) development of a parallel dataset containing Roman and Gurmukhi text; (2) development of a dataset annotated with language tags; (3) development of the normalizing system, which is the first of its kind and proposes a translation-based solution for normalizing noisy social media text from Roman to Gurmukhi, and which can be extended to any pair of scripts; and (4) the proposed system can be used for better analysis of social media text. Theoretically, our study helps in better understanding text normalization in the social media context and opens the doors for further research in multilingual social media text normalization.

Originality/value
Existing research work focuses on normalizing monolingual text. This study contributes towards the development of a normalization system for multilingual text.
11

Susanto, Ronny, Farica P. Putri, and Y. Widya Wiratama. "Skew detection based on vertical projection in latin character recognition of text document image." International Journal of Engineering & Technology 7, no. 4.44 (December 1, 2018): 198. http://dx.doi.org/10.14419/ijet.v7i4.44.26983.

Abstract:
The accuracy of Optical Character Recognition is deeply affected by the skew of the image. Skew detection and correction is one of the steps in OCR preprocessing that detects and corrects the skew of a document image. This research measures the effect of the Combined Vertical Projection skew detection method on the accuracy of OCR. Accuracy is measured in Character Error Rate, Word Error Rate, and Word Error Rate (Order Independent). This research also measures the computational time needed by Combined Vertical Projection with different iteration values. The experiments use iteration values of 0.5, 1, and 2, with rotation angles from -10 to 10 degrees. The results show that Combined Vertical Projection lowers the Character Error Rate, Word Error Rate, and Word Error Rate (Order Independent) by up to 35.53, 34.51, and 32.74 percent, respectively. A higher iteration value lowers the computational time but also decreases the accuracy of OCR.
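"Word Error Rate (Order Independent)" is less standardized than CER and WER. One plausible reading (an assumption for illustration, not necessarily the definition used in this paper) is a bag-of-words comparison: word order is ignored, and every word that cannot be matched between reference and hypothesis counts as an error:

```python
from collections import Counter

def wer_order_independent(ref, hyp):
    """Word error rate ignoring word order (bag-of-words matching)."""
    r, h = Counter(ref.split()), Counter(hyp.split())
    hits = sum((r & h).values())  # multiset intersection: matchable words
    n_ref, n_hyp = sum(r.values()), sum(h.values())
    return (max(n_ref, n_hyp) - hits) / n_ref
```

Under this reading, a hypothesis that merely reorders the reference words scores zero, which is why the metric is useful when OCR output lines are recovered in the wrong order.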
12

SHLIEN, SEYMOUR. "MULTIFONT CHARACTER RECOGNITION FOR TYPESET DOCUMENTS." International Journal of Pattern Recognition and Artificial Intelligence 02, no. 04 (December 1988): 603–20. http://dx.doi.org/10.1142/s0218001488000388.

Abstract:
An optical character reader for processing typeset documents must be able to handle proportional spacing, the presence of touching characters and a wide variety of type fonts. This paper describes the design of a multifont character recognizer which uses a binary decision tree to classify a character on the basis of 197 geometric features. The algorithm for designing the decision tree is based upon an entropy minimization procedure, and makes no assumptions on the distribution or independence of the binary features. The decision tree classifier provides confidence measures which may be used to reduce the substitution error rate at the expense of higher rejection rates. Methods of reducing the overall error rate by combining the decision tree classifier with other classifiers were examined. In particular, the paper evaluates the performance of a classifier using a combination of multiple decision trees, template matching and contextual post-processing. Error rates were highly sensitive to typeface and varied between 10 percent and 0.1 percent. Computer processing times for the various stages of the system are presented.
13

Kurach, Radosław, and Daniel Papla. "Does Risk Aversion Matter for Foreign Asset Holdings of Pension Funds – The Case of Poland." Comparative Economic Research. Central and Eastern Europe 17, no. 2 (July 10, 2014): 139–53. http://dx.doi.org/10.2478/cer-2014-0018.

Abstract:
In this study we explore the issue of foreign assets in mandatory pension fund portfolios. First, we provide an overview of the regulatory policies regarding international assets and indicate the externalities which may account for the observed differences among the CEE states. Then, taking the perspective of portfolio theory, we run a simulation study to measure the diversification benefits that may be achieved by greater international asset allocation. By applying the specific constraints and exchange rate volatility to our optimization procedure, the study reflects the perspective of the Polish pensioner. However, the findings regarding risk aversion intensity and the discussed directions of further research should be of a universal character.
14

Yang, Ge, Si Lu Xie, and Jing Huang. "A Character Recognizer Based on BP Network." Applied Mechanics and Materials 738-739 (March 2015): 546–50. http://dx.doi.org/10.4028/www.scientific.net/amm.738-739.546.

Abstract:
A way to recognize printed characters based on a BP network is proposed in this paper. It was implemented in the C language. Extensive experiments show that the character recognizer has good validity and correctness: the printed characters can be successfully recognized within a reasonable error rate.
15

Ali, Aree, and Bayan Omer. "Invarianceness for Character Recognition Using Geo-Discretization Features." Computer and Information Science 9, no. 2 (March 17, 2016): 1. http://dx.doi.org/10.5539/cis.v9n2p1.

Abstract:
The recognition rate of handwritten characters is still a big challenge for research because of the variation in shape, scale, and format of a given handwritten character. A more complicated handwritten character recognition system needs a better feature extraction technique that deals with such variation in handwriting. On the other hand, to obtain efficient and accurate recognition of off-line English handwritten characters, the similarity between character traits is an important issue to be differentiated. In recognizing a character, the handwriting format can be implicitly analyzed so that the unique hidden features of an individual character can be represented. Unique features can be used to recognize characters, which matters when the similarity between two characters is high. However, the problem of similarity in off-line English handwritten characters has not been taken into account, leaving a high possibility of degrading the similarity error for the intra-class (same character) as the similarity error for the inter-class (different characters) decreases. Therefore, in order to achieve better performance, this paper proposes a discretization feature algorithm to reduce the similarity error for the intra-class (same character). The mean absolute error is used as a parameter to calculate the similarity between inter- and/or intra-class characters. Test results show that the identification rate gives better results with the proposed hybrid Geo-Discretization method.
16

MacKenzie, I. Scott, R. Blair Nonnecke, J. Craig McQueen, Stan Riddersma, and Malcolm Meltz. "A Comparison of three Methods of Character Entry on Pen-Based Computers." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 38, no. 4 (October 1994): 330–34. http://dx.doi.org/10.1177/154193129403800430.

Abstract:
Methods for entering text on pen-based computers were compared with respect to speed, accuracy, and user preference. Fifteen subjects entered text on a digitizing display tablet using three methods: hand printing, QWERTY-tapping, and ABC-tapping. The tapping methods used display-based keyboards, one with a QWERTY layout, the other with two alphabetic rows of 13 characters. ABC-tapping had the lowest error rate (0.6%) but was the slowest entry method (12.9 wpm). It was also the least preferred input method. The QWERTY-tapping condition was the most preferred, the fastest (22.9 wpm), and had a low error rate (1.1%). Although subjects also liked hand printing, it was 41% slower than QWERTY-tapping and had a very high error rate (8.1%). The results suggest that character recognition on pen-based computers must improve to attract walk-up users, and that alternatives such as tapping on a QWERTY soft keyboard are effective input methods.
17

Hlaing, Thin Thin, May Phyo Oo, and Thaint Zarli Myint. "Analyzing Word Error Rate on Optical Character Recognition (OCR) for Myanmar Printed Document Image." International Journal of Computer Trends and Technology 67, no. 8 (August 25, 2019): 51–57. http://dx.doi.org/10.14445/22312803/ijctt-v67i8p109.

18

TAPPERT, C. C. "SPEED, ACCURACY, AND FLEXIBILITY TRADE-OFFS IN ON-LINE CHARACTER RECOGNITION." International Journal of Pattern Recognition and Artificial Intelligence 05, no. 01n02 (June 1991): 79–95. http://dx.doi.org/10.1142/s0218001491000077.

Abstract:
An on-line character recognizer was improved by making trade-offs on computation speed, recognition accuracy, and flexibility. Firstly, the error rate was halved by increasing computation precision at the expense of speed, achieving a character recognition accuracy of 97.3 percent. Secondly, without loss of accuracy, computation speed was increased by an order of magnitude over the original speed, to 85 characters/second on the IBM System 370/Model 3081. This was done by using a fast linear match to narrow the character choices for a slow but accurate elastic (non-linear) match. Some loss in flexibility resulted, however, because the linear match requires the same number of handwritten strokes for input and prototype characters. Also, the error rate of elastic matching was found to be about half that of linear matching.
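The two-stage trade-off described above (a fast linear match to narrow the candidates, then a slow but accurate elastic match to decide) can be sketched as follows. This is an illustrative reconstruction over 1-D feature sequences, not the paper's implementation; dynamic time warping stands in for the elastic match, and the equal-length `linear_dist` mirrors the flexibility loss noted in the abstract:

```python
def linear_dist(a, b):
    """Fast linear match: point-by-point distance; assumes equal-length sequences."""
    return sum(abs(x - y) for x, y in zip(a, b))

def dtw(a, b):
    """Slow elastic (non-linear) match via dynamic time warping."""
    INF = float("inf")
    n, m = len(a), len(b)
    d = [[INF] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]

def classify(sample, prototypes, shortlist=3):
    """Prune with the cheap linear match, then decide with the elastic match.

    `prototypes` is a list of (label, feature_sequence) pairs.
    """
    ranked = sorted(prototypes, key=lambda p: linear_dist(sample, p[1]))
    label, _ = min(ranked[:shortlist], key=lambda p: dtw(sample, p[1]))
    return label
```

Because the elastic match runs only on the shortlist, the overall cost is dominated by the cheap linear pass, which is the source of the order-of-magnitude speedup the abstract reports.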
19

Zhang, Jie, Wu Jun Yao, and Hai Bin Yang. "An Adaptive Error Control Scheme in Wireless Sensor Networks Based on LQI." Applied Mechanics and Materials 263-266 (December 2012): 915–19. http://dx.doi.org/10.4028/www.scientific.net/amm.263-266.915.

Abstract:
Aiming at the characteristics of high bit error rates and energy constraints in WSNs, this paper proposes an adaptive error control scheme based on the link quality indicator (LQI). The PHY specification of IEEE 802.15.4 provides an accurate measurement of channel quality for WSNs. According to the quantitative relationship between LQI and bit error rate, this paper divides the channel quality into eight levels non-uniformly, and eight different BCH codes are chosen correspondingly. The motes choose the optimal BCH code as their error correction scheme in real time. Experimental results show that the scheme is highly energy-efficient and effectively reduces the error probability.
20

Wičar, Stanislav, Martin Svozil, and Pavel Šimík. "Automated low gas flow-rate calibrator." Collection of Czechoslovak Chemical Communications 54, no. 11 (1989): 3025–30. http://dx.doi.org/10.1135/cccc19893025.

Abstract:
A new method of absolute low gas flow measurement was developed. The method is based on comparing the known rate of a piston's movement in a calibrated cylinder with the measured gas flow rate. Due to its compensating character, the method is extremely sensitive, and the relative error is given merely by the sensitivity of determining the pressure difference between the cylinder and the atmosphere. The method is absolute, as the apparatus constant is determined by such operations as weighing and frequency measurement.
21

Benchaou, Soukaina, M'Barek Nasri, and Ouafae El Melhaoui. "Feature Selection Based on Evolution Strategy for Character Recognition." International Journal of Image and Graphics 18, no. 03 (July 2018): 1850014. http://dx.doi.org/10.1142/s0219467818500146.

Abstract:
Handwritten and printed character recognition is an interesting area in image processing and pattern recognition. It consists of a number of phases: preprocessing, feature extraction, and classification. The feature extraction phase is carried out by different techniques: zoning, profile projection, and an improved Freeman code. A high-dimensional feature vector can increase the error rate and the training time. To solve this problem, we present in this paper a new method of selecting attributes based on the evolution strategy, in order to reduce the feature vector dimension and to improve the recognition rate. The proposed model has been applied to recognize numerals; it obtained better results and showed more robustness than the system without feature selection.
APA, Harvard, Vancouver, ISO, and other styles
22

Jadoon, Arshad Ullah, Yangda Guang, Anwar Ahmad, and Sajad Ali. "Determinants of Pakistan’s Exports: An Econometric Analysis." Comparative Economic Research. Central and Eastern Europe 21, no. 3 (September 18, 2018): 95–108. http://dx.doi.org/10.2478/cer-2018-0021.

Full text
Abstract:
The research investigated the determinants of Pakistan’s exports by using time series data from 1990–2016. Econometric tests were applied to check cointegration among variables, and a unit root test was used to check the stationarity of the selected variables. After establishing stationarity, a vector error correction model was used to estimate the short-run effect of regressors, namely foreign direct investment, gross domestic product, employment level, and consumption expenditures, on the dependent variable, exports. The results show positive relationships of foreign direct investment, gross domestic product and employment level with exports, and an adverse impact of consumption expenditures on the dependent variable. The study uses Johansen’s cointegration test for the long run; the results show that all the variables are co‑integrated in the long run. It is suggested that the government should encourage foreign direct investment and gross domestic product growth, which would help accelerate Pakistan’s exports. It is also suggested that whenever policymakers design a trade policy, in particular in relation to exports, the adverse effects of exchange rate depreciation, external debt burdens, taxes, sanctions and protectionism should be quantified, and necessary measures suggested to minimize any repercussions.
APA, Harvard, Vancouver, ISO, and other styles
23

Nagylaki, T. "The evolution of multilocus systems under weak selection." Genetics 134, no. 2 (June 1, 1993): 627–47. http://dx.doi.org/10.1093/genetics/134.2.627.

Full text
Abstract:
The evolution of multilocus systems under weak selection is investigated. Generations are discrete and nonoverlapping; the monoecious population mates at random. The number of multi-allelic loci, the linkage map, dominance, and epistasis are arbitrary. The genotypic fitnesses may depend on the gametic frequencies and time. The results hold for s < cmin, where s and cmin denote the selection intensity and the smallest two-locus recombination frequency, respectively. After an evolutionarily short time of t1 ≈ (ln s)/ln(1 − cmin) generations, all the multilocus linkage disequilibria are of the order of s [i.e., O(s) as s → 0], and then the population evolves approximately as if it were in linkage equilibrium, the error in the gametic frequencies being O(s). Suppose the explicit time dependence (if any) of the genotypic fitnesses is O(s²). Then after a time t2 ≈ 2t1, the linkage disequilibria are nearly constant, their rate of change being O(s²). Furthermore, with an error of O(s²), each linkage disequilibrium is proportional to the corresponding epistatic deviation for the interaction of additive effects on fitness. If the genotypic fitnesses change no faster than at the rate O(s³), then the single-generation change in the mean fitness is ΔW = W⁻¹Vg + O(s³), where Vg designates the genic (or additive genetic) variance in fitness. The mean of a character with genotypic values whose single-generation change does not exceed O(s²) evolves at the rate ΔZ = W⁻¹Cg + O(s²), where Cg represents the genic covariance of the character and fitness (i.e., the covariance of the average effect on the character and the average excess for fitness of every allele that affects the character). Thus, after a short time t2, the absolute error in the fundamental and secondary theorems of natural selection is small, though the relative error may be large.
APA, Harvard, Vancouver, ISO, and other styles
24

Kidder, Jeffrey N., and Daniel Seligson. "Fast Recognition of Noisy Digits." Neural Computation 5, no. 6 (November 1993): 885–92. http://dx.doi.org/10.1162/neco.1993.5.6.885.

Full text
Abstract:
We describe a hardware solution to a high-speed optical character recognition (OCR) problem. Noisy 15 × 10 binary images of machine written digits were processed and applied as input to Intel's Electrically Trainable Analog Neural Network (ETANN). In software simulation, we trained an 80 × 54 × 10 feedforward network using a modified version of backprop. We then downloaded the synaptic weights of the trained network to ETANN and tweaked them to account for differences between the simulation and the chip itself. The best recognition error rate was 0.9% in hardware with a 3.7% rejection rate on a 1000-character test set.
APA, Harvard, Vancouver, ISO, and other styles
25

Liu, Zhi, Xiao Fei Feng, and Cai Hong Xu. "Designing and Implementation of 2D Character Barcode." Advanced Materials Research 710 (June 2013): 696–99. http://dx.doi.org/10.4028/www.scientific.net/amr.710.696.

Full text
Abstract:
Based on the 2D barcodes DM and QR, a new 2D character barcode is proposed in this paper. Unlike traditional 2D barcodes, it is a character matrix rather than an image barcode. The encoding rules, decoding rules and symbol structure of this character barcode are designed and implemented, and an error-correcting code is studied and added to the encoded information. Finally, the 2D character barcode is verified on a PC. The experiments show that the 2D character barcode has very small capacity requirements and can be transmitted as a character message among mobile terminals. It thus overcomes the high loss rate incurred when a 2D barcode image is transmitted as a multimedia message, and avoids the problem that some mobile terminals cannot support multimedia messages.
APA, Harvard, Vancouver, ISO, and other styles
26

Haviluddin, Haviluddin, Rayner Alfred, Ni’mah Moham, Herman Santoso Pakpahan, Islamiyah Islamiyah, and Hario Jati Setyadi. "Handwriting Character Recognition using Vector Quantization Technique." Knowledge Engineering and Data Science 2, no. 2 (December 23, 2019): 82. http://dx.doi.org/10.17977/um018v2i22019p82-89.

Full text
Abstract:
This paper explores the Learning Vector Quantization (LVQ) processing stages for recognizing the Buginese Lontara script from Makassar, as well as explaining its accuracy. Testing of LVQ obtained an accuracy of 66.66%. The optimal variant of the network architecture in the recognition process used a learning rate of 0.02, a maximum epoch of 5000 and a hidden layer of 90 neurons, based on feature 8. With these variations, the obtained performance had a mean square error (MSE) of 0.0306, and the time required for the learning process was quite short: 6 minutes and 38 seconds. Based on the testing results, the LVQ method has not yet been able to provide good recognition results and still requires development to generate better recognition results.
APA, Harvard, Vancouver, ISO, and other styles
27

Zhou, Qiao Xi, and Ye Cai Guo. "Constant Modulus Algorithm Based on Different Error Functions Weighted by Variable Coefficient." Applied Mechanics and Materials 66-68 (July 2011): 1579–85. http://dx.doi.org/10.4028/www.scientific.net/amm.66-68.1579.

Full text
Abstract:
To improve the equalization performance of the constant modulus algorithm (CMA), we study in this paper how error functions influence the performance of the algorithm. Considering the characteristics of different error functions, a new error function weighted by a variable coefficient is proposed, together with a new CMA based on it (VCMA). Because the variable coefficient is adjustable, the value of this new error function is larger at the beginning of iteration and smaller at the end of iteration. By the gradient-descent method, VCMA achieves a faster convergence rate and lower residual error than the CMA. Both theoretical analysis and experimental results show the effectiveness of the proposed algorithm.
APA, Harvard, Vancouver, ISO, and other styles
28

Kallimani, Jagadish, Chandrika Prasad, D. Keerthana, Manoj J. Shet, Prasada Hegde, and S. H. Ajeya. "Performance Analysis of Open Source Optical Character Recognition." Journal of Computational and Theoretical Nanoscience 17, no. 9 (July 1, 2020): 4267–75. http://dx.doi.org/10.1166/jctn.2020.9060.

Full text
Abstract:
Optical character recognition is the process of converting images of text into machine-encoded text electronically or mechanically. The text in the image can be handwritten, typed or printed. Examples of image sources include a picture of a document, a scanned document, or text superimposed on an image. Most optical character recognition systems do not give a 100% accurate result. This project analyzes the error rate of a few open-source optical character recognition systems (Boxoft OCR, ABBYY, Tesseract, Free Online OCR, etc.) on a set of diverse documents and makes a comparative study of the same. By this, we can determine which OCR is best suited for a given document.
APA, Harvard, Vancouver, ISO, and other styles
29

DRUCKER, HARRIS, ROBERT SCHAPIRE, and PATRICE SIMARD. "BOOSTING PERFORMANCE IN NEURAL NETWORKS." International Journal of Pattern Recognition and Artificial Intelligence 07, no. 04 (August 1993): 705–19. http://dx.doi.org/10.1142/s0218001493000352.

Full text
Abstract:
A boosting algorithm, based on the probably approximately correct (PAC) learning model is used to construct an ensemble of neural networks that significantly improves performance (compared to a single network) in optical character recognition (OCR) problems. The effect of boosting is reported on four handwritten image databases consisting of 12000 digits from segmented ZIP Codes from the United States Postal Service and the following from the National Institute of Standards and Technology: 220000 digits, 45000 upper case letters, and 45000 lower case letters. We use two performance measures: the raw error rate (no rejects) and the reject rate required to achieve a 1% error rate on the patterns not rejected. Boosting improved performance significantly, and, in some cases, dramatically.
APA, Harvard, Vancouver, ISO, and other styles
30

DOWNTON, A. C., R. W. S. TREGIDGO, and E. KABIR. "RECOGNITION AND VERIFICATION OF HANDWRITTEN AND HAND-PRINTED BRITISH POSTAL ADDRESSES." International Journal of Pattern Recognition and Artificial Intelligence 05, no. 01n02 (June 1991): 265–91. http://dx.doi.org/10.1142/s0218001491000168.

Full text
Abstract:
An algorithmic architecture for a high-performance optical character recognition (OCR) system for hand-printed and handwritten addresses is proposed. The architecture integrates syntactic and contextual post-processing with character recognition to optimise postcode recognition performance, and verifies the postcode against simple features extracted from the remainder of the address to ensure a low error rate. An enhanced version of the characteristic loci character recognition algorithm was chosen for the system to make it tolerant of variations in writing style. Feature selection for the classifier is performed automatically using the B/W algorithm. Syntactic and contextual information for hand-printed British postcodes has been integrated into the system by combining low-level postcode syntax information with a dictionary trie structure. A full implementation of the postcode dictionary trie is described. Features which define the town name effectively, and can easily be extracted from a handwritten or hand-printed town name, are used for postcode verification. A database totalling 3473 postcode/address images was used to evaluate the performance of the complete postcode recognition process. The basic character recognition rate for the full unconstrained alphanumeric character set is 63.1%, compared with an expected maximum attainable 75–80%. The addition of the syntactic and contextual knowledge stages produces an overall postcode recognition rate which is equivalent to an alphanumeric character recognition rate of 86–90%. Separate verification experiments on a subset of 820 address images show that, with the first-order features chosen, an overall correct address feature code extraction rate of around 35% is achieved.
APA, Harvard, Vancouver, ISO, and other styles
31

Mustika, Dea, and Febrina Dafit. "Analisis Pemahaman Mahasiswa PGSD Terhadap Nilai Karakter Bangsa Dalam Mata Kuliah Pendidikan Karakter." JURNAL INOVASI PENDIDIKAN DAN PEMBELAJARAN SEKOLAH DASAR 3, no. 1 (October 14, 2019): 92. http://dx.doi.org/10.24036/jippsd.v3i1.106373.

Full text
Abstract:
This research aims to investigate students' understanding of the values of national character in the character education subject for elementary school teaching. This research used a descriptive quantitative approach and was conducted at the PGSD study program, FKIP, Riau Islamic University. The population included 128 students of the class of 2017; 56 students were taken as research samples using proportional simple random sampling with an error rate of 10%. Data were analyzed descriptively by describing respondents' opinions based on their answers to the tested questionnaire instruments. The results indicate that PGSD students' understanding of national character values is good, as shown by an overall result of 82% across the eighteen national character values, which falls in the good category.
APA, Harvard, Vancouver, ISO, and other styles
32

Mahardika, I. Kadek Eman Giyana, Torib Hamzah, Triana Rahmawati, and Liliek Soetjiatie. "Measuring Respiration Rate Based Android." Indonesian Journal of electronics, electromedical engineering, and medical informatics 1, no. 1 (August 22, 2019): 39–44. http://dx.doi.org/10.35882/ijeeemi.v1i1.7.

Full text
Abstract:
A respiratory rate measurement tool determines the number of respiratory movements a person makes per minute. Breathing per minute can be classified into three groups: normal (eupnea), above average (tachypnea), and below average (bradypnea). Manual measurement depends heavily on the observer's concentration and sensory sensitivity, and people easily become forgetful, tired and bored, so an electronic method for measuring or observing the respiratory rate has now been developed. In this study, the respiratory rate measurement uses a flex sensor placed on the patient's stomach to detect its curvature. The patient's respiration results are displayed on a character LCD and on an Android device, using an HC-05 Bluetooth module as the transmission medium. Measurement data from 10 respondents indicated an average error of 3.2%. After testing and data collection, it can be concluded that the device is suitable for use because it is still within the tolerance range of 10%.
APA, Harvard, Vancouver, ISO, and other styles
33

Bodnaryk, R. P. "Leaf epicuticular wax, an antixenotic factor in Brassicaceae that affects the rate and pattern of feeding of flea beetles, Phyllotreta cruciferae (Goeze)." Canadian Journal of Plant Science 72, no. 4 (October 1, 1992): 1295–303. http://dx.doi.org/10.4141/cjps92-163.

Full text
Abstract:
Crop brassicas with waxy leaves (> 1000 mg kg−1) were fed upon by flea beetles, Phyllotreta cruciferae (Goeze) (Coleoptera: Chrysomelidae), at a low rate, and feeding occurred predominantly at the edges of leaves. Species with non-waxy leaves (< 240 mg kg−1) were fed upon at a high rate, and feeding occurred randomly throughout the leaf. The regression of feeding rate upon amount of epicuticular wax had an R2 = 0.64, indicating that 64% of the feeding variation of flea beetles on diverse species and cultivars of Brassicaceae was explained by a single factor regression. Feeding studies on low-wax (eceriferum, cer) Brassica mutants confirmed that leaf epicuticular wax is an important antixenotic factor that affects the rate and pattern of feeding of flea beetles. The CC genome of B. oleraceae was identified as the source of the waxy-leaf character that gives rise to the low feeding rate and edge-feeding pattern of flea beetles. The digenomic amphidiploid B. napus (AACC genome), derived from the monogenomic diploids B. oleraceae (CC genome) and B. rapa (AA genome), and the digenomic amphidiploid B. carinata (BBCC genome), derived from monogenomic diploids B. oleraceae and B. nigra (BB genome), had waxy leaves and an edge-feeding pattern and rate similar to members of the B. oleraceae group. All other monogenomic diploids (AA, BB, DD, SS, RR) and digenomic amphidiploids (AABB) not possessing the CC genome had non-waxy leaves, a high rate of feeding and a random feeding pattern by flea beetles.Key words: Brassica, epicuticular wax, feeding, resistance, Phyllotreta cruciferae
APA, Harvard, Vancouver, ISO, and other styles
34

Lee, Luan L., Miguel G. Lizarraga, Natanael R. Gomes, and Alessandro L. Koerich. "A Prototype for Brazilian Bankcheck Recognition." International Journal of Pattern Recognition and Artificial Intelligence 11, no. 04 (June 1997): 549–69. http://dx.doi.org/10.1142/s0218001497000238.

Full text
Abstract:
This paper describes a prototype for Brazilian bankcheck recognition. The description is divided into three topics: bankcheck information extraction, digit amount recognition and signature verification. In bankcheck information extraction, our algorithms provide signature and digit amount images free of background patterns and bankcheck printed information. In digit amount recognition, we dealt with the digit amount segmentation and implementation of a complete numeral character recognition system involving image processing, feature extraction and neural classification. In signature verification, we designed and implemented a static signature verification system suitable for banking and commercial applications. Our signature verification algorithm is capable of detecting both simple, random and skilled forgeries. The proposed automatic bankcheck recognition prototype was intensively tested by real bankcheck data as well as simulated data providing the following performance results: for skilled forgeries, 4.7% equal error rate; for random forgeries, zero Type I error and 7.3% Type II error; for bankcheck numerals, 92.7% correct recognition rate.
APA, Harvard, Vancouver, ISO, and other styles
35

MOON, BYUNG YOUNG, SOO YOUNG KIM, and GYUNG JU KANG. "INITIAL SHIP DESIGN USING A PEARSON CORRELATION COEFFICIENT AND ARTIFICIAL INTELLIGENCE TECHNIQUES." International Journal of Modern Physics B 22, no. 09n11 (April 30, 2008): 1801–6. http://dx.doi.org/10.1142/s0217979208047444.

Full text
Abstract:
In this paper we analyzed the correlation between geometrical characteristics and resistance, and effective horsepower, by using the Pearson correlation coefficient, one of the data-mining methods. We also constructed the input data from the ship's geometrical characteristics that correlate strongly with the output data, and calculated effective horsepower and resistance using a Neuro-Fuzzy system. To verify the calculation, data from 9 of 11 container ships were used to train the Neuro-Fuzzy system and the rest were used as verification data. After analyzing the rate of error between the existing data and the calculated data, we concluded that the calculated data are in sound agreement with the existing data.
APA, Harvard, Vancouver, ISO, and other styles
36

Popović, Branislav, Edvin Pakoci, and Darko Pekar. "Transfer learning for domain and environment adaptation in Serbian ASR." Telfor Journal 12, no. 2 (2020): 110–15. http://dx.doi.org/10.5937/telfor2002110p.

Full text
Abstract:
In automatic speech recognition systems, the training data used for system development and the data actually obtained from the users of the system sometimes significantly differ in practice. However, other, more similar data may be available. Transfer learning can help to exploit such similar data for training in order to boost the automatic speech recognizer's performance for a certain domain. This paper presents a few applications of transfer learning in the context of speech recognition, specifically for the Serbian language. Several methods are proposed, with the goal of optimizing system performance on a specific part of the existing speech database for Serbian, or in a noisy environment. The experimental results evaluated on a test set from the desired domain show significant improvement in both word error rate and character error rate.
APA, Harvard, Vancouver, ISO, and other styles
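
Several entries on this page report character error rate (CER) and word error rate (WER). As a reader's aid, and not taken from any of the cited papers, both metrics are the Levenshtein edit distance between the recognized output and the reference, normalized by the reference length; CER counts character edits, WER counts word-token edits. A minimal sketch (function names are illustrative):

```python
def edit_distance(ref, hyp):
    # Levenshtein distance via dynamic programming: minimum number of
    # insertions, deletions and substitutions to turn hyp into ref.
    # Works on any sequences (strings for CER, word lists for WER).
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        curr = [i]
        for j, h in enumerate(hyp, 1):
            curr.append(min(prev[j] + 1,               # deletion
                            curr[j - 1] + 1,           # insertion
                            prev[j - 1] + (r != h)))   # substitution
        prev = curr
    return prev[-1]

def cer(reference: str, hypothesis: str) -> float:
    # Character Error Rate: character edits / reference length
    return edit_distance(reference, hypothesis) / len(reference)

def wer(reference: str, hypothesis: str) -> float:
    # Word Error Rate: the same computation over word tokens
    ref_words = reference.split()
    return edit_distance(ref_words, hypothesis.split()) / len(ref_words)
```

For example, `cer("hello", "hxllo")` is 0.2 (one substitution over five reference characters), which is why CER can exceed 1.0 when the hypothesis is much longer than the reference.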
37

Qu, Zhong, Qing-li Chang, Chang-zhi Chen, and Li-dan Lin. "An Improved Character Recognition Algorithm for License Plate Based on BP Neural Network." Open Electrical & Electronic Engineering Journal 8, no. 1 (December 31, 2014): 202–7. http://dx.doi.org/10.2174/1874129001408010202.

Full text
Abstract:
License plate character recognition is the basis of automatic license plate recognition (LPR) and plays an important role in it. In this paper, we considered the advantages and disadvantages of the neural network method and propose an improved character recognition approach for license plates. First, license plates are segmented into character images using an algorithm that combines projection and morphology. Second, for each character image, recognition results determined by the new recognition algorithm reflect the different features of each kind of character image. Character image samples are then classified according to the lighting environment and the character type itself. Finally, the extracted feature vectors, with noise added, are used to train a BP (error back-propagation) neural network. Because environmental factors or the character images themselves introduce font discrepancy, font slant, stroke connection and so on, the neural network method has considerably more room than template matching to enhance the recognition effect. In the experiment, we used 1000 license plate images that had been successfully located, of which 11800 character images were successfully identified; the identification rate of our new algorithm is 91.2%. The experimental results prove that the improved character recognition method is accurate and highly consistent.
APA, Harvard, Vancouver, ISO, and other styles
38

Dong, Yan, Yong Sheng Zhu, and Qiang Li. "Research on License Plate Recognition Based on Information Fusion." Advanced Materials Research 433-440 (January 2012): 7067–72. http://dx.doi.org/10.4028/www.scientific.net/amr.433-440.7067.

Full text
Abstract:
The information capacity of the characters in license plate images directly affects recognition accuracy. To improve the recognition rate of vehicle licenses, and considering the low cost of installing cameras nowadays, this paper proposes adopting images from two cameras at different angles. License plate location, character division and feature extraction are performed separately for each image, and information fusion is then used to confirm the more reliable recognition result, which reduces the character error-recognition rate. Contrast experiments show that this method can improve the accuracy of license plate recognition.
APA, Harvard, Vancouver, ISO, and other styles
39

Okmayura, Finanta, and Noverta Effendi. "Design of Expert System for Early Identification for Suspect Bullying On Vocational Students by Using Dempster Shafer Theory." CIRCUIT: Jurnal Ilmiah Pendidikan Teknik Elektro 3, no. 1 (July 5, 2019): 48. http://dx.doi.org/10.22373/crc.v3i1.4691.

Full text
Abstract:
Bullying is a negative, aggressive behavior intended to coerce and hurt someone physically or psychologically, carried out continuously and harshly against people perceived as weaker. Some parents ignore this problem because they do not know that it has negative effects on their children and other people. This system performs early identification of emerging bullying traits in teenagers by determining the kind of bullying character based on the percentage rate of the highest quartile exhibiting bullying traits. The system is designed using Dempster-Shafer theory to detect emerging bullying character from a knowledge base, with forward-chaining inference applied to that knowledge base. The method combines several symptoms observed in children, rating the possibility of each symptom from 0 to 1. The system is implemented with PHP and a MySQL database. Black-box testing of the consultation module using 12 trial instruments found an error value of 0.88% in the system, and the expert system achieved an accuracy of 84%, so it can be concluded that this expert system for early identification of suspected bullying in teenagers is fit for use.
APA, Harvard, Vancouver, ISO, and other styles
40

Lyu, Lijun, Maria Koutraki, Martin Krickl, and Besnik Fetahu. "Neural OCR Post-Hoc Correction of Historical Corpora." Transactions of the Association for Computational Linguistics 9 (2021): 479–93. http://dx.doi.org/10.1162/tacl_a_00379.

Full text
Abstract:
Optical character recognition (OCR) is crucial for a deeper access to historical collections. OCR needs to account for orthographic variations, typefaces, or language evolution (i.e., new letters, word spellings), as the main source of character, word, or word segmentation transcription errors. For digital corpora of historical prints, the errors are further exacerbated due to low scan quality and lack of language standardization. For the task of OCR post-hoc correction, we propose a neural approach based on a combination of recurrent (RNN) and deep convolutional network (ConvNet) to correct OCR transcription errors. At character level we flexibly capture errors, and decode the corrected output based on a novel attention mechanism. Accounting for the input and output similarity, we propose a new loss function that rewards the model’s correcting behavior. Evaluation on a historical book corpus in German language shows that our models are robust in capturing diverse OCR transcription errors and reduce the word error rate of 32.3% by more than 89%.
APA, Harvard, Vancouver, ISO, and other styles
41

Alfina Nadhirotussolikah, Andjar Pudji, and Muhammad Ridha Mak'ruf. "Fetal Doppler Simulator Based on Arduino." Journal of Electronics, Electromedical Engineering, and Medical Informatics 2, no. 1 (January 6, 2020): 28–32. http://dx.doi.org/10.35882/jeeemi.v2i1.6.

Full text
Abstract:
The fetal heart rate is the main indicator of fetal life in the womb, but it cannot be monitored directly, so a tool is needed: the fetal Doppler. To test the accuracy of a fetal Doppler, it must be calibrated against a fetal Doppler simulator. This tool simulates the fetal heart rate with a BPM value that can be adjusted in the device settings. The module uses an Arduino as the brain of the system and offers BPM selections from 60 to 240 BPM in increments of 30 BPM, displayed on a 2x16 character LCD. Based on six BPM measurements using a fetal Doppler, the measurement error from 60 to 210 BPM is 0%, while at 240 BPM the error is 0.2%. The module was also compared with a standard device (Fluke Biomedical PS320 Fetal Simulator): both have the same 0.2% error at 240 BPM, while at 210 BPM the module's fetal Doppler reading is 210 BPM and the comparison tool reads 209 BPM. From the measurement data and analysis, it can be concluded that the tool works and has the same accuracy as the standard device.
APA, Harvard, Vancouver, ISO, and other styles
42

Drucker, Harris, Corinna Cortes, L. D. Jackel, Yann LeCun, and Vladimir Vapnik. "Boosting and Other Ensemble Methods." Neural Computation 6, no. 6 (November 1994): 1289–301. http://dx.doi.org/10.1162/neco.1994.6.6.1289.

Full text
Abstract:
We compare the performance of three types of neural network-based ensemble techniques to that of a single neural network. The ensemble algorithms are two versions of boosting and committees of neural networks trained independently. For each of the four algorithms, we experimentally determine the test and training error curves in an optical character recognition (OCR) problem as both a function of training set size and computational cost using three architectures. We show that a single machine is best for small training set size while for large training set size some version of boosting is best. However, for a given computational cost, boosting is always best. Furthermore, we show a surprising result for the original boosting algorithm: namely, that as the training set size increases, the training error decreases until it asymptotes to the test error rate. This has potential implications in the search for better training algorithms.
APA, Harvard, Vancouver, ISO, and other styles
43

Kong, Jie, and Congying Wang. "Resolution Enhancement for Low-resolution Text Images Using Generative Adversarial Network." MATEC Web of Conferences 246 (2018): 03040. http://dx.doi.org/10.1051/matecconf/201824603040.

Full text
Abstract:
In recent years, although Optical Character Recognition (OCR) has made considerable progress, the low-resolution text images that commonly appear in many scenarios may still cause recognition errors. To address this problem, this study applies the Generative Adversarial Network technique for super-resolution processing to enhance the resolution of low-quality text images. The principle and its implementation in TensorFlow are introduced. On this basis, a system is proposed to perform resolution enhancement and OCR for low-resolution text images. The experimental results indicate that this technique can significantly improve the accuracy and reduce the error rate and false rejection rate of low-resolution text image identification.
APA, Harvard, Vancouver, ISO, and other styles
44

Rodriguez, Maria Rosa, Pablo Roman Duchowicz, and Nieves Carolina Comelli. "QSAR Classification of Anticancer Heterocyclichydrazones With Reactivity Descriptors." International Journal of Quantitative Structure-Property Relationships 6, no. 1 (January 2021): 45–62. http://dx.doi.org/10.4018/ijqspr.2021010104.

Full text
Abstract:
In this study, validated PLS-DA models were developed that discriminate heterocyclic hydrazones with potent anticancer activity (GI50/IC50 < 50×10⁻⁶ M) from those with low activity (GI50/IC50 ≥ 50×10⁻⁶ M) against the cancer cell lines HCT-116 (colon), OVCAR-8 (human ovary), HL-60 (leukemia), and SF-295 (glioblastoma). A dataset of 24 α-(N)-heterocyclic hydrazones and 14 N-acylhydrazonyl-thienyls, together with various global and local reactivity descriptors, was used for modeling. The best models classified the training and test sets with an accuracy range of 67-100%, class specificity and sensitivity ranges of 71-100%, an error rate range of 0-0.27, and a non-error rate range of 0.73-1.0. An external set of 20 compounds was predicted, and the models showed which new compounds are not suggested for further biological investigation. The molecular properties with impact on the modeled endpoints show that the antitumor activity can be improved with electron-acceptor N-acylhydrazonyl-thienyl derivatives and α-(N)-heterocyclic hydrazones with moderate electron-donating character.
APA, Harvard, Vancouver, ISO, and other styles
45

Prasetio, Barlian Henryranu, Hiroki Tamura, and Koichi Tanno. "Semi-Supervised Deep Time-Delay Embedded Clustering for Stress Speech Analysis." Electronics 8, no. 11 (November 1, 2019): 1263. http://dx.doi.org/10.3390/electronics8111263.

Full text
Abstract:
Real stressed speech is affected by various factors (individual characteristics and environment), so stress patterns are diverse and differ between individuals. To this end, in our previous work, we developed an unsupervised clustering method, called deep time-delay embedded clustering (DTEC), that simultaneously learns feature representations of stressed speech and performs the clustering task in a self-learning manner. However, DTEC has not yet confirmed the compatibility between the output classes and the informational classes. Therefore, we propose semi-supervised deep time-delay embedded clustering (SDTEC), a semi-supervised framework for DTEC. SDTEC incorporates the prior information of pairwise constraints in the embedding layer and simultaneously learns the feature representation and the clustering assignments. The prior information is used to guide the clustering procedure so that points assigned to an incorrect cluster can be corrected. The effectiveness of the proposed SDTEC was evaluated by comparing it with several baseline methods in terms of the clustering error rate (CER). Moreover, to demonstrate SDTEC’s capabilities, we conducted a comprehensive ablation study. Based on the experimental results, SDTEC outperformed the baseline methods and achieved state-of-the-art results in semi-supervised clustering.
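The abstract does not spell out how the clustering error rate (CER) is computed; assuming the common definition, the fraction of points mis-assigned under the best one-to-one matching between cluster ids and class ids, a brute-force sketch is:

```python
from itertools import permutations

def clustering_error_rate(true_labels, pred_labels):
    # CER: fraction of points mis-assigned under the best one-to-one
    # mapping of predicted cluster ids onto true class ids.
    # Brute force over permutations, fine for a handful of clusters.
    classes = sorted(set(true_labels))
    clusters = sorted(set(pred_labels))
    best_correct = 0
    for perm in permutations(classes, len(clusters)):
        mapping = dict(zip(clusters, perm))
        correct = sum(mapping[p] == t
                      for p, t in zip(pred_labels, true_labels))
        best_correct = max(best_correct, correct)
    return 1 - best_correct / len(true_labels)
```

For many clusters, the permutation search is usually replaced by the Hungarian assignment algorithm.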
APA, Harvard, Vancouver, ISO, and other styles
46

Huang, Kai, Xiaobo Tian, Hongzhou Yu, Min Yu, and Aiguo Yin. "A High Capacity Watermarking Technique for the Printed Document." Electronics 8, no. 12 (November 25, 2019): 1403. http://dx.doi.org/10.3390/electronics8121403.

Full text
Abstract:
Digital watermarking is an effective method for copyright protection of digital information such as images and documents. In this paper, we propose a high-capacity text image watermarking technique that is robust to printing and scanning. Firstly, the method derives an invariant of the print-scan process under a mathematical model of the print-scan transformation. Then, based on this invariant, the Fourier descriptor is used to flip trivial pixel points carrying high-frequency information on character boundaries. Next, considering the resolution of the print-scan equipment and its influence on the invariant, a quadratic quantization function is proposed to embed multiple watermark bits in a single character. Finally, the QR code (Quick Response code) is adopted as the watermark information because of its large information capacity, robust error-correction ability, and high decoding reliability; this reduces the impact of bit errors during watermark extraction and improves the robustness of the watermark. The experimental results show that the proposed text watermarking algorithm offers resistance to print-scan and scaling attacks, large capacity, and good visual quality.
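The paper's quadratic quantization function is not reproduced in the abstract; the sketch below instead illustrates the standard quantization index modulation (QIM) idea that such quantization-based embedding schemes build on, with a hypothetical step size:

```python
def qim_embed(value, bit, step=4.0):
    # Quantization Index Modulation: snap the feature value onto one of
    # two interleaved quantizer lattices, selected by the watermark bit.
    offset = step / 2 if bit else 0.0
    return round((value - offset) / step) * step + offset

def qim_extract(value, step=4.0):
    # Decode by choosing the lattice whose nearest point is closer;
    # robust to perturbations smaller than a quarter of the step.
    d0 = abs(value - qim_embed(value, 0, step))
    d1 = abs(value - qim_embed(value, 1, step))
    return 0 if d0 <= d1 else 1
```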
APA, Harvard, Vancouver, ISO, and other styles
47

Song, Qian, and Yoo Sang Wook. "Exploration of the Application of Virtual Reality and Internet of Things in Film and Television Production Mode." Applied Sciences 10, no. 10 (May 16, 2020): 3450. http://dx.doi.org/10.3390/app10103450.

Full text
Abstract:
In order to reduce problems of technological restructuring and insufficient expansion in the current film and television production mode, this research examines the application of emerging technologies such as artificial intelligence (AI), virtual reality (VR), and the Internet of Things (IoT) in the film and television industry. First, a topical crawler tool was constructed to collect texts relating “AI”, “VR”, and “IoT” to “film and television”, and the precision and recall of this tool were compared against other tools. Then, based on the extracted texts, data on recent developments in the related fields were compiled. The AdaBoost algorithm was used to improve a back-propagation (BP) neural network (BPNN), and this model was used to predict the future development scale of the related fields. Finally, a virtual-character interaction system based on IoT sensor technology was built and its performance tested. The results showed that the topical crawler constructed in this study had higher recall and precision than other tools, and a total of 188 texts related to AI, VR, and IoT crossover with film and television were selected after Naive Bayes classification. The error of the AdaBoost-based BPNN prediction model was less than 20%, so it can effectively predict the future development scale of AI and related fields. Furthermore, the virtual-character interaction system based on IoT technology achieved a high motion recognition rate, produced a strong sense of immersion among users, and realized real-time capture and imitation of character movements. In short, the field of AI and VR crossover with film and television has great development prospects, and applying IoT technology to build the virtual-character interaction system can improve the effect of VR- or AI-based film and television production.
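The crawler's "grasping accuracy rate and recall rate" correspond to the standard information-retrieval precision and recall over the sets of retrieved and relevant pages; a sketch:

```python
def precision_recall(retrieved, relevant):
    # Crawler evaluation: precision = fraction of grabbed pages that are
    # on-topic; recall = fraction of on-topic pages that were grabbed.
    retrieved, relevant = set(retrieved), set(relevant)
    hits = len(retrieved & relevant)
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall
```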
APA, Harvard, Vancouver, ISO, and other styles
48

Wang, Gui Lin, Yi Fan Dai, and Sheng Yi Li. "Study on Anastomosis Feature in Polishing Zone in Aspheric Optics Machining." Key Engineering Materials 364-366 (December 2007): 584–89. http://dx.doi.org/10.4028/www.scientific.net/kem.364-366.584.

Full text
Abstract:
In Computer-Controlled Optical Surfacing (CCOS), keeping the tool's removal function stable requires tight anastomosis between the tool and the workpiece surface. In this paper, the influence of the tool's characteristics on the anastomosis status is first studied. A relation model linking the tool's radius-to-thickness ratio and Young's modulus to the normal asphericity and normal arc height of the workpiece surface is established, and the macroscopic condition for tight anastomosis in aspheric optics machining is derived. According to the microscopic distribution of surface error, the mathematical relation between the anastomosis error and the removal rate is investigated. Finally, the influence of the anastomosis status on the convergence ratio of the residual error in the machining zone is analyzed. A machining example shows that workpiece material is removed fastest in the middle of the contact zone when the peak of the tool's removal function is located at its center.
APA, Harvard, Vancouver, ISO, and other styles
49

Qin, Fan, Linxia Fu, Yuanqing Wang, and Yi Mao. "A bagging tree-based pseudorange correction algorithm for global navigation satellite system positioning in foliage canyons." International Journal of Distributed Sensor Networks 17, no. 5 (May 2021): 155014772110167. http://dx.doi.org/10.1177/15501477211016757.

Full text
Abstract:
Global navigation satellite systems (GNSS) are indispensable for providing positioning, navigation, and timing information for pedestrians and vehicles in location-based services. However, tree canopies, although valuable city infrastructure in urban areas, degrade the accuracy of GNSS positioning because they attenuate the satellite signals. This article proposes a bagging tree-based GNSS pseudorange error prediction algorithm that considers two variables, the carrier-to-noise ratio C/N0 and the elevation angle θe, to improve GNSS positioning accuracy in foliage areas. The positioning accuracy improvement is then obtained by applying the predicted pseudorange error corrections. The experimental results show that, owing to the stationary character of the geostationary orbit satellites, the improvement in prediction accuracy of the BeiDou navigation satellite system solution (85.42% in light foliage and 83.99% in heavy foliage) is much higher than that of the Global Positioning System solution (70.77% in light foliage and 73.61% in heavy foliage). The positioning errors in the east, north, and up coordinates are all reduced by the proposed algorithm, with a particularly significant decrease in the up direction. Moreover, the improvement rate of the three-dimensional root-mean-square positioning error is 86% for the BeiDou/GPS combined solution in the light foliage test and 82% in the heavy foliage test.
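A bagging tree regressor of the kind described, predicting pseudorange error from C/N0 and elevation angle, can be sketched without any ML library as bootstrap-aggregated regression stumps; the data and hyperparameters below are illustrative, not the paper's:

```python
import random

def fit_stump(X, y):
    # Depth-1 regression tree: best single threshold on one feature.
    best = None
    for f in range(len(X[0])):
        for t in sorted(set(row[f] for row in X)):
            left = [yi for row, yi in zip(X, y) if row[f] <= t]
            right = [yi for row, yi in zip(X, y) if row[f] > t]
            if not left or not right:
                continue
            lm, rm = sum(left) / len(left), sum(right) / len(right)
            err = sum((yi - (lm if row[f] <= t else rm)) ** 2
                      for row, yi in zip(X, y))
            if best is None or err < best[0]:
                best = (err, f, t, lm, rm)
    if best is None:              # degenerate sample: predict the mean
        m = sum(y) / len(y)
        return lambda row: m
    _, f, t, lm, rm = best
    return lambda row: lm if row[f] <= t else rm

def fit_bagging(X, y, n_trees=25, seed=0):
    # Bootstrap aggregation: average stumps fit on resampled data.
    rng = random.Random(seed)
    trees = []
    for _ in range(n_trees):
        idx = [rng.randrange(len(X)) for _ in X]
        trees.append(fit_stump([X[i] for i in idx], [y[i] for i in idx]))
    return lambda row: sum(t(row) for t in trees) / len(trees)
```

Trained on toy [C/N0, elevation] samples where attenuated, low-elevation signals carry larger pseudorange errors, the ensemble predicts larger corrections for those conditions.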
APA, Harvard, Vancouver, ISO, and other styles
50

Ahmed, Saadaldeen, Mustafa Fadhil, and Salwa Abdulateef. "Enhancing Reading Advancement Using Eye Gaze Tracking." 3D SCEEER Conference sceeer, no. 3d (July 1, 2020): 59–64. http://dx.doi.org/10.37917/ijeee.sceeer.3rd.9.

Full text
Abstract:
This research aims to enhance reading advancement using eye gaze tracking, in view of the increasing time users spend interacting with such devices. To achieve this, one needs a good understanding of the reading process and of eye gaze tracking systems, as well as of the issues that arise when an eye gaze tracking system is used for reading. Some issues are very common, so the proposed implementation compensates for them. To obtain the best possible results, two main algorithms were implemented: the baseline algorithm and an algorithm to smooth the data. The tracking error rate is calculated from changing points and missed changing points. In [21], a previous implementation on the same data yielded a final tracking error rate of 126%. This value seems abnormally high, but it is actually useful, as described in [21]. For the present system, the algorithms used give a final tracking error rate of 114.6%. The accuracy of eye gaze reading has three main components, normal fixations, regressions, and skip fixations, and the accuracies are reflected in the tracking rate value obtained. The three main sources of error are calibration drift, the quality of the setup, and the physical characteristics of the eyes. For the tests, the graphical interface displayed text with characters of an average height of 24 pixels. With the subject approximately 60 centimeters from the tracker, a character on the screen subtends an angle of ±0.88°, just above the ±0.5° threshold imposed by the physical characteristics of the eyeball for reading advancement using eye gaze tracking.
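The abstract does not give the exact tracking-error-rate formula; one plausible reading, which also explains how values above 100% can arise, counts both missed and spurious change-point detections against the number of true changing points (the matching tolerance `tol` is a hypothetical parameter, not from the paper):

```python
def tracking_error_rate(true_changes, detected_changes, tol=1):
    # Match each true changing point to a detected one within `tol`
    # fixation indices; unmatched truths are misses, unmatched
    # detections are spurious.  Rate = (misses + spurious) / truths,
    # so it can exceed 100% when spurious detections abound.
    detected = list(detected_changes)
    misses = 0
    for t in true_changes:
        match = next((d for d in detected if abs(d - t) <= tol), None)
        if match is None:
            misses += 1
        else:
            detected.remove(match)
    spurious = len(detected)
    return 100.0 * (misses + spurious) / len(true_changes)
```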
APA, Harvard, Vancouver, ISO, and other styles