Journal articles on the topic 'Edit Automata'

Consult the top 50 journal articles for your research on the topic 'Edit Automata.'

1

MOHRI, MEHRYAR. "EDIT-DISTANCE OF WEIGHTED AUTOMATA: GENERAL DEFINITIONS AND ALGORITHMS." International Journal of Foundations of Computer Science 14, no. 06 (2003): 957–82. http://dx.doi.org/10.1142/s0129054103002114.

Abstract:
The problem of computing the similarity between two sequences arises in many areas such as computational biology and natural language processing. A common measure of the similarity of two strings is their edit-distance, that is the minimal cost of a series of symbol insertions, deletions, or substitutions transforming one string into the other. In several applications such as speech recognition or computational biology, the objects to compare are distributions over strings, i.e., sets of strings representing a range of alternative hypotheses with their associated weights or probabilities. We define the edit-distance of two distributions over strings and present algorithms for computing it when these distributions are given by automata. In the particular case where two sets of strings are given by unweighted automata, their edit-distance can be computed using the general algorithm of composition of weighted transducers combined with a single-source shortest-paths algorithm. In the general case, we show that general weighted automata algorithms over the appropriate semirings can be used to compute the edit-distance of two weighted automata exactly. These include classical algorithms such as the composition and ∊-removal of weighted transducers and a new and simple synchronization algorithm for weighted transducers which, combined with ∊-removal, can be used to normalize weighted transducers with bounded delays. Our algorithm for computing the edit-distance of weighted automata can be used to improve the word accuracy of automatic speech recognition systems. It can also be extended to provide an edit-distance automaton useful for re-scoring and other post-processing purposes in the context of large-vocabulary speech recognition.
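For orientation, the plain string edit-distance that these weighted-automata results generalize can be computed with a short dynamic program. The following Python sketch only illustrates that baseline notion (Wagner-Fischer style, unit costs); it is not Mohri's transducer-composition algorithm:

```python
def edit_distance(a: str, b: str) -> int:
    """Minimal number of symbol insertions, deletions, and substitutions
    (all with unit cost) transforming string a into string b."""
    m, n = len(a), len(b)
    # dp[j] holds the distance between a[:i] and b[:j] for the current row i.
    dp = list(range(n + 1))
    for i in range(1, m + 1):
        prev, dp[0] = dp[0], i
        for j in range(1, n + 1):
            cur = dp[j]
            dp[j] = min(dp[j] + 1,                        # delete a[i-1]
                        dp[j - 1] + 1,                    # insert b[j-1]
                        prev + (a[i - 1] != b[j - 1]))    # substitute (free if equal)
            prev = cur
    return dp[n]

print(edit_distance("kitten", "sitting"))  # 3
```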
2

Beauquier, Danièle, Joëlle Cohen, and Ruggero Lanotte. "Security Policies Enforcement Using Finite Edit Automata." Electronic Notes in Theoretical Computer Science 229, no. 3 (2009): 19–35. http://dx.doi.org/10.1016/j.entcs.2009.06.037.

3

Okhotin, Alexander, and Kai Salomaa. "Edit distance neighbourhoods of input-driven pushdown automata." Theoretical Computer Science 777 (July 2019): 417–30. http://dx.doi.org/10.1016/j.tcs.2019.03.005.

4

Ligatti, Jay, Lujo Bauer, and David Walker. "Edit automata: enforcement mechanisms for run-time security policies." International Journal of Information Security 4, no. 1-2 (2005): 2–16. http://dx.doi.org/10.1007/s10207-004-0046-8.

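Since this entry and several others concern run-time enforcement, a toy sketch may help fix the idea: an edit automaton monitors a stream of program actions and may pass them through, suppress them, or insert actions of its own, depending on its state. The Python sketch below is a loose illustration with hypothetical action names, not the authors' formal construction:

```python
def enforce(actions):
    """Toy enforcement monitor in the spirit of an edit automaton:
    it may emit, suppress, or insert actions while reading the input stream.
    Hypothetical policy: a 'pay' action is held back until a matching
    'confirm' arrives; unconfirmed payments are suppressed."""
    output, pending = [], None
    for act in actions:
        if act == "pay":
            pending = act              # suppress for now (hold the action)
        elif act == "confirm" and pending is not None:
            output += [pending, act]   # insert the held action, then emit the confirm
            pending = None
        else:
            output.append(act)         # emit unrelated actions unchanged
    return output                      # a held, unconfirmed 'pay' stays suppressed

print(enforce(["log", "pay", "confirm", "pay", "log"]))
# ['log', 'pay', 'confirm', 'log']
```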
5

Beauquier, Danièle, Joëlle Cohen, and Ruggero Lanotte. "Security policies enforcement using finite and pushdown edit automata." International Journal of Information Security 12, no. 4 (2013): 319–36. http://dx.doi.org/10.1007/s10207-013-0195-8.

6

ALLAUZEN, CYRIL, and MEHRYAR MOHRI. "N-WAY COMPOSITION OF WEIGHTED FINITE-STATE TRANSDUCERS." International Journal of Foundations of Computer Science 20, no. 04 (2009): 613–27. http://dx.doi.org/10.1142/s0129054109006772.

Abstract:
Composition of weighted transducers is a fundamental algorithm used in many applications, including for computing complex edit-distances between automata, or string kernels in machine learning, or to combine different components of a speech recognition, speech synthesis, or information extraction system. We present a generalization of the composition of weighted transducers, n-way composition, which is dramatically faster in practice than the standard composition algorithm when combining more than two transducers. The worst-case complexity of our algorithm for composing three transducers T1, T2, and T3 resulting in T, is O(|T|_Q min(d(T1) d(T3), d(T2)) + |T|_E), where |·|_Q denotes the number of states, |·|_E the number of transitions, and d(·) the maximum out-degree. As in regular composition, the use of perfect hashing requires a pre-processing step with linear-time expected complexity in the size of the input transducers. In many cases, this approach significantly improves on the complexity of standard composition. Our algorithm also leads to a dramatically faster composition in practice. Furthermore, standard composition can be obtained as a special case of our algorithm. We report the results of several experiments demonstrating this improvement. These theoretical and empirical improvements significantly enhance performance in the applications already mentioned.
7

Islam, Md Rakibul, and Minhaz F. Zibran. "What changes in where?" ACM SIGAPP Applied Computing Review 20, no. 4 (2021): 18–34. http://dx.doi.org/10.1145/3447332.3447334.

Abstract:
A deep understanding of the common patterns of bug-fixing changes is useful in several ways: (a) such knowledge can help developers in proactively avoiding coding patterns that lead to bugs and (b) bug-fixing patterns are exploited in devising techniques for automatic bug localization and program repair. This work includes an in-depth quantitative and qualitative analysis over 4,653 buggy revisions of five software systems. Our study identifies 38 bug-fixing edit patterns and discovers 37 new patterns of nested code structures, which frequently host the bug-fixing edits. While some of the edit patterns were reported in earlier studies, these nesting patterns are new and were never targeted before.
8

Daalmans, Jacco. "Constraint Simplification for Data Editing of Numerical Variables." Journal of Official Statistics 34, no. 1 (2018): 27–39. http://dx.doi.org/10.1515/jos-2018-0002.

Abstract:
Data editing is the process of checking and correcting data. In practice, these processes are often automated. A large number of constraints need to be handled in many applications. This article shows that data editing can benefit from automated constraint simplification techniques. Performance can be improved, which broadens the scope of applicability of automatic data editing. Flaws in edit rule formulation may be detected, which improves the quality of automatically edited data.
9

McCarroll, Rachel E., Beth M. Beadle, Peter A. Balter, et al. "Retrospective Validation and Clinical Implementation of Automated Contouring of Organs at Risk in the Head and Neck: A Step Toward Automated Radiation Treatment Planning for Low- and Middle-Income Countries." Journal of Global Oncology, no. 4 (December 2018): 1–11. http://dx.doi.org/10.1200/jgo.18.00055.

Abstract:
Purpose We assessed automated contouring of normal structures for patients with head-and-neck cancer (HNC) using a multiatlas deformable-image-registration algorithm to better provide a fully automated radiation treatment planning solution for low- and middle-income countries, provide quantitative analysis, and determine acceptability worldwide. Methods Autocontours of eight normal structures (brain, brainstem, cochleae, eyes, lungs, mandible, parotid glands, and spinal cord) from 128 patients with HNC were retrospectively scored by a dedicated HNC radiation oncologist. Contours from a 10-patient subset were evaluated by five additional radiation oncologists from international partner institutions, and interphysician variability was assessed. Quantitative agreement of autocontours with independently physician-drawn structures was assessed using the Dice similarity coefficient and mean surface and Hausdorff distances. Automated contouring was then implemented clinically and has been used for 166 patients, and contours were quantitatively compared with the physician-edited autocontours using the same metrics. Results Retrospectively, 87% of normal structure contours were rated as acceptable for use in dose-volume-histogram–based planning without edit. Upon clinical implementation, 50% of contours were not edited for use in treatment planning. The mean (± standard deviation) Dice similarity coefficient of autocontours compared with physician-edited autocontours for parotid glands (0.92 ± 0.10), brainstem (0.95 ± 0.09), and spinal cord (0.92 ± 0.12) indicate that only minor edits were performed. The average mean surface and Hausdorff distances for all structures were less than 0.15 mm and 1.8 mm, respectively. Conclusion Automated contouring of normal structures generates reliable contours that require only minimal editing, as judged by retrospective ratings from multiple international centers and clinical integration. Autocontours are acceptable for treatment planning with no or, at most, minor edits, suggesting that automated contouring is feasible for clinical use and in the ongoing development of automated radiation treatment planning algorithms.
10

Cui, Jiang Tao, and Guo Qiang Shen. "Research of CAPP System Based on the Solid Edge." Advanced Materials Research 765-767 (September 2013): 167–70. http://dx.doi.org/10.4028/www.scientific.net/amr.765-767.167.

Abstract:
A three-dimensional CAPP prototype based on the Solid Edge platform is studied, using VB for secondary development. The system is able to complete the selection of the part's machining features, the editing of process information, and the selection of processes. It finally outputs the process card semi-automatically or automatically through the process-edit module and the process-card output module, with the support of the process resource management module.
11

HAN, YO-SUB, SANG-KI KO, and KAI SALOMAA. "THE EDIT-DISTANCE BETWEEN A REGULAR LANGUAGE AND A CONTEXT-FREE LANGUAGE." International Journal of Foundations of Computer Science 24, no. 07 (2013): 1067–82. http://dx.doi.org/10.1142/s0129054113400315.

Abstract:
The edit-distance between two strings is the smallest number of operations required to transform one string into the other. The distance between languages L1 and L2 is the smallest edit-distance between strings wi ∈ Li, i = 1, 2. We consider the problem of computing the edit-distance of a given regular language and a given context-free language. First, we present an algorithm that finds for the languages an optimal alignment, that is, a sequence of edit operations that transforms a string in one language to a string in the other. The length of the optimal alignment, in the worst case, is exponential in the size of the given grammar and finite automaton. Then, we investigate the problem of computing only the edit-distance of the languages without explicitly producing an optimal alignment. We design a polynomial time algorithm that calculates the edit-distance based on unary homomorphisms.
12

Wächter, Thomas, and Michael Schroeder. "Semi-automated ontology generation within OBO-Edit." Bioinformatics 26, no. 12 (2010): i88–i96. http://dx.doi.org/10.1093/bioinformatics/btq188.

13

Smith, Michael, Kevin T. Cunningham, and Katarina L. Haley. "Automating Error Frequency Analysis via the Phonemic Edit Distance Ratio." Journal of Speech, Language, and Hearing Research 62, no. 6 (2019): 1719–23. http://dx.doi.org/10.1044/2019_jslhr-s-18-0423.

Abstract:
Purpose: Many communication disorders result in speech sound errors that listeners perceive as phonemic errors. Unfortunately, manual methods for calculating phonemic error frequency are prohibitively time consuming to use in large-scale research and busy clinical settings. The purpose of this study was to validate an automated analysis based on a string metric—the unweighted Levenshtein edit distance—to express phonemic error frequency after left hemisphere stroke. Method: Audio-recorded speech samples from 65 speakers who repeated single words after a clinician were transcribed phonetically. By comparing these transcriptions to the target, we calculated the percent segments with a combination of phonemic substitutions, additions, and omissions and derived the phonemic edit distance ratio, which theoretically corresponds to percent segments with these phonemic errors. Results: Convergent validity between the manually calculated error frequency and the automated edit distance ratio was excellent, as demonstrated by nearly perfect correlations and negligible mean differences. The results were replicated across 2 speech samples and 2 computation applications. Conclusions: The phonemic edit distance ratio is well suited to estimate phonemic error rate and proves itself for immediate application to research and clinical practice. It can be calculated from any paired strings of transcription symbols and requires no additional steps, algorithms, or judgment concerning alignment between target and production. We recommend it as a valid, simple, and efficient substitute for manual calculation of error frequency.
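As described in the abstract above, the ratio is derived from an unweighted Levenshtein distance over paired transcription strings. A minimal sketch of one plausible formulation (edit distance divided by the number of target segments, expressed as a percentage; the exact normalization and the transcriptions shown are assumptions, not taken from the paper) could look as follows:

```python
def levenshtein(target, production):
    """Unweighted Levenshtein distance between two sequences of segments."""
    m, n = len(target), len(production)
    dp = list(range(n + 1))
    for i in range(1, m + 1):
        prev, dp[0] = dp[0], i
        for j in range(1, n + 1):
            cur = dp[j]
            dp[j] = min(dp[j] + 1, dp[j - 1] + 1,
                        prev + (target[i - 1] != production[j - 1]))
            prev = cur
    return dp[n]

def edit_distance_ratio(target, production):
    """Edit distance expressed relative to the length of the target transcription."""
    return 100.0 * levenshtein(target, production) / len(target)

# Hypothetical phonemic transcriptions (one symbol per segment).
print(round(edit_distance_ratio("kæt", "tæt"), 1))  # 33.3: one substitution across three target segments
```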
14

Kari, Lila, Stavros Konstantinidis, Steffen Kopecki, and Meng Yang. "Efficient Algorithms for Computing the Inner Edit Distance of a Regular Language via Transducers." Algorithms 11, no. 11 (2018): 165. http://dx.doi.org/10.3390/a11110165.

Abstract:
The concept of edit distance and its variants has applications in many areas such as computational linguistics, bioinformatics, and synchronization error detection in data communications. Here, we revisit the problem of computing the inner edit distance of a regular language given via a Nondeterministic Finite Automaton (NFA). This problem relates to the inherent maximal error-detecting capability of the language in question. We present two efficient algorithms for solving this problem, both of which execute in time O(r²n²d), where r is the cardinality of the alphabet involved, n is the number of transitions in the given NFA, and d is the computed edit distance. We have implemented one of the two algorithms and present here a set of performance tests. The correctness of the algorithms is based on the connection between word distances and error detection and the fact that nondeterministic transducers can be used to represent the errors (resp., edit operations) involved in error-detection (resp., in word distances).
15

Pryzant, Reid, Richard Diehl Martinez, Nathan Dass, Sadao Kurohashi, Dan Jurafsky, and Diyi Yang. "Automatically Neutralizing Subjective Bias in Text." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 01 (2020): 480–89. http://dx.doi.org/10.1609/aaai.v34i01.5385.

Abstract:
Texts like news, encyclopedias, and some social media strive for objectivity. Yet bias in the form of inappropriate subjectivity — introducing attitudes via framing, presupposing truth, and casting doubt — remains ubiquitous. This kind of bias erodes our collective trust and fuels social conflict. To address this issue, we introduce a novel testbed for natural language generation: automatically bringing inappropriately subjective text into a neutral point of view (“neutralizing” biased text). We also offer the first parallel corpus of biased language. The corpus contains 180,000 sentence pairs and originates from Wikipedia edits that removed various framings, presuppositions, and attitudes from biased sentences. Last, we propose two strong encoder-decoder baselines for the task. A straightforward yet opaque concurrent system uses a BERT encoder to identify subjective words as part of the generation process. An interpretable and controllable modular algorithm separates these steps, using (1) a BERT-based classifier to identify problematic words and (2) a novel join embedding through which the classifier can edit the hidden states of the encoder. Large-scale human evaluation across four domains (encyclopedias, news headlines, books, and political speeches) suggests that these algorithms are a first step towards the automatic identification and reduction of bias.
16

Drachev, O. I., B. M. Gorshkov, and N. S. Samokhina. "Automatic cold edit control system of non-rigid shafts." IOP Conference Series: Materials Science and Engineering 919 (September 26, 2020): 032012. http://dx.doi.org/10.1088/1757-899x/919/3/032012.

17

Neuhaus, Michel, and Horst Bunke. "Automatic learning of cost functions for graph edit distance." Information Sciences 177, no. 1 (2007): 239–47. http://dx.doi.org/10.1016/j.ins.2006.02.013.

18

Harrison, Merrie Jean, and Alain Gene DuChene. "P06 Automated edit system from data entry to site notification." Controlled Clinical Trials 17, no. 2 (1996): S94. http://dx.doi.org/10.1016/0197-2456(96)84626-5.

19

Yu, Meng-Hsuan, Juntao Li, Danyang Liu, et al. "Draft and Edit: Automatic Storytelling Through Multi-Pass Hierarchical Conditional Variational Autoencoder." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 02 (2020): 1741–48. http://dx.doi.org/10.1609/aaai.v34i02.5538.

Abstract:
Automatic storytelling has consistently been a challenging area in the field of natural language processing. Although considerable achievements have been made, the gap between automatically generated stories and human-written stories is still significant. Moreover, the limitations of existing automatic storytelling methods are obvious, e.g., in the consistency of content and the diversity of wording. In this paper, we propose a multi-pass hierarchical conditional variational autoencoder model to overcome the challenges and limitations in existing automatic storytelling models. While the conditional variational autoencoder (CVAE) model has been employed to generate diversified content, the hierarchical structure and multi-pass editing scheme allow the model to create more consistent content. We conduct extensive experiments on the ROCStories Dataset. The results verify the validity and effectiveness of our proposed model and yield substantial improvements over the existing state-of-the-art approaches.
20

Hou, Vincent D. H. "Automatic Page-Layout Scripts for Gatan Digital Micrograph®." Microscopy and Microanalysis 7, S2 (2001): 976–77. http://dx.doi.org/10.1017/s1431927600030956.

Abstract:
The software DigitalMicrograph (DM) by Gatan, Inc., is a popular software platform for digital imaging in microscopy. In a service-oriented microscopy laboratory, a large number of images from many different samples are generated each day. It is critical that each printed image is properly labeled with sample identification and a description before printing. With DM, a script language is provided: from this, various analyses can be designed or customized and repetitive tasks can be automated. This paper presents the procedures and DM scripts needed to perform these tasks. Due to the major software architecture change between version 2.5x and version 3.5x, each will be discussed separately. DM Version 2.5.8 (on Macintosh®): A “Data Bar” mechanism is provided in this version of DM. Using the “Edit→Data Bar→Define and Add Data Bar...” menu command specifies data bar items (e.g., scale bar, microscope operator) to be included in the image. In addition, other annotations (text, line, rectangle, and oval) can be included as part of “Data Bar.” This is done by first selecting the desired annotation on the image and then using the “Edit→Data Bar→Use As Default Data Bar...” menu command. After defining data bar items, executing the menu command adds these data bar items to the image.
21

McCormack, Michael D., David E. Zaucha, and Dennis W. Dushek. "First‐break refraction event picking and seismic data trace editing using neural networks." GEOPHYSICS 58, no. 1 (1993): 67–78. http://dx.doi.org/10.1190/1.1443352.

Abstract:
Interactive seismic processing systems for editing noisy seismic traces and picking first‐break refraction events have been developed using a neural network learning algorithm. We employ a backpropagation neural network (BNN) paradigm modified to improve the convergence rate of the BNN. The BNN is interactively “trained” to edit seismic data or pick first breaks by a human processor who judiciously selects and presents to the network examples of trace edits or refraction picks. The network then iteratively adjusts a set of internal weights until it can accurately duplicate the examples provided by the user. After the training session is completed, the BNN system can then process new data sets in a manner that mimics the human processor. Synthetic modeling studies indicate that the BNN uses many of the same subjective criteria that humans employ in editing and picking seismic data sets. Automated trace editing and first‐break picking based on the modified BNN paradigm achieve 90 to 98 percent agreement with manual methods for seismic data of moderate to good quality. Productivity increases over manual editing, and picking techniques range from 60 percent for two‐dimensional (2-D) data sets and up to 800 percent for three‐dimensional (3-D) data sets. Neural network‐based seismic processing can provide consistent and high quality results with substantial improvements in processing efficiency.
22

Charlton, John. "Editorial: Evaluating automatic edit and imputation methods, and the EUREDIT project." Journal of the Royal Statistical Society: Series A (Statistics in Society) 167, no. 2 (2004): 199–207. http://dx.doi.org/10.1111/j.1467-985x.2004.02051.x.

23

Yagahara, Ayako, Masahito Uesugi, and Hideto Yokoi. "Identification of Synonyms Using Definition Similarities in Japanese Medical Device Adverse Event Terminology." Applied Sciences 11, no. 8 (2021): 3659. http://dx.doi.org/10.3390/app11083659.

Abstract:
Japanese medical device adverse events terminology, published by the Japan Federation of Medical Devices Associations (JFMDA terminology), contains entries for 89 terminology items, with each of the terminology entries created independently. It is necessary to establish and verify the consistency of these terminology entries and map them efficiently and accurately. Therefore, developing an automatic synonym detection tool is an important concern. Such tools for edit distances and distributed representations have achieved good performance in previous studies. The purpose of this study was to identify synonyms in JFMDA terminology and evaluate the accuracy using these algorithms. A total of 125 definition sentence pairs were created from the terminology as baselines. Edit distances (Levenshtein and Jaro–Winkler distance) and distributed representations (Word2vec, fastText, and Doc2vec) were employed for calculating similarities. Receiver operating characteristic analysis was carried out to evaluate the accuracy of synonym detection. A comparison of the accuracies of the algorithms showed that the Jaro–Winkler distance had the highest sensitivity, Doc2vec with DM had the highest specificity, and the Levenshtein distance had the highest value in area under the curve. Edit distances and Doc2vec makes it possible to obtain high accuracy in predicting synonyms in JFMDA terminology.
24

Alamri, Maha, and William Teahan. "Automatic Correction of Arabic Dyslexic Text." Computers 8, no. 1 (2019): 19. http://dx.doi.org/10.3390/computers8010019.

Abstract:
This paper proposes an automatic correction system that detects and corrects dyslexic errors in Arabic text. The system uses a language model based on the Prediction by Partial Matching (PPM) text compression scheme that generates possible alternatives for each misspelled word. Furthermore, the generated candidate list is based on edit operations (insertion, deletion, substitution and transposition), and the correct alternative for each misspelled word is chosen on the basis of the compression codelength of the trigram. The system is compared with widely-used Arabic word processing software and the Farasa tool. The system provided good results compared with the other tools, with a recall of 43%, precision 89%, F1 58% and accuracy 81%.
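The candidate generation step mentioned above (alternatives reachable by one insertion, deletion, substitution, or transposition) is a standard construction; a compact sketch over a hypothetical alphabet is shown below. Ranking candidates by PPM compression codelength, as the paper does, is not reproduced here:

```python
def edits1(word, alphabet="abcdefghijklmnopqrstuvwxyz"):
    """All strings within one edit operation (insertion, deletion,
    substitution, transposition) of `word`, over a hypothetical alphabet."""
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = {L + R[1:] for L, R in splits if R}
    transposes = {L + R[1] + R[0] + R[2:] for L, R in splits if len(R) > 1}
    substitutions = {L + c + R[1:] for L, R in splits if R for c in alphabet}
    insertions = {L + c + R for L, R in splits for c in alphabet}
    return deletes | transposes | substitutions | insertions

print(len(edits1("cat")))  # number of distinct candidate strings within one edit
```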
25

Vachharajani, Vinay, and Jyoti Pareek. "Effective Structure Matching Algorithm for Automatic Assessment of Use-Case Diagram." International Journal of Distance Education Technologies 18, no. 4 (2020): 31–50. http://dx.doi.org/10.4018/ijdet.2020100103.

Abstract:
The demand for higher education keeps on increasing. The invention of information technology and e-learning have, to a large extent, solved the problem of shortage of skilled and qualified teachers. But there is no guarantee that this will ensure the high quality of learning. In spite of large number of students, though the delivery of learning materials and tests to the students have become very easy by uploading the same on the web, assessment could be tedious. There is a need to develop tools and technologies for fully automated assessment. In this paper, an innovative algorithm has been proposed for matching structures of two use-case diagrams drawn by a student and an expert respectively for automatic assessment of the same. Zhang and Shasha's tree edit distance algorithm has been extended for assessing use-case diagrams. Results from 445 students' answers based on 14 different scenarios are analyzed to evaluate the performance of the proposed algorithm. No comparable study has been reported by any other diagram assessing algorithms in the research literature.
26

SHEFFER, ALLA, MICHEL BERCOVIER, TED BLACKER, and JAN CLEMENTS. "VIRTUAL TOPOLOGY OPERATORS FOR MESHING." International Journal of Computational Geometry & Applications 10, no. 03 (2000): 309–31. http://dx.doi.org/10.1142/s0218195900000188.

Abstract:
In recent years several automatic 3D meshing algorithms have emerged. However direct analysis of CAD models is still elusive. Among the major obstacles preventing automation is the necessity to edit the CAD models to be suitable both for the analysis objectives and the available meshing algorithms. Such editing includes topology correction and validation, detail suppression and decomposition. Editing the geometry directly (e.g. surface redefinitions) is cumbersome, tedious, and expensive. Introducing virtual topology allows such operations as modifications to the topology only. In this work the concept and operators of virtual topology are described, along with their use in performing the required editing of the model. A set of automatic and semi-automatic tools for the various editing operations are introduced.
27

Huang, Chin Jung, Fa Ta Tsai, and Ku Pen Tsai. "Research and Development of an Automatic Test System for Power Supplies." Applied Mechanics and Materials 220-223 (November 2012): 1450–55. http://dx.doi.org/10.4028/www.scientific.net/amm.220-223.1450.

Abstract:
With rising attention to the quality of AC/DC, DC/DC power supply, in response to the verification of design and production of power supply, and to ensure the quality demand of power supply, power supply automatic testing system is thus developed. Power supply testing systems with features of higher accuracy, lower cost and shorter testing time should be designed to guarantee the quality of power supply to win high level of customer trust. This testing system uses Labview software to establish an open platform with built-in testing program editors of considerable freedom and convenience to allow engineers to rapidly edit testing programs for various projects and design various customized testing items. This study develops the unified instrument control interface for various testing demands of instruments and equipments made by different manufacturers. With software operating panel record functions, the system can record each testing step and furthermore edit. This study proposes the Test System Performance Index to facilitate customers to evaluate and compare the performance capabilities of various power supply testing systems. Higher Test System Performance Index indicates better performance of the testing systems. By comparison, the test system performance index of the testing system and Chroma 8000 are 3.36 and 0.164, respectively. This indicates that the testing system has definite advantages including more testing functional items, shorter testing time, low manufacturing cost, and supportability of various types of equipments, thus making it a system with great potentials.
28

Feilhauer, Thomas, Florian Braun, Katja Faller, et al. "Mobility Choices—An Instrument for Precise Automatized Travel Behavior Detection & Analysis." Sustainability 13, no. 4 (2021): 1912. http://dx.doi.org/10.3390/su13041912.

Abstract:
Within the Mobility Choices (MC) project we have developed an app that allows users to record their travel behavior and encourages them to try out new means of transportation that may better fit their preferences. Tracks explicitly released by the users are anonymized and can be analyzed by authorized institutions. For recorded tracks, the freely available app automatically determines the segments with their transportation mode; analyzes the track according to the criteria environment, health, costs, and time; and indicates alternative connections that better fit the criteria, which can individually be configured by the user. In the second step, the users can edit their tracks and release them for further analysis by authorized institutions. The system is complemented by a Web-based analysis program that helps authorized institutions carry out specific evaluations of traffic flows based on the released tracks of the app users. The automatic transportation mode detection of the system reaches an accuracy of 97%. This requires only minimal corrections by the user, which can easily be done directly in the app before releasing a track. All this enables significantly more accurate surveys of transport behavior than the usual time-consuming manual (non-automated) approaches, based on questionnaires.
29

Akiba, Yasuhiro, Kenji Imamura, Eiichiro Sumita, Hiromi Nakaiwa, Seiichi Yamamoto, and Hiroshi G. Okuno. "Automatic Grader of MT Outputs in Colloquial Style by Using Multiple Edit Distances." Transactions of the Japanese Society for Artificial Intelligence 20 (2005): 139–48. http://dx.doi.org/10.1527/tjsai.20.139.

30

Fisichella, Marco, and Andrea Ceroni. "Event Detection in Wikipedia Edit History Improved by Documents Web Based Automatic Assessment." Big Data and Cognitive Computing 5, no. 3 (2021): 34. http://dx.doi.org/10.3390/bdcc5030034.

Abstract:
A majority of current work in events extraction assumes the static nature of relationships in constant expertise knowledge bases. However, in collaborative environments, such as Wikipedia, information and systems are extraordinarily dynamic over time. In this work, we introduce a new approach for extracting complex structures of events from Wikipedia. We advocate a new model to represent events by engaging more than one entities that are generalizable to an arbitrary language. The evolution of an event is captured successfully primarily based on analyzing the user edits records in Wikipedia. Our work presents a basis for a singular class of evolution-aware entity-primarily based enrichment algorithms and will extensively increase the quality of entity accessibility and temporal retrieval for Wikipedia. We formalize this problem case and conduct comprehensive experiments on a real dataset of 1.8 million Wikipedia articles in order to show the effectiveness of our proposed answer. Furthermore, we suggest a new event validation automatic method relying on a supervised model to predict the presence of events in a non-annotated corpus. As the extra document source for event validation, we chose the Web due to its ease of accessibility and wide event coverage. Our outcomes display that we are capable of acquiring 70% precision evaluated on a manually annotated corpus. Ultimately, we conduct a comparison of our strategy versus the Current Event Portal of Wikipedia and discover that our proposed WikipEvent along with the usage of Co-References technique may be utilized to provide new and more data on events.
31

Lybarger, Kevin, Mari Ostendorf, Eve Riskin, Thomas Payne, Andrew White, and Meliha Yetisgen. "Asynchronous Speech Recognition Affects Physician Editing of Notes." Applied Clinical Informatics 09, no. 04 (2018): 782–90. http://dx.doi.org/10.1055/s-0038-1673417.

Abstract:
Objective Clinician progress notes are an important record for care and communication, but there is a perception that electronic notes take too long to write and may not accurately reflect the patient encounter, threatening quality of care. Automatic speech recognition (ASR) has the potential to improve clinical documentation process; however, ASR inaccuracy and editing time are barriers to wider use. We hypothesized that automatic text processing technologies could decrease editing time and improve note quality. To inform the development of these technologies, we studied how physicians create clinical notes using ASR and analyzed note content that is revised or added during asynchronous editing. Materials and Methods We analyzed a corpus of 649 dictated clinical notes from 9 physicians. Notes were dictated during rounds to portable devices, automatically transcribed, and edited later at the physician's convenience. Comparing ASR transcripts and the final edited notes, we identified the word sequences edited by physicians and categorized the edits by length and content. Results We found that 40% of the words in the final notes were added by physicians while editing: 6% corresponded to short edits associated with error correction and format changes, and 34% were associated with longer edits. Short error correction edits that affect note accuracy are estimated to be less than 3% of the words in the dictated notes. Longer edits primarily involved insertion of material associated with clinical data or assessment and plans. The longer edits improve note completeness; some could be handled with verbalized commands in dictation. Conclusion Process interventions to reduce ASR documentation burden, whether related to technology or the dictation/editing workflow, should apply a portfolio of solutions to address all categories of required edits. Improved processes could reduce an important barrier to broader use of ASR by clinicians and improve note quality.
32

Eriksson, Henrik, and Mark A. Musen. "Conceptual models for automatic generation of knowledge-acquisition tools." Knowledge Engineering Review 8, no. 1 (1993): 27–47. http://dx.doi.org/10.1017/s0269888900000059.

Abstract:
Interactive knowledge-acquisition (KA) programs allow users to enter relevant domain knowledge according to a model predefined by the tool developers. KA tools are designed to provide conceptual models of the knowledge to their users. Many different classes of models are possible, resulting in different categories of tools. Whenever it is possible to describe KA tools according to explicit conceptual models, it is also possible to edit the models and to instantiate new KA tools automatically for specialized purposes. Several meta-tools that address this task have been implemented. Meta-tools provide developers of domain-specific KA tools with generic design models, or meta-views, of the emerging KA tools. The same KA tool can be specified according to several alternative meta-views.
33

Vlasova, S. A. "Automated system for supporting a database of scientific works of academic institution’s employees." Information resources of Russia, no. 5 (2020): 29–31. http://dx.doi.org/10.51218/0204-3653-2020-5-29-31.

Abstract:
The article describes the automated system for creating and maintaining a database of scientific works of academic institution’s employees, developed by specialists of the Joint Supercomputer Center RAS. The system’s information base contains data about the authors, related organizations (places of their work), publications at the analytical and monographic levels, sources (publications at the summary level - journals, collections), reports made at scientific conferences, symposia, seminars. The system has an administrative module designed to enter and edit data. The user’s module of the system is a special search engine that searches for information about publications, sources, reports, events, authors by processing search queries.
34

Liu, Cong, Fabricio Sampaio Peres Kury, Ziran Li, Casey Ta, Kai Wang, and Chunhua Weng. "Doc2Hpo: a web application for efficient and accurate HPO concept curation." Nucleic Acids Research 47, W1 (2019): W566–W570. http://dx.doi.org/10.1093/nar/gkz386.

Abstract:
We present Doc2Hpo, an interactive web application that enables interactive and efficient phenotype concept curation from clinical text with automated concept normalization using the Human Phenotype Ontology (HPO). Users can edit the HPO concepts automatically extracted by Doc2Hpo in real time, and export the extracted HPO concepts into gene prioritization tools. Our evaluation showed that Doc2Hpo significantly reduced manual effort while achieving high accuracy in HPO concept curation. Doc2Hpo is freely available at https://impact2.dbmi.columbia.edu/doc2hpo/. The source code is available at https://github.com/stormliucong/doc2hpo for local installation for protected health data.
35

Jiang, Zijian, Ye Wang, Hao Zhong, and Na Meng. "Automatic method change suggestion to complement multi-entity edits." Journal of Systems and Software 159 (January 2020): 110441. http://dx.doi.org/10.1016/j.jss.2019.110441.

36

Vangipuram, Gautam, Aaron Y. Lee, Kasra A. Rezaei, et al. "CAPTCHA as a Visual Performance Metric in Active Macular Disease." Journal of Ophthalmology 2019 (June 9, 2019): 1–6. http://dx.doi.org/10.1155/2019/6710754.

Abstract:
Purpose. CAPTCHA (completely automated public turing test to tell computers and humans apart) was designed as a spam prevention test. In patients with visual impairment, completion of this task has been assumed to be difficult; but to date, no study has proven this to be true. As visual function is not well measured by Snellen visual acuity (VA) alone, we theorized that CAPTCHA performance may provide additional information on macular disease-related visual dysfunction. Methods. This was designed as a pilot study. Active disease was defined as the presence of either intraretinal fluid (IRF) or subretinal fluid (SRF) on spectral-domain optical coherence tomography. CAPTCHA performance was tested using 10 prompts. In addition, near and distance VA, contrast sensitivity, and reading speed were measured. Visual acuity matched pseudophakic patients were used as controls. Primary outcome measures were average edit distance and percent of correct responses. Results. 70 patients were recruited: 33 with active macular disease and 37 control subjects. Contrast sensitivity was found to be significantly different in both the IRF (p<0.01) and SRF groups (p<0.01). No significant difference was found comparing the odds ratio of average edit distance of active disease (IRF, SRF) vs. control (OR 1.09 (0.62, 1.90), 1.10 (0.58, 2.05), p=0.77, 0.77) or percent correct responses of active disease vs. control (OR 0.98 (0.96, 1.01), 1.09 (0.58, 2.05), p=0.22,0.51) in CAPTCHA testing. The goodness of fit using logistic regression analysis for the dependent variables of either IRF or SRF did not improve accounting for average edit distance (p=0.49, p=0.27) or percent correct (p=0.89, p=0.61). Conclusions. Distance VA and contrast sensitivity are positively correlated with the presence of IRF and SRF in active macular disease. CAPTCHA performance did not appear to be a significant predictor of either IRF or SRF in our pilot study.
37

Jia, Danbing, Dongyu Zhang, and Naimin Li. "Pulse Waveform Classification Using Support Vector Machine with Gaussian Time Warp Edit Distance Kernel." Computational and Mathematical Methods in Medicine 2014 (2014): 1–10. http://dx.doi.org/10.1155/2014/947254.

Abstract:
Advances in signal processing techniques have provided effective tools for quantitative research in traditional Chinese pulse diagnosis. However, because of the inevitable intraclass variations of pulse patterns, the automatic classification of pulse waveforms has remained a difficult problem. Utilizing the new elastic metric, that is, the time warp edit distance (TWED), this paper proposes to address the problem under the support vector machines (SVM) framework by using the Gaussian TWED kernel function. The proposed method, SVM with GTWED kernel (GTWED-SVM), is evaluated on a dataset including 2470 pulse waveforms of five distinct patterns. The experimental results show that the proposed method achieves a lower average error rate than current pulse waveform classification methods.
38

Kuang, Wei Hua, and Fa Piao Li. "Rapid Establishment of 3D Intricate Design Platform Based on Modular Technology by UG." Key Engineering Materials 460-461 (January 2011): 36–39. http://dx.doi.org/10.4028/www.scientific.net/kem.460-461.36.

Abstract:
In order to solve the problem of 3D intricate product design and mold development, an approach based on UG NX was proposed. The design platform and modular technology of UG MoldWizard were introduced. How to add and edit attributes of the standard part library was explained by using sankyo lifter as example. As an application, a lamp set and its injection mold were rapidly designed by UG MoldWizard. How to patch surfaces, define parting lines and how to extract mold core and mold cavity were illustrated step by step. It proved that UG MoldWizard efficiently promoted work and integrated complex elements of design technology into automated sequences.
39

Kroon, Martin, Sjef Barbiers, Jan Odijk, and Stéphanie van der Pas. "A filter for syntactically incomparable parallel sentences." Linguistics in the Netherlands 36 (November 5, 2019): 147–61. http://dx.doi.org/10.1075/avt.00029.kro.

Abstract:
Massive automatic comparison of languages in parallel corpora will greatly speed up and enhance comparative syntactic research. Automatically extracting and mining syntactic differences from parallel corpora requires a pre-processing step that filters out sentence pairs that cannot be compared syntactically, for example because they involve “free” translations. In this paper we explore four possible filters: the Damerau-Levenshtein distance between POS-tags, the sentence-length ratio, the graph-edit distance between dependency parses, and a combination of the three in a logistic regression model. Results suggest that the dependency-parse filter is the most stable throughout language pairs, while the combination filter achieves the best results.
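One of the filters described above relies on the Damerau-Levenshtein distance between POS-tag sequences. A small sketch of the restricted (optimal string alignment) variant over token lists, with made-up tags, might look like this; it illustrates the metric only, not the authors' pipeline:

```python
def damerau_levenshtein(s, t):
    """Restricted Damerau-Levenshtein (optimal string alignment) distance:
    insertions, deletions, substitutions, and adjacent transpositions."""
    m, n = len(s), len(t)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if s[i - 1] == t[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
            if i > 1 and j > 1 and s[i - 1] == t[j - 2] and s[i - 2] == t[j - 1]:
                d[i][j] = min(d[i][j], d[i - 2][j - 2] + 1)  # transposition
    return d[m][n]

# Hypothetical POS-tag sequences for a sentence pair.
print(damerau_levenshtein(["DET", "NOUN", "VERB"], ["DET", "VERB", "NOUN"]))  # 1
```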
40

Ames, Arlo L., Elaine M. Hinman-Sweeney, and John M. Sizemore. "Automated Generation of Weld Path Trajectories." Journal of Ship Production 19, no. 03 (2003): 147–50. http://dx.doi.org/10.5957/jsp.2003.19.3.147.

Abstract:
AUTOmated GENeration of Control Programs for Robotic Welding of Ship Structure (AUTOGEN) is software that automates the planning and compiling of control programs for robotic welding of ship structure. The software works by evaluating computer representations of the ship design and the manufacturing plan. Based on this evaluation, AUTOGEN internally identifies and appropriately characterizes each weld. Then it constructs the robot motions necessary to accomplish the welds and determines for each the correct assignment of process control values. AUTOGEN generates these robot control programs completely without manual intervention or edits except to correct wrong or missing input data. Most ship structure assemblies are unique or at best manufactured only a few times. Accordingly, the high cost inherent in all previous methods of preparing complex control programs has made robot welding of ship structures economically unattractive to the U.S. shipbuilding industry. AUTOGEN eliminates the cost of creating robot control programs. With programming costs eliminated, capitalization of robots to weld ship structures becomes economically viable. Robot welding of ship structures will result in reduced ship costs, uniform product quality, and enhanced worker safety. Sandia National Laboratories and Northrop Grumman Ship Systems worked with the National Shipbuilding Research Program to develop a means of automated path and process generation for robotic welding. This effort resulted in the AUTOGEN program, which has successfully demonstrated automated path generation and robot control. Although the current implementation of AUTOGEN is optimized for welding applications, the path and process planning capability has applicability to a number of industrial applications, including painting, riveting, and adhesive delivery.
41

Saha, Sourav, Sahibjot Kaur, Jayanta Basak, and Priya Ranjan Sinha Mahapatra. "A Computer Vision Framework for Automated Shape Retrieval." American Journal of Advanced Computing 1, no. 1 (2020): 1–15. http://dx.doi.org/10.15864/ajac.1108.

Abstract:
With the increasing number of images generated every day, textual annotation of images for image mining becomes impractical and inefficient. Thus, computer vision based image retrieval has received considerable interest in recent years. One of the fundamental characteristics of any image representation of an object is its shape, which plays a vital role in recognizing the object at a primitive level. Keeping this view as the primary motivational focus, we propose a shape-descriptive framework using a multilevel tree structured representation called Hierarchical Convex Polygonal Decomposition (HCPD). Such a framework explores different degrees of convexity of an object’s contour-segments in the course of its construction. The convex and non-convex segments of an object’s contour are discovered at every level of the HCPD-tree generation by repetitive convex-polygonal approximation of contour segments. We have also presented a novel shape-string-encoding scheme for representing the HCPD-tree, which allows us to use the popular concept of string-edit distance to compute a shape similarity score between two objects. The proposed framework, when deployed for the similar-shape retrieval task, demonstrates reasonably good performance in comparison with other popular shape-retrieval algorithms.
42

LYRAS, DIMITRIOS P., KYRIAKOS N. SGARBAS, and NIKOLAOS D. FAKOTAKIS. "APPLYING SIMILARITY MEASURES FOR AUTOMATIC LEMMATIZATION: A CASE STUDY FOR MODERN GREEK AND ENGLISH." International Journal on Artificial Intelligence Tools 17, no. 05 (2008): 1043–64. http://dx.doi.org/10.1142/s021821300800428x.

Abstract:
This paper addresses the problem of automatic induction of the normalized form (lemma) of regular and mildly irregular words with no direct supervision using language-independent algorithms. More specifically, two string distance metric models (i.e. the Levenshtein Edit Distance algorithm and the Dice Coefficient similarity measure) were employed in order to deal with the automatic word lemmatization task by combining two alignment models based on the string similarity and the most frequent inflectional suffixes. The performance of the proposed model has been evaluated quantitatively and qualitatively. Experiments were performed for the Modern Greek and English languages and the results, which are set within the state-of-the-art, have showed that the proposed model is robust (for a variety of languages) and computationally efficient. The proposed model may be useful as a pre-processing tool to various language engineering and text mining applications such as spell-checkers, electronic dictionaries, morphological analyzers etc.
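The second similarity measure named above, the Dice coefficient, is often computed over character bigrams; a brief sketch of that common formulation (which may differ in detail from the authors' implementation) is given here:

```python
from collections import Counter

def dice_coefficient(a: str, b: str) -> float:
    """Dice similarity over character bigrams: 2·|X ∩ Y| / (|X| + |Y|)."""
    bigrams = lambda s: [s[i:i + 2] for i in range(len(s) - 1)]
    xa, xb = bigrams(a), bigrams(b)
    if not xa and not xb:
        return 1.0  # strings too short for bigrams are treated as identical here
    shared = sum((Counter(xa) & Counter(xb)).values())  # shared bigrams, with multiplicity
    return 2.0 * shared / (len(xa) + len(xb))

print(round(dice_coefficient("playing", "played"), 3))  # 0.545 (shared: 'pl', 'la', 'ay')
```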
43

LIN, HONG. "TOWARD AUTOMATED GENERATION OF CHINESE CLASSIC POETRY." New Mathematics and Natural Computation 09, no. 02 (2013): 153–81. http://dx.doi.org/10.1142/s1793005713400024.

Abstract:
The forms of Chinese classic poetry have been developed through thousands of years of history and are still current in today's poetry society. A re-classification of the rhyming words, however, is necessary to keep the classic poetry up to date in the new settings of modern Chinese language. To ease the transition process, computing technology is used to help the readers as well as poetry writers to check the compliance of poems in accordance with the forms and to compose poems without the effort to learn the old grouping of rhyming words. A piece of software has been developed in a faculty/student research project at the University of Houston-Downtown to verify this idea. This software, called Chinese classic poetry wizard, provides the functionality of checking metrical forms and rhyming schemes. It also allows users to edit rhyme dictionaries and metrical forms. The new rhyming scheme proposed in this paper should rationalize the composition rules of classic Chinese poetry in the modern society; and the poem composition wizard will provide a handy tool for poem composition. This work will help revive Chinese classic poetry in modern society and, in a sequel, contribute to the current campaign of advocating Chinese traditional teachings.
44

Al-Aynati, Maamoun M., and Katherine A. Chorneyko. "Comparison of Voice-Automated Transcription and Human Transcription in Generating Pathology Reports." Archives of Pathology & Laboratory Medicine 127, no. 6 (2003): 721–25. http://dx.doi.org/10.5858/2003-127-721-covtah.

Abstract:
Abstract Context.—Software that can convert spoken words into written text has been available since the early 1980s. Early continuous speech systems were developed in 1994, with the latest commercially available editions having a claimed accuracy of up to 98% of speech recognition at natural speech rates. Objectives.—To evaluate the efficacy of one commercially available voice-recognition software system with pathology vocabulary in generating pathology reports and to compare this with human transcription. To draw cost analysis conclusions regarding human versus computer-based transcription. Design.—Two hundred six routine pathology reports from the surgical pathology material handled at St Joseph's Healthcare, Hamilton, Ontario, were generated simultaneously using computer-based transcription and human transcription. The following hardware and software were used: a desktop 450-MHz Intel Pentium III processor with 192 MB of RAM, a speech-quality sound card (Sound Blaster), noise-canceling headset microphone, and IBM ViaVoice Pro version 8 with pathology vocabulary support (Voice Automated, Huntington Beach, Calif). The cost of the hardware and software used was approximately Can $2250. Results.—A total of 23 458 words were transcribed using both methods with a mean of 114 words per report. The mean accuracy rate was 93.6% (range, 87.4%–96%) using the computer software, compared to a mean accuracy of 99.6% (range, 99.4%–99.8%) for human transcription (P < .001). Time needed to edit documents by the primary evaluator (M.A.) using the computer was on average twice that needed for editing the documents produced by human transcriptionists (range, 1.4–3.5 times). The extra time needed to edit documents was 67 minutes per week (13 minutes per day). Conclusions.—Computer-based continuous speech-recognition systems in pathology can be successfully used in pathology practice even during the handling of gross pathology specimens. The relatively low accuracy rate of this voice-recognition software with resultant increased editing burden on pathologists may not encourage its application on a wide scale in pathology departments with sufficient human transcription services, despite significant potential financial savings. However, computer-based transcription represents an attractive and relatively inexpensive alternative to human transcription in departments where there is a shortage of transcription services, and will no doubt become more commonly used in pathology departments in the future.
45

Calvo-Zaragoza, Jorge, Jose Oncina, and Colin de la Higuera. "Computing the Expected Edit Distance from a String to a Probabilistic Finite-State Automaton." International Journal of Foundations of Computer Science 28, no. 05 (2017): 603–21. http://dx.doi.org/10.1142/s0129054117400093.

Abstract:
In a number of fields, it is necessary to compare a witness string with a distribution. One possibility is to compute the probability of the string for that distribution. Another, giving a more global view, is to compute the expected edit distance from a string randomly drawn to the witness string. This number is often used to measure the performance of a prediction, the goal then being to return the median string, or the string with smallest expected distance. To be able to measure this, computing the distance between a hypothesis and that distribution is necessary. This paper proposes two solutions for computing this value, when the distribution is defined with a probabilistic finite state automaton. The first is exact but has a cost which can be exponential in the length of the input string, whereas the second is a fully polynomial-time randomized schema.
46

Previtali, M., L. Barazzetti, R. Brumana, and M. Scaioni. "Towards automatic indoor reconstruction of cluttered building rooms from point clouds." ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences II-5 (May 28, 2014): 281–88. http://dx.doi.org/10.5194/isprsannals-ii-5-281-2014.

Abstract:
Terrestrial laser scanning is increasingly used in architecture and building engineering for as-built modelling of large and medium size civil structures. However, raw point clouds derived from laser scanning survey are generally not directly ready for generation of such models. A manual modelling phase has to be undertaken to edit and complete 3D models, which may cover indoor or outdoor environments. This paper presents an automated procedure to turn raw point clouds into semantically-enriched models of building interiors. The developed method mainly copes with a geometric complexity typical of indoor scenes with prevalence of planar surfaces, such as walls, floors and ceilings. A characteristic aspect of indoor modelling is the large amount of clutter and occlusion that may characterize any point clouds. For this reason the developed reconstruction pipeline was designed to recover and complete missing parts in a plausible way. The accuracy of the presented method was evaluated against traditional manual modelling and showed comparable results.
47

MITKOV, RUSLAN, LE AN HA, and NIKIFOROS KARAMANIS. "A computer-aided environment for generating multiple-choice test items." Natural Language Engineering 12, no. 2 (2006): 177–94. http://dx.doi.org/10.1017/s1351324906004177.

Abstract:
This paper describes a novel computer-aided procedure for generating multiple-choice test items from electronic documents. In addition to employing various Natural Language Processing techniques, including shallow parsing, automatic term extraction, sentence transformation and computing of semantic distance, the system makes use of language resources such as corpora and ontologies. It identifies important concepts in the text and generates questions about these concepts as well as multiple-choice distractors, offering the user the option to post-edit the test items by means of a user-friendly interface. In assisting test developers to produce items in a fast and expedient manner without compromising quality, the tool saves both time and production costs.
48

Bokhove, Christian, and Christopher Downey. "Automated generation of ‘good enough’ transcripts as a first step to transcription of audio-recorded data." Methodological Innovations 11, no. 2 (2018): 205979911879074. http://dx.doi.org/10.1177/2059799118790743.

Abstract:
In the last decade, automated captioning services have appeared in mainstream technology use. Until now, the focus of these services have been on the technical aspects, supporting pupils with special educational needs and supporting teaching and learning of second language students. Only limited explorations have been attempted regarding its use for research purposes: transcription of audio recordings. This article presents a proof-of-concept exploration utilising three examples of automated transcription of audio recordings from different contexts; an interview, a public hearing and a classroom setting, and compares them against ‘manual’ transcription techniques in each case. It begins with an overview of literature on automated captioning and the use of voice recognition tools for the purposes of transcription. An account is provided of the specific processes and tools used for the generation of the automated captions followed by some basic processing of the captions to produce automated transcripts. Originality checking software was used to determine a percentage match between the automated transcript and a manual version as a basic measure of the potential usability of each of the automated transcripts. Some analysis of the more common and persistent mismatches observed between automated and manual transcripts is provided, revealing that the majority of mismatches would be easily identified and rectified in a review and edit of the automated transcript. Finally, some of the challenges and limitations of the approach are considered. These limitations notwithstanding, we conclude that this form of automated transcription provides ‘good enough’ transcription for first versions of transcripts. The time and cost advantages of this could be considerable, even for the production of summary or gisted transcripts.
49

Mednis, Martins, and Maike K. Aurich. "Application of string similarity ratio and edit distance in automatic metabolite reconciliation comparing reconstructions and models." Biosystems and Information technology 1, no. 1 (2012): 14–18. http://dx.doi.org/10.11592/bit.121102.

50

Takahashi, Masazumi, Fumihiko Matsuda, Nino Margetic, and Mark Lathrop. "Automated Identification of Single Nucleotide Polymorphisms from Sequencing Data." Journal of Bioinformatics and Computational Biology 01, no. 02 (2003): 253–65. http://dx.doi.org/10.1142/s021972000300006x.

Abstract:
A single nucleotide polymorphism (SNP) is a difference in DNA sequence between individuals and provides abundant information about genetic variation. Large-scale discovery of high-frequency SNPs is being undertaken using various methods. However, the publicly available SNP data sometimes need to be verified. If only a particular gene locus is of concern, locus-specific polymerase chain reaction amplification may be useful. A problem with this method is that the secondary peak has to be measured. We have analyzed trace data from conventional sequencing equipment and found an applicable rule to discern SNPs from noise. The rule is applied to multiply aligned sequences with a trace, and the peak heights of the traces are compared between samples. We have developed software that integrates this function to automatically identify SNPs. The software works accurately for high-quality sequences and can also detect SNPs in low-quality sequences. Further, it can determine allele frequency, display this information as a bar graph, and assign corresponding nucleotide combinations. It is also designed for a person to verify and edit sequences easily on the screen. It is very useful for identifying de novo SNPs in a DNA fragment of interest.