Academic literature on the topic 'Edit Automata'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Edit Automata.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Edit Automata"

1

Mohri, Mehryar. "Edit-Distance of Weighted Automata: General Definitions and Algorithms." International Journal of Foundations of Computer Science 14, no. 6 (2003): 957–82. http://dx.doi.org/10.1142/s0129054103002114.

Abstract:
The problem of computing the similarity between two sequences arises in many areas such as computational biology and natural language processing. A common measure of the similarity of two strings is their edit-distance, that is, the minimal cost of a series of symbol insertions, deletions, or substitutions transforming one string into the other. In several applications such as speech recognition or computational biology, the objects to compare are distributions over strings, i.e., sets of strings representing a range of alternative hypotheses with their associated weights or probabilities. We define the edit-distance of two distributions over strings and present algorithms for computing it when these distributions are given by automata. In the particular case where two sets of strings are given by unweighted automata, their edit-distance can be computed using the general algorithm of composition of weighted transducers combined with a single-source shortest-paths algorithm. In the general case, we show that general weighted automata algorithms over the appropriate semirings can be used to compute the edit-distance of two weighted automata exactly. These include classical algorithms such as the composition and ε-removal of weighted transducers, and a new and simple synchronization algorithm for weighted transducers which, combined with ε-removal, can be used to normalize weighted transducers with bounded delays. Our algorithm for computing the edit-distance of weighted automata can be used to improve the word accuracy of automatic speech recognition systems. It can also be extended to provide an edit-distance automaton useful for re-scoring and other post-processing purposes in the context of large-vocabulary speech recognition.
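For readers unfamiliar with the underlying metric: for two plain strings, the edit-distance this entry generalizes is the classical Levenshtein distance, computable by dynamic programming. A minimal sketch follows (this is the textbook string algorithm, not the paper's weighted-automata method):

```python
def edit_distance(s, t):
    """Classical Levenshtein distance: the minimal number of insertions,
    deletions, or substitutions turning s into t."""
    m, n = len(s), len(t)
    # dp[j] holds the distance between the current prefix of s and t[:j].
    dp = list(range(n + 1))
    for i in range(1, m + 1):
        prev, dp[0] = dp[0], i          # prev carries the diagonal value
        for j in range(1, n + 1):
            prev, dp[j] = dp[j], min(
                dp[j] + 1,                      # delete s[i-1]
                dp[j - 1] + 1,                  # insert t[j-1]
                prev + (s[i - 1] != t[j - 1]),  # substitute or match
            )
    return dp[n]
```

For example, `edit_distance("kitten", "sitting")` is 3 (two substitutions and one insertion).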
2

Beauquier, Danièle, Joëlle Cohen, and Ruggero Lanotte. "Security Policies Enforcement Using Finite Edit Automata." Electronic Notes in Theoretical Computer Science 229, no. 3 (2009): 19–35. http://dx.doi.org/10.1016/j.entcs.2009.06.037.

3

Okhotin, Alexander, and Kai Salomaa. "Edit distance neighbourhoods of input-driven pushdown automata." Theoretical Computer Science 777 (July 2019): 417–30. http://dx.doi.org/10.1016/j.tcs.2019.03.005.

4

Ligatti, Jay, Lujo Bauer, and David Walker. "Edit automata: enforcement mechanisms for run-time security policies." International Journal of Information Security 4, no. 1-2 (2005): 2–16. http://dx.doi.org/10.1007/s10207-004-0046-8.

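To illustrate the model named in this title: an edit automaton monitors a stream of program actions and may emit, suppress, or insert actions to keep the run within a security policy. The sketch below is purely illustrative; the policy and action names are invented for the example and do not come from the paper:

```python
# Minimal illustrative edit automaton. Invented policy: a "write" is only
# allowed after "login"; unauthorized writes are suppressed, and a missing
# "logout" is inserted when the stream ends.

def edit_automaton(actions):
    output, logged_in = [], False
    for a in actions:
        if a == "login":
            logged_in = True
            output.append(a)          # emit
        elif a == "write":
            if logged_in:
                output.append(a)      # emit
            # else: suppress the unauthorized write
        elif a == "logout":
            if logged_in:
                logged_in = False
                output.append(a)      # emit
        else:
            output.append(a)          # emit unrelated actions unchanged
    if logged_in:
        output.append("logout")       # insert to restore the policy
    return output
```

On the stream `["write", "login", "write"]` the monitor suppresses the first `write` and inserts a final `logout`, yielding `["login", "write", "logout"]`.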
5

Beauquier, Danièle, Joëlle Cohen, and Ruggero Lanotte. "Security policies enforcement using finite and pushdown edit automata." International Journal of Information Security 12, no. 4 (2013): 319–36. http://dx.doi.org/10.1007/s10207-013-0195-8.

6

Allauzen, Cyril, and Mehryar Mohri. "N-Way Composition of Weighted Finite-State Transducers." International Journal of Foundations of Computer Science 20, no. 4 (2009): 613–27. http://dx.doi.org/10.1142/s0129054109006772.

Abstract:
Composition of weighted transducers is a fundamental algorithm used in many applications, including computing complex edit-distances between automata, computing string kernels in machine learning, and combining the components of a speech recognition, speech synthesis, or information extraction system. We present a generalization of the composition of weighted transducers, n-way composition, which is dramatically faster in practice than the standard composition algorithm when combining more than two transducers. The worst-case complexity of our algorithm for composing three transducers T1, T2, and T3, resulting in T, is O(|T|_Q min(d(T1)d(T3), d(T2)) + |T|_E), where |·|_Q denotes the number of states, |·|_E the number of transitions, and d(·) the maximum out-degree. As in regular composition, the use of perfect hashing requires a pre-processing step with linear-time expected complexity in the size of the input transducers. In many cases, this approach significantly improves on the complexity of standard composition. Our algorithm also leads to dramatically faster composition in practice. Furthermore, standard composition can be obtained as a special case of our algorithm. We report the results of several experiments demonstrating this improvement. These theoretical and empirical improvements significantly enhance performance in the applications already mentioned.
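Standard (two-way) composition, the special case this paper generalizes, can be sketched as a product construction. The sketch below assumes epsilon-free, unweighted transducers given as transition sets; the weights, epsilon filters, and n-way machinery of the paper are omitted:

```python
def compose(t1, t2):
    """Pairwise composition of two epsilon-free finite transducers, each
    given as a set of transitions (src, in_sym, out_sym, dst).
    Returns the transitions of T1 ∘ T2 over paired states."""
    result = set()
    for (p, a, b, p2) in t1:
        for (q, b2, c, q2) in t2:
            if b == b2:  # the output of T1 feeds the input of T2
                result.add(((p, q), a, c, (p2, q2)))
    return result
```

For instance, composing a transducer that rewrites `a` to `x` with one that rewrites `x` to `y` yields a transducer rewriting `a` directly to `y`.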
7

Islam, Md Rakibul, and Minhaz F. Zibran. "What changes in where?" ACM SIGAPP Applied Computing Review 20, no. 4 (2021): 18–34. http://dx.doi.org/10.1145/3447332.3447334.

Abstract:
A deep understanding of the common patterns of bug-fixing changes is useful in several ways: (a) such knowledge can help developers in proactively avoiding coding patterns that lead to bugs and (b) bug-fixing patterns are exploited in devising techniques for automatic bug localization and program repair. This work includes an in-depth quantitative and qualitative analysis over 4,653 buggy revisions of five software systems. Our study identifies 38 bug-fixing edit patterns and discovers 37 new patterns of nested code structures, which frequently host the bug-fixing edits. While some of the edit patterns were reported in earlier studies, these nesting patterns are new and were never targeted before.
8

Daalmans, Jacco. "Constraint Simplification for Data Editing of Numerical Variables." Journal of Official Statistics 34, no. 1 (2018): 27–39. http://dx.doi.org/10.1515/jos-2018-0002.

Abstract:
Data editing is the process of checking and correcting data. In practice, these processes are often automated. A large number of constraints need to be handled in many applications. This article shows that data editing can benefit from automated constraint simplification techniques. Performance can be improved, which broadens the scope of applicability of automatic data editing. Flaws in edit rule formulation may be detected, which improves the quality of automatically edited data.
9

McCarroll, Rachel E., Beth M. Beadle, Peter A. Balter, et al. "Retrospective Validation and Clinical Implementation of Automated Contouring of Organs at Risk in the Head and Neck: A Step Toward Automated Radiation Treatment Planning for Low- and Middle-Income Countries." Journal of Global Oncology, no. 4 (December 2018): 1–11. http://dx.doi.org/10.1200/jgo.18.00055.

Abstract:
Purpose: We assessed automated contouring of normal structures for patients with head-and-neck cancer (HNC) using a multiatlas deformable-image-registration algorithm, to better provide a fully automated radiation treatment planning solution for low- and middle-income countries, provide quantitative analysis, and determine acceptability worldwide. Methods: Autocontours of eight normal structures (brain, brainstem, cochleae, eyes, lungs, mandible, parotid glands, and spinal cord) from 128 patients with HNC were retrospectively scored by a dedicated HNC radiation oncologist. Contours from a 10-patient subset were evaluated by five additional radiation oncologists from international partner institutions, and interphysician variability was assessed. Quantitative agreement of autocontours with independently physician-drawn structures was assessed using the Dice similarity coefficient and mean surface and Hausdorff distances. Automated contouring was then implemented clinically and has been used for 166 patients, and contours were quantitatively compared with the physician-edited autocontours using the same metrics. Results: Retrospectively, 87% of normal structure contours were rated as acceptable for use in dose-volume-histogram-based planning without edit. Upon clinical implementation, 50% of contours were not edited for use in treatment planning. The mean (± standard deviation) Dice similarity coefficients of autocontours compared with physician-edited autocontours for the parotid glands (0.92 ± 0.10), brainstem (0.95 ± 0.09), and spinal cord (0.92 ± 0.12) indicate that only minor edits were performed. The average mean surface and Hausdorff distances for all structures were less than 0.15 mm and 1.8 mm, respectively. Conclusion: Automated contouring of normal structures generates reliable contours that require only minimal editing, as judged by retrospective ratings from multiple international centers and clinical integration. Autocontours are acceptable for treatment planning with no or, at most, minor edits, suggesting that automated contouring is feasible for clinical use and in the ongoing development of automated radiation treatment planning algorithms.
10

Cui, Jiang Tao, and Guo Qiang Shen. "Research of CAPP System Based on the Solid Edge." Advanced Materials Research 765-767 (September 2013): 167–70. http://dx.doi.org/10.4028/www.scientific.net/amr.765-767.167.

Abstract:
A three-dimensional CAPP prototype is developed on the Solid Edge platform using the secondary-development technology of VB. The system supports selection of the part's machining features, editing of process information, and selection of processes. It outputs process cards semi-automatically or automatically through the process-editing and process-card-output modules, supported by the process resource management module.

Dissertations / Theses on the topic "Edit Automata"

1

Riveros Jaeger, Cristian. "Repairing strings and trees." Thesis, University of Oxford, 2013. http://ora.ox.ac.uk/objects/uuid:012d384f-d1d0-471b-ae6e-bbf337892680.

Abstract:
What do you do if a computational object fails a specification? An obvious approach is to repair it, namely, to modify the object minimally to get something that satisfies the constraints. In this thesis we study foundational problems of repairing regular specifications over strings and trees. Given two regular specifications R and T, we aim to understand how difficult it is to transform an object satisfying R into an object satisfying T. The setting is motivated by viewing R as a restriction (a constraint that the input object is guaranteed to satisfy) and T as a target (a constraint that we want to enforce). We first study which pairs of restriction and target specifications can be repaired with a "small" number of changes. We formalize this as the bounded repair problem: to determine whether one can repair each object satisfying R into T with a uniform number of edits. We provide effective characterizations of the bounded repair problem for regular specifications over strings and trees. These characterizations are based on a good understanding of the cyclic behaviour of finite automata. By exploiting these characterizations, we give optimal algorithms to decide whether two specifications are bounded repairable or not. We also consider the impact of limitations on the editing process: what happens when we require the repair to be done sequentially over serialized objects. We study the bounded repair problem over strings and trees restricted to this streaming setting and show that this variant can be characterized in terms of finite games. Furthermore, we use this characterization to decide whether one can repair a pair of specifications in a streaming fashion with bounded cost and how to obtain a streaming repair strategy in this case. The previous notion asks for a uniform bound on the number of edits, but having this property is a strong requirement. To overcome this limitation, we study how to calculate the maximum number of edits per character needed to repair any object in R into T. We formalize this as the asymptotic cost: the limit of the number of edits divided by the length of the input in the worst case. Our contribution is an algorithm to compute the asymptotic cost for any pair of regular specifications over strings. We also consider the streaming variant of this cost and show how to compute it by reducing this problem to mean-payoff games.
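The repair problem studied here can be made concrete in its simplest form: the minimum number of edits needed to turn one string into some member of a regular target language. The sketch below runs Dijkstra over (position, DFA-state) pairs; it illustrates the flavor of the problem, not the thesis's bounded-repair characterizations:

```python
from heapq import heappush, heappop

def repair_cost(s, dfa, start, accepting):
    """Minimum number of edits (insert/delete/substitute) turning string s
    into some string accepted by the DFA. dfa maps (state, symbol) -> state
    and must be a total transition function."""
    alphabet = {a for (_, a) in dfa}
    # Node (i, q): i characters of s consumed, DFA sitting in state q.
    dist = {(0, start): 0}
    heap = [(0, 0, start)]
    while heap:
        d, i, q = heappop(heap)
        if dist.get((i, q)) != d:
            continue  # stale heap entry
        if i == len(s) and q in accepting:
            return d
        moves = []
        if i < len(s):
            moves.append((d + 1, i + 1, q))                  # delete s[i]
        for a in alphabet:
            r = dfa[(q, a)]
            moves.append((d + 1, i, r))                      # insert a
            if i < len(s):
                moves.append((d + (s[i] != a), i + 1, r))    # match/substitute
        for nd, ni, nq in moves:
            if nd < dist.get((ni, nq), float("inf")):
                dist[(ni, nq)] = nd
                heappush(heap, (nd, ni, nq))
    return None  # language is empty
```

For the language (ab)* over {a, b}, repairing "aab" costs one edit (insert a "b" to reach "abab", or delete an "a" to reach "ab").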
2

Kumar, Anand. "Efficient and Private Processing of Analytical Queries in Scientific Datasets." Scholar Commons, 2013. http://scholarcommons.usf.edu/etd/4822.

Abstract:
A large amount of data is generated by applications used in basic-science research and development. The size of this data introduces great challenges in storage, analysis, and privacy preservation. This dissertation proposes novel techniques to efficiently analyze the data and to reduce storage space requirements through a data compression technique, while preserving privacy and providing data security. We present an efficient technique to compute an analytical query called the spatial distance histogram (SDH) using spatiotemporal properties of the data. Special spatiotemporal properties present in the data are exploited to process SDH efficiently on the fly. General-purpose graphics processing units (GPGPUs, or just GPUs) are employed to further boost the performance of the algorithm. The size of the data generated in scientific applications poses problems of disk space requirements, input/output (I/O) delays, and data transfer bandwidth requirements. These problems are addressed by applying the proposed compression technique. We also address the issue of preserving privacy and security in scientific data by proposing a security model. The security model monitors user queries input to the database that stores and manages scientific data. Outputs of user queries are also inspected to detect privacy breaches. Privacy policies are enforced by the monitor to allow only those queries and results that satisfy data-owner-specified policies.
3

Santi, Greta. "La nuova frontiera della traduzione: la localizzazione di un sito web." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2020.

Abstract:
The goal of this thesis is to help the farm Rio del Sol expand its market into German-speaking countries. In particular, the company website is translated into German; the site is extremely important for presenting products internationally and for developing e-commerce, a crucial resource, especially in this period marked by the Coronavirus emergency. Now more than ever, many people prefer to shop comfortably from home, so website localization, destined to shape our future, represents the new frontier of translation. The localization of the Rio del Sol website is carried out with the help of DeepL, a machine-translation tool whose effectiveness we attempt to evaluate. Moreover, this work involves multiple skills and is not limited to the mere translation of a few texts: a website such as Rio del Sol's is often organized into numerous menu items containing extra-textual links to newspaper articles or audiovisual material. The thesis therefore also addresses the localization of an article published in Corriere Romagna and of an interview with the company's owner published on YouTube by Italia nel Bicchiere. The first chapter deals with the characteristics of the Rio del Sol website and the particularities of localizing the site itself and the newspaper article; it also describes the use of OmegaT and SDL Trados Studio, needed to create the German-language documents. The second chapter focuses on subtitling the YouTube video, describing the relevant norms and their realization with Subtitle Edit. Finally, the third chapter analyzes the work done by DeepL, assessing whether it is a basic machine-translation tool or an effective aid for post-editing. Academic publications and the experience gained during the project are used to draw valid conclusions.
4

Pedralho, André de Souza. "Extração automática de dados de páginas HTML utilizando alinhamento em dois níveis." Universidade Federal do Amazonas, 2011. http://tede.ufam.edu.br/handle/tede/2897.

Abstract:
There is a huge amount of information on the World Wide Web in pages composed of similar objects; e-commerce sites and on-line catalogs are typical examples of such data repositories. Although this information usually occurs in semi-structured text, it is designed to be interpreted and used by humans, not processed by machines. The identification of these objects in Web pages is performed by external applications called extractors or wrappers. In this work we propose and evaluate an automatic approach to generating wrappers capable of extracting and structuring data records and the values of their attributes. It uses a tree alignment algorithm to find, in a Web page, examples of objects of interest. Our method then generates regular expressions for extracting objects similar to the given examples using a multiple sequence alignment algorithm. In a final step, the method decomposes the objects into text sequences, using the regular expression together with common formats and delimiters, in order to identify the attribute values of the data records. Experiments on a collection of 128 Web pages from different domains demonstrate the feasibility of our extraction method, which is evaluated on the identification of blocks of HTML source code containing data records and on the extraction of records and their attribute values. It reached a precision of 83% and a recall of 80% when extracting attribute values, a gain in precision of 43.37% and in recall of 68.75% compared to similar proposals.
5

Potet, Marion. "Vers l'intégration de post-éditions d'utilisateurs pour améliorer les systèmes de traduction automatiques probabilistes." Phd thesis, Université de Grenoble, 2013. http://tel.archives-ouvertes.fr/tel-00995104.

Abstract:
Existing machine-translation technologies are now viewed as a promising approach to help produce translations efficiently and at reduced cost. However, the current state of the art does not yet allow full automation of the process, and human/machine cooperation remains essential for producing quality results. A common practice is to post-edit the results provided by the system, that is, to manually verify and, where necessary, correct the system's erroneous outputs. This post-editing work performed by users on machine-translation results is a valuable source of data for analyzing and adapting systems. Our work addresses the development of an approach capable of exploiting this user feedback (post-editions) to improve, in turn, machine-translation systems. The experiments exploit a corpus of about 10,000 translation hypotheses from a baseline probabilistic system, post-edited by volunteers through an online platform. The results of the first experiments integrating the post-editions, on the one hand into the translation model and on the other hand through statistical automatic post-editing, allowed us to assess the complexity of the task. A more detailed study of statistical post-editing systems allowed us to evaluate their usability as well as the contributions and limits of the approach. We also show that the collected post-editions can be used successfully to estimate the confidence to be placed in a machine-translation result. Our results show the difficulty, but also the potential, of using user post-editions of machine-translation hypotheses as a source of information for improving the quality of current probabilistic systems.
6

Romano, Regiane Relva. "Os impactos do uso de tecnologia da informação e da identificação e captura automática de dados nos processos operacionais do varejo." reponame:Repositório Institucional do FGV, 2011. http://hdl.handle.net/10438/8895.

Abstract:
This study sought to identify the main IT (Information Technology) and AIDC (Automatic Identification and Data Capture) technologies available to the self-service retail sector, to fill the gap in the literature on the benefits of using new technologies at the point of sale in order to optimize its operation. To this end, the main operational processes of a self-service retail store were studied, with a view to identifying how IT and AIDC technologies could help improve operating results and add value to the business. To analyze its propositions (that IT and AIDC can help reduce back-office process times, reduce the number of operations at the point of sale, prevent losses, reduce inventory costs and times, reduce store staffing, shorten checkout queues, reduce stock-outs, and increase the store's operational efficiency), several worldwide case studies of retail companies that implemented AIDC and IT technologies, mainly RFID, were surveyed to determine the impact of these technologies on their operations; a comprehensive case study was then carried out to understand the real business benefits of these technologies for self-service retail. As a final result, it was possible to identify the changes in the operational processes of self-service retail, as well as the benefits generated in terms of cost, productivity, quality, flexibility, and innovation. The work also highlighted the critical success factors for implementing IT and AIDC in retail: reviewing operational processes and correctly defining the hardware, supplies, software, physical-environment interference, availability of product data/information, people/employees, and business partners/suppliers. More specifically, this study sought to contribute to the field of retail studies and to the use of information technology in Brazil, since the use and impact of new technologies at the point of sale remains little explored academically.
7

Tröger, Ralph. "Supply Chain Event Management – Bedarf, Systemarchitektur und Nutzen aus Perspektive fokaler Unternehmen der Modeindustrie." Doctoral thesis, Universitätsbibliothek Leipzig, 2014. http://nbn-resolving.de/urn:nbn:de:bsz:15-qucosa-155014.

Abstract:
Supply Chain Event Management (SCEM) is a subdiscipline of supply chain management and offers companies a means of optimizing logistics performance and costs by reacting early to critical exceptional events in the value chain. Owing to conditions such as global logistics structures, a high variety of articles, and volatile business relationships, the fashion industry is among the sectors particularly vulnerable to critical disruptive events. Accordingly, after reviewing the essential foundations, this dissertation first examines to what extent there actually is a need for SCEM systems in the fashion industry. Building on this, after presenting existing SCEM architecture concepts, it develops design options for a system architecture based on the design principles of service orientation; in this context, SCEM-relevant business services are also identified. The advantages of a service-oriented design are illustrated in detail using the EPCIS (EPC Information Services) specification. The work is rounded off by a consideration of the benefit potential of SCEM systems. After presenting approaches suitable for determining benefits, the benefits are demonstrated using a practical example and, together with the results of a literature review, consolidated into a summary of SCEM benefits; the additional advantages that a service-oriented architecture offers companies are also examined. The conclusion summarizes the key findings and provides an outlook on both the relevance of the results for mastering future challenges and the starting points they offer for subsequent research.
8

Salehi, Sara. "Automated Biomedical Text Fragmentation In Support Of Biomedical Sentence Fragment Classification." Thesis, 2009. http://hdl.handle.net/1974/5251.

Abstract:
The past decade has seen a tremendous growth in the amount of biomedical literature, specifically in the area of bioinformatics. As a result, biomedical text categorization has become a central task for providing researchers with literature appropriate for their specific information needs. Pan et al. have explored a method that automatically identifies information-bearing sentence fragments within scientific text. Their proposed method aims to automatically classify sentence fragments into certain sets of categories defined to satisfy specific types of information needs. The categories are grouped into five different dimensions known as Focus, Polarity, Certainty, Evidence, and Trend. The reason that fragments are used as the unit of classification is that the class value along each of these dimensions can change mid-sentence. In order to automatically annotate sentence fragments along the five dimensions, automatically breaking sentences into fragments is a necessary step, and the performance of the classifier depends on the sentence fragments. In this study, we investigate the problem of automatic fragmentation of biomedical sentences, which is a fundamental layer in multi-dimensional fragment classification. In addition, we believe that our proposed fragmentation algorithm can be used in other domains such as sentiment analysis. The goal of sentiment analysis is often to classify the polarity (positive or negative) of a given text. Sentiment classification can be conducted at different levels such as the document, sentence, or phrase (fragment) level. Our proposed fragmentation algorithm can be used as a prerequisite for phrase-level sentiment categorization, which aims to automatically capture multiple sentiments within a sentence.

Book chapters on the topic "Edit Automata"

1

Chatterjee, Krishnendu, Thomas A. Henzinger, Rasmus Ibsen-Jensen, and Jan Otop. "Edit Distance for Pushdown Automata." In Automata, Languages, and Programming. Springer Berlin Heidelberg, 2015. http://dx.doi.org/10.1007/978-3-662-47666-6_10.

2

Mohri, Mehryar. "Edit-Distance of Weighted Automata." In Implementation and Application of Automata. Springer Berlin Heidelberg, 2003. http://dx.doi.org/10.1007/3-540-44977-9_1.

3

Andoni, Alexandr, and Robert Krauthgamer. "The Smoothed Complexity of Edit Distance." In Automata, Languages and Programming. Springer Berlin Heidelberg, 2008. http://dx.doi.org/10.1007/978-3-540-70575-8_30.

4

Ko, Sang-Ki, Yo-Sub Han, and Kai Salomaa. "Top-Down Tree Edit-Distance of Regular Tree Languages." In Language and Automata Theory and Applications. Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-04921-2_38.

5

Geffert, Viliam, Zuzana Bednárová, and Alexander Szabari. "Input-Driven Pushdown Automata for Edit Distance Neighborhood." In Developments in Language Theory. Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-24886-4_8.

6

Ould-Slimane, Hakima, Mohamed Mejri, and Kamel Adi. "Using Edit Automata for Rewriting-Based Security Enforcement." In Data and Applications Security XXIII. Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-03007-9_12.

7

Okhotin, Alexander, and Kai Salomaa. "Edit Distance Neighbourhoods of Input-Driven Pushdown Automata." In Computer Science – Theory and Applications. Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-58747-9_23.

8

Calvo-Zaragoza, Jorge, Colin de la Higuera, and Jose Oncina. "Computing the Expected Edit Distance from a String to a PFA." In Implementation and Application of Automata. Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-40946-7_4.

9

Néraud, Jean. "Complete Variable-Length Codes: An Excursion into Word Edit Operations." In Language and Automata Theory and Applications. Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-40608-0_31.

10

Paaßen, Benjamin. "Adversarial Edit Attacks for Tree Data." In Intelligent Data Engineering and Automated Learning – IDEAL 2019. Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-33607-3_39.


Conference papers on the topic "Edit Automata"

1

Chatterjee, Krishnendu, Rasmus Ibsen-Jensen, and Rupak Majumdar. "Edit distance for timed automata." In HSCC'14: 17th International Conference on Hybrid Systems: Computation and Control. ACM, 2014. http://dx.doi.org/10.1145/2562059.2562141.

2

Yang, Zhenrong, Aiman Hanna, and Mourad Debbabi. "Team Edit Automata for Testing Security Property." In Third International Symposium on Information Assurance and Security. IEEE, 2007. http://dx.doi.org/10.1109/ias.2007.45.

3

Yang, Zhenrong, Aiman Hanna, and Mourad Debbabi. "Team Edit Automata for Testing Security Property." In Third International Symposium on Information Assurance and Security. IEEE, 2007. http://dx.doi.org/10.1109/isias.2007.4299780.

4

Rajamanickam, N., and R. Nadarajan. "Implementing real-time transactional security property using timed edit automata." In the 6th International Conference. ACM Press, 2013. http://dx.doi.org/10.1145/2523514.2523578.

5

Dang, Quyet Thang, and Trung Huy Phan. "Determining Restricted Damerau-Levenshtein Edit-Distance of Two Languages by Extended Automata." In 2010 IEEE RIVF International Conference on Computing & Communication Technologies, Research, Innovation, and Vision for the Future (RIVF). IEEE, 2010. http://dx.doi.org/10.1109/rivf.2010.5632914.

6

Imperial, Joseph Marvin R., Czeritonnie Gail V. Ya-On, and Jennifer C. Ureta. "An experimental Tagalog Finite State Automata spellchecker with Levenshtein edit-distance feature." In 2019 International Conference on Asian Language Processing (IALP). IEEE, 2019. http://dx.doi.org/10.1109/ialp48816.2019.9037687.

7

Bailer, W., H. Stiegler, and G. Thallinger. "Automatic metadata editing using edit decisions." In 3rd European Conference on Visual Media Production (CVMP 2006). Part of the 2nd Multimedia Conference 2006. IEE, 2006. http://dx.doi.org/10.1049/cp:20061965.

8

Kehrer, Timo, Udo Kelter, and Gabriele Taentzer. "Consistency-preserving edit scripts in model versioning." In 2013 IEEE/ACM 28th International Conference on Automated Software Engineering (ASE). IEEE, 2013. http://dx.doi.org/10.1109/ase.2013.6693079.

9

Reis, D. C., P. B. Golgher, A. S. Silva, and A. F. Laender. "Automatic web news extraction using tree edit distance." In the 13th conference. ACM Press, 2004. http://dx.doi.org/10.1145/988672.988740.

10

Higo, Yoshiki, Akio Ohtani, and Shinji Kusumoto. "Generating simpler AST edit scripts by considering copy-and-paste." In 2017 32nd IEEE/ACM International Conference on Automated Software Engineering (ASE). IEEE, 2017. http://dx.doi.org/10.1109/ase.2017.8115664.
