To view other types of publications on this topic, follow the link: Probabilistic finite state automata.

Dissertations on the topic "Probabilistic finite state automata"


Consult the top 37 dissertations for your research on the topic "Probabilistic finite state automata".

Next to every work in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the publication as a .pdf file and read its abstract online, whenever these are available in the metadata.

Browse dissertations on a wide variety of disciplines and organize your bibliography correctly.

1

FRANCH, Daniel Kudlowiez. "Dynamical system modeling with probabilistic finite state automata." Universidade Federal de Pernambuco, 2017. https://repositorio.ufpe.br/handle/123456789/25448.

Full text of the source
Abstract:
FACEPE
Discrete dynamical systems are widely used in a variety of scientific and engineering applications, such as electrical circuits, machine learning, meteorology and neurobiology. Modeling these systems involves performing statistical analysis of the system output to estimate the parameters of a model so it can behave similarly to the original system. These models can be used for simulation, performance analysis, and fault detection, among other applications. The current work presents two new algorithms to model discrete dynamical systems from two categories (synchronizable and non-synchronizable) using Probabilistic Finite State Automata (PFSA), by analyzing discrete symbolic sequences generated by the original system and applying statistical methods and inference, machine learning algorithms and graph minimization techniques to obtain compact, precise and efficient PFSA models. Their performance and time complexity are compared with those of other algorithms in the literature that pursue the same goal, by applying all of the algorithms to a series of common examples.
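To make the PFSA notion in this abstract concrete, here is a minimal illustrative sketch (not one of the dissertation's algorithms): a probabilistic finite state automaton stored as per-state lists of (symbol, next state, probability) triples and used to generate a symbol sequence. The states, alphabet, and probabilities are invented for the example.

```python
import random

# A toy PFSA: state -> list of (emitted symbol, next state, probability).
# States, alphabet, and probabilities are invented for illustration; they are
# not taken from the dissertation.
pfsa = {
    "A": [("0", "A", 0.7), ("1", "B", 0.3)],
    "B": [("0", "A", 0.4), ("1", "B", 0.6)],
}

def generate(pfsa, start, length, rng=random):
    """Generate a symbol sequence by walking the PFSA from `start`."""
    state, out = start, []
    for _ in range(length):
        choices = [((sym, nxt), p) for sym, nxt, p in pfsa[state]]
        pairs, probs = zip(*choices)
        sym, state = rng.choices(pairs, weights=probs, k=1)[0]
        out.append(sym)
    return "".join(out)

print(generate(pfsa, "A", 20))  # e.g. "00010010000100110000"
```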
APA, Harvard, Vancouver, ISO, and other styles
2

Merryman, William Patrick. "Animating the conversion of nondeterministic finite state automata to deterministic finite state automata." Thesis, Montana State University, 2007. http://etd.lib.montana.edu/etd/2007/merryman/MerrymanW0507.pdf.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
3

Martin, Oliver B. 1979. "Accurate belief state update for probabilistic constraint automata." Thesis, Massachusetts Institute of Technology, 2005. http://hdl.handle.net/1721.1/32446.

Full text of the source
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Aeronautics and Astronautics, 2005.
Includes bibliographical references (p. 91-93).
As autonomous spacecraft and other robotic systems grow increasingly complex, there is a pressing need for capabilities that more accurately monitor and diagnose system state while maintaining reactivity. Mode estimation addresses this problem by reasoning over declarative models of the physical plant, represented as a factored variant of Hidden Markov Models (HMMs), called Probabilistic Concurrent Constraint Automata (PCCA). Previous mode estimation approaches track a set of most likely PCCA state trajectories, enumerating them in order of trajectory probability. Although Best-First Trajectory Enumeration (BFTE) is efficient, ignoring the additional trajectories that lead to the same target state can significantly underestimate the true state probability and result in misdiagnosis. This thesis introduces two innovative belief state approximation techniques, called Best-First Belief State Enumeration (BFBSE) and Best-First Belief State Update (BFBSU), that address this limitation by computing estimate probabilities directly from the HMM belief state update equations. Theoretical and empirical results show that BFBSE and BFBSU significantly increase estimator accuracy, use less memory, and incur no increase in computation time when enumerating a moderate number of estimates for the approximate belief state of subsystem-sized models.
by Oliver Borelli Martin.
S.M.
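BFBSE and BFBSU compute estimate probabilities directly from the HMM belief state update equations; the generic form of that update is the standard discrete filtering step sketched below. The two-state model and all numbers are invented and are not Martin's PCCA models.

```python
# Discrete belief update: b'(x') is proportional to P(o | x') * sum_x P(x' | x) * b(x).
# The two-state "nominal"/"faulty" model below is invented for illustration.
states = ["nominal", "faulty"]
T = {("nominal", "nominal"): 0.95, ("nominal", "faulty"): 0.05,
     ("faulty", "nominal"): 0.0,   ("faulty", "faulty"): 1.0}   # P(x' | x)
O = {"nominal": 0.9, "faulty": 0.2}                             # P(o="ok" | x')

def belief_update(belief, obs_likelihood, transition, states):
    unnormalized = {}
    for x2 in states:
        prior = sum(transition[(x1, x2)] * belief[x1] for x1 in states)
        unnormalized[x2] = obs_likelihood[x2] * prior
    z = sum(unnormalized.values())
    return {x: v / z for x, v in unnormalized.items()}

print(belief_update({"nominal": 0.99, "faulty": 0.01}, O, T, states))
```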
APA, Harvard, Vancouver, ISO, and other styles
4

Timmons, Eric (Eric M. ). "Fast, approximate state estimation of concurrent probabilistic hybrid automata." Thesis, Massachusetts Institute of Technology, 2013. http://hdl.handle.net/1721.1/82494.

Full text of the source
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Aeronautics and Astronautics, 2013.
This electronic version was submitted and approved by the author's academic department as part of an electronic thesis pilot project. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from department-submitted PDF version of thesis
Includes bibliographical references (p. 73).
It is an undeniable fact that autonomous systems are simultaneously becoming more commonplace, more complex, and deployed in more inhospitable environments. Examples include smart homes, smart cars, Mars rovers, unmanned aerial vehicles, and autonomous underwater vehicles. A common theme that all of these autonomous systems share is that in order to appropriately control them and prevent mission failure, they must be able to quickly estimate their internal state and the state of the world. A natural representation of many real world systems is to describe them in terms of a mixture of continuous and discrete variables. Unfortunately, hybrid estimation is typically intractable due to the large space of possible assignments to the discrete variables. In this thesis, we investigate how to incorporate conflict directed techniques from the consistency-based, model-based diagnosis community into a hybrid framework that is no longer purely consistency based. We introduce a novel search algorithm, A* with Bounding Conflicts, that uses conflicts to not only record infeasibilities, but also learn where in the search space the heuristic function provided to the A* search is weak (possibly due to heavy to moderate sensor or process noise). Additionally, we describe a hybrid state estimation algorithm that uses this new search to perform estimation on hybrid discrete/continuous systems.
by Eric Timmons.
S.M.
APA, Harvard, Vancouver, ISO, and other styles
5

Khemuka, Atul Ravi. "Workflow Modeling Using Finite Automata." [Tampa, Fla.] : University of South Florida, 2003. http://purl.fcla.edu/fcla/etd/SFE0000172.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
6

Bird, Philip. "Unifying programming paradigms : logic programming and finite state automata." Thesis, University of Sheffield, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.419609.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
7

Wagner, Daniel. "Finite-state abstractions for probabilistic computation tree logic." Thesis, Imperial College London, 2011. http://hdl.handle.net/10044/1/6348.

Full text of the source
Abstract:
Probabilistic Computation Tree Logic (PCTL) is the established temporal logic for probabilistic verification of discrete-time Markov chains. Probabilistic model checking is a technique that verifies or refutes whether a property specified in this logic holds in a Markov chain. But Markov chains are often infinite or too large for this technique to apply. A standard solution to this problem is to convert the Markov chain to an abstract model and to model check that abstract model. The problem this thesis therefore studies is whether or when such finite abstractions of Markov chains for model checking PCTL exist. This thesis makes the following contributions. We identify a sizeable fragment of PCTL for which 3-valued Markov chains can serve as finite abstractions; this fragment is maximal for those abstractions and subsumes many practically relevant specifications including, e.g., reachability. We also develop game-theoretic foundations for the semantics of PCTL over Markov chains by capturing the standard PCTL semantics via two-player games. These games, finally, inspire a notion of p-automata, which accept entire Markov chains. We show that p-automata subsume PCTL and Markov chains; that their languages of Markov chains have pleasant closure properties; and that the complexity of deciding acceptance matches that of probabilistic model checking for p-automata representing PCTL formulae. In addition, we offer a simulation between p-automata that under-approximates language containment. These results then allow us to show that p-automata comprise a solution to the problem studied in this thesis.
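As a concrete illustration of the kind of quantity PCTL model checking computes, the sketch below estimates the probability of eventually reaching a goal state in a toy discrete-time Markov chain by fixed-point iteration; the chain and all numbers are invented, not taken from the thesis.

```python
# A toy discrete-time Markov chain and the reachability probabilities
# P(eventually reach "goal"), the quantity a PCTL formula such as
# P>=0.9 [ F goal ] refers to. The chain is invented for illustration.
P = {
    "s0": {"s1": 0.5, "s2": 0.5},
    "s1": {"goal": 0.8, "s0": 0.2},
    "s2": {"s2": 1.0},            # a sink that never reaches the goal
    "goal": {"goal": 1.0},
}

def reach_prob(P, goal, iters=1000):
    prob = {s: (1.0 if s == goal else 0.0) for s in P}
    for _ in range(iters):
        prob = {s: (1.0 if s == goal else
                    sum(p * prob[t] for t, p in P[s].items()))
                for s in P}
    return prob

print(reach_prob(P, "goal"))  # roughly: s0 -> 0.444, s1 -> 0.889, s2 -> 0.0
```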
APA, Harvard, Vancouver, ISO, and other styles
8

Egri-Nagy, Attila. "Algebraic hierarchical decomposition of finite state automata : a computational approach." Thesis, University of Hertfordshire, 2005. http://hdl.handle.net/2299/14267.

Full text of the source
Abstract:
The theory of algebraic hierarchical decomposition of finite state automata is an important and well developed branch of theoretical computer science (Krohn-Rhodes Theory). Beyond this it gives a general model for some important aspects of our cognitive capabilities and also provides possible means for constructing artificial cognitive systems: a Krohn-Rhodes decomposition may serve as a formal model of understanding since we comprehend the world around us in terms of hierarchical representations. In order to investigate formal models of understanding using this approach, we need efficient tools, but despite the significance of the theory there has been no computational implementation until this work. Here the main aim was to open up the vast space of these decompositions by developing a computational toolkit and to make the initial steps of the exploration. Two different decomposition methods were implemented: the VuT and the holonomy decomposition. Since the holonomy method, unlike the VuT method, gives decompositions of reasonable lengths, it was chosen for a more detailed study. In studying the holonomy decomposition, our main focus is to develop techniques which enable us to calculate the decompositions efficiently, since eventually we would like to apply the decompositions to real-world problems. As the most crucial part is finding the group components, we present several different ways of solving this problem. Then we investigate actual decompositions generated by the holonomy method: automata with some spatial structure illustrating the core structure of the holonomy decomposition, cases showing interesting properties of the decomposition (length of the decomposition, number of states of a component), and the decomposition of finite residue class rings of integers modulo n. Finally we analyse the applicability of the holonomy decompositions as formal theories of understanding, and delineate the directions for further research.
APA, Harvard, Vancouver, ISO, and other styles
9

Cazalis, Daniel S. "Algebraic Theory of Minimal Nondeterministic Finite Automata with Applications." FIU Digital Commons, 2007. http://digitalcommons.fiu.edu/etd/8.

Full text of the source
Abstract:
Since the 1950s, the theory of deterministic and nondeterministic finite automata (DFAs and NFAs, respectively) has been a cornerstone of theoretical computer science. In this dissertation, our main object of study is minimal NFAs. In contrast with minimal DFAs, minimal NFAs are computationally challenging: first, there can be more than one minimal NFA recognizing a given language; second, the problem of converting an NFA to a minimal equivalent NFA is NP-hard, even for NFAs over a unary alphabet. Our study is based on the development of two main theories, inductive bases and partials, which in combination form the foundation for an incremental algorithm, ibas, to find minimal NFAs. An inductive basis is a collection of languages with the property that it can generate (through union) each of the left quotients of its elements. We prove a fundamental characterization theorem which says that a language can be recognized by an n-state NFA if and only if it can be generated by an n-element inductive basis. A partial is an incompletely-specified language. We say that an NFA recognizes a partial if its language extends the partial, meaning that the NFA's behavior is unconstrained on unspecified strings; it follows that a minimal NFA for a partial is also minimal for its language. We therefore direct our attention to minimal NFAs recognizing a given partial. Combining inductive bases and partials, we generalize our characterization theorem, showing that a partial can be recognized by an n-state NFA if and only if it can be generated by an n-element partial inductive basis. We apply our theory to develop and implement ibas, an incremental algorithm that finds minimal partial inductive bases generating a given partial. In the case of unary languages, ibas can often find minimal NFAs of up to 10 states in about an hour of computing time; with brute-force search this would require many trillions of years.
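The left quotients used in this abstract have a direct computational reading for finite languages: the left quotient of L by a string u is the set of words w such that uw is in L. Below is a tiny illustrative sketch with an invented finite language; it only demonstrates the operation, not the ibas algorithm.

```python
# Left quotient of a language L by a string u: { w : u + w in L }.
# The finite language below is invented purely to show the operation.
L = {"ab", "abb", "ba", "b"}

def left_quotient(L, u):
    return {w[len(u):] for w in L if w.startswith(u)}

print(left_quotient(L, "a"))  # {'b', 'bb'}
print(left_quotient(L, "b"))  # {'a', ''}
```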
APA, Harvard, Vancouver, ISO, and other styles
10

Makarov, Alexander. "Application of finite state methods to shape coding and processing in object-based video." Thesis, Staffordshire University, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.368316.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
11

Atchuta, Kaushik. "Slicing of extended finite state machines." Kansas State University, 2014. http://hdl.handle.net/2097/17640.

Full text of the source
Abstract:
Master of Science
Department of Computing and Information Sciences
Torben Amtoft
An EFSM (Extended Finite State Machine) is a tuple (S, T, E, V) where S is a finite set of states, T is a finite set of transitions, E is a finite set of events, and V is a finite set of variables. Every transition t in T has a source state and a target state, both in S. There is a need to develop a GUI which aids in building such machines and simulating them so that a slicing algorithm can be implemented on such graphs. This was the main idea of Dr. Torben Amtoft, who has written the slicing algorithm and wanted it to be implemented in code. The project aims at implementing a GUI that makes it possible to build and simulate such graphs with minimum user effort. Poor design often fails to attract users, so the initial effort is to build a simple and effective GUI which serves the purpose of taking input from the user, building graphs and simulating them. The scope of this project is to build and implement an interface so that users can do the following effectively: input a specification of an EFSM; store and later retrieve EFSMs; display an EFSM in graphical form; simulate the EFSM; modify an EFSM; and implement the slicing algorithm. All of the above-mentioned features must be integrated into the GUI, and it should only fail if the input specification is wrong.
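A minimal sketch of the EFSM tuple (S, T, E, V) described above, with guards and variable updates written as plain callables; the example machine and all names are invented and are not part of the thesis or its GUI.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Transition:
    source: str
    target: str
    event: str
    guard: Callable[[Dict[str, int]], bool] = lambda v: True
    update: Callable[[Dict[str, int]], None] = lambda v: None

@dataclass
class EFSM:
    state: str                      # current state in S
    variables: Dict[str, int]       # valuation of V
    transitions: List[Transition] = field(default_factory=list)

    def step(self, event: str) -> bool:
        """Fire the first enabled transition for `event`, if any."""
        for t in self.transitions:
            if t.source == self.state and t.event == event and t.guard(self.variables):
                t.update(self.variables)
                self.state = t.target
                return True
        return False

# A two-state counter machine: "inc" increments x; "done" fires only when x >= 3.
m = EFSM(state="idle", variables={"x": 0}, transitions=[
    Transition("idle", "idle", "inc", update=lambda v: v.update(x=v["x"] + 1)),
    Transition("idle", "final", "done", guard=lambda v: v["x"] >= 3),
])
for e in ["inc", "inc", "done", "inc", "done"]:
    print(e, m.step(e), m.state, m.variables)
```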
APA, Harvard, Vancouver, ISO, and other styles
12

Hulden, Mans. "Finite-state Machine Construction Methods and Algorithms for Phonology and Morphology." Diss., The University of Arizona, 2009. http://hdl.handle.net/10150/196112.

Full text of the source
Abstract:
This dissertation is concerned with finite state machine-based technology for modeling natural language. Finite-state machines have proven to be efficient computational devices in modeling natural language phenomena in morphology and phonology. Because of their mathematical closure properties, finite-state machines can be manipulated and combined in many flexible ways that closely resemble formalisms used in different areas of linguistics to describe natural language. The use of finite-state transducers in constructing natural language parsers and generators has proven to be a versatile approach to describing phonological alternation, morphological constraints and morphotactics, and syntactic phenomena on the phrase level. The main contributions of this dissertation are the development of a new model of multitape automata, the development of a new logic formalism that can substitute for regular expressions in constructing complex automata, and adaptations of these techniques to solving classical construction problems relating to finite-state transducers, such as modeling reduplication and complex phonological replacement rules. The multitape model presented here goes hand-in-hand with the logic formalism, the latter being a necessary step to constructing the former. These multitape automata can then be used to create entire morphological and phonological grammars, and can also serve as a neutral intermediate tool to ease the construction of automata for other purposes. The construction of large-scale finite-state models for natural language grammars is a very delicate process. Making any solution practicable requires great care in the efficient implementation of low-level tasks such as converting regular expressions, logical statements, sets of constraints, and replacement rules to automata or finite transducers. To support the overall endeavor of showing the practicability of the logical and multitape extensions proposed in this thesis, a detailed treatment of efficient implementation of finite-state construction algorithms for natural language purposes is also presented.
APA, Harvard, Vancouver, ISO, and other styles
13

Wilson, Deborah Ann Stoffer. "A Study of the Behavior of Chaos Automata." Kent State University / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=kent1478955376070686.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
14

Davis, Paul C. "Stone Soup Translation: The Linked Automata Model." Connect to this title online, 2002. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1023806593.

Full text of the source
Abstract:
Thesis (Ph. D.)--Ohio State University, 2002.
Title from first page of PDF file. Document formatted into pages; contains xvi, 306 p.; includes graphics. Includes abstract and vita. Advisor: Chris Brew, Dept. of Linguistics. Includes indexes. Includes bibliographical references (p. 284-293).
APA, Harvard, Vancouver, ISO, and other styles
15

Petrovic, Pavel. "Incremental Evolutionary Methods for Automatic Programming of Robot Controllers." Doctoral thesis, Norwegian University of Science and Technology, Faculty of Information Technology, Mathematics and Electrical Engineering, 2007. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-1748.

Full text of the source
Abstract:

The aim of the main work in the thesis is to investigate Incremental Evolution methods for designing a suitable behavior arbitration mechanism for behavior-based (BB) robot controllers for autonomous mobile robots performing tasks of higher complexity. The challenge of designing effective controllers for autonomous mobile robots has been intensely studied for a few decades. Control Theory studies the fundamental control principles of robotic systems. However, technological progress allows, and the needs of advanced manufacturing, service, entertainment, educational, and mission tasks require, features beyond the scope of standard functionality and basic control. Artificial Intelligence has traditionally looked upon the problem of designing robotic systems from the high-level and top-down perspective: given a working robotic device, how can it be equipped with an intelligent controller? Later approaches advocated for better robustness, modifiability, and control due to a bottom-up, layered, incremental controller and robot building (Behavior-Based Robotics, BBR). Still, the complexity of programming such systems often requires the manual work of engineers. Automatic methods might lead to systems that perform tasks on demand without the need for an expert robot programmer. In addition, a robot programmer cannot predict all the possible situations in robotic applications. Automatic programming methods may provide flexibility and adaptability of the robotic products with respect to the task performed. One possible approach to the automatic design of robot controllers is Evolutionary Robotics (ER). Most of the experiments performed in the field of ER have achieved successful learning of the target task, while the tasks were of limited complexity. This work is a marriage of the incremental idea from BBR and automatic programming of controllers using ER. Incremental Evolution allows automatic programming of robots for more complex tasks by providing gentle and easy-to-understand support through expert knowledge: the division of the target task into sub-tasks. We analyze different types of incrementality, devise a new controller architecture, implement an original simulator compatible with hardware, and test it with various incremental evolution tasks for real robots. We build up our experimental field through studies of experimental and educational robotics systems, evolutionary design, distributed computation that provides the required processing power, and robotics applications. University research is tightly coupled with education. Combining robotics research with educational applications is both a useful consequence and a way of satisfying the necessary condition of an underlying application domain on which the research work can both reflect and base itself.

APA, Harvard, Vancouver, ISO, and other styles
16

Lewandowski, Matthew. "A Novel Method For Watermarking Sequential Circuits." Scholar Commons, 2013. http://scholarcommons.usf.edu/etd/4528.

Full text of the source
Abstract:
We present an Intellectual Property (IP) protection technique for sequential circuits, driven by embedding a decomposed signature into a Finite State Machine (FSM) through the manipulation of the arbitrary state encoding of the unprotected FSM. This technique is composed of three steps: (a) transforming the signature into a watermark graph, (b) embedding watermark graphs into the original FSM's State Transition Graph (STG) and (c) generating models for verification and extraction. In the watermark construction process, watermark graphs are generated from signatures. The proposed methods for watermark construction are: (1) BSD, (2) FSD, and (3) HSD. The HSD method is shown to be advantageous for all signatures while providing sparse watermark FSMs with complexity O(n^2). The embedding process is related to the sub-graph matching problem. Due to the computational complexity of the matching problem, attempts to reverse engineer or remove the constructed watermark from the protected FSM, with only finite resources and time, are shown to be infeasible. The proposed embedding solutions are: (1) Brute Force and (2) Greedy Heuristic. The greedy heuristic has a computational complexity of O(n log n), where n is the number of states in the watermark graph. The greedy heuristic showed improvements for three of the six encoding schemes used in experimental results. Model generation and verification utilizes design automation techniques for generating multiple representations of the original, watermark, and watermarked FSMs. Analysis of the security provided by this method shows that a variety of attacks on the watermark and system, including (1) data-mining hidden functionality, (2) preimage, (3) secondary preimage, and (4) collision, can be shown to be computationally infeasible. Experimental results for the ten largest IWLS 93 benchmarks show that the proposed watermarking technique is a secure, yet flexible, technique for protecting sequential-circuit-based IP cores.
APA, Harvard, Vancouver, ISO, and other styles
17

Puigcerver, I. Pérez Joan. "A Probabilistic Formulation of Keyword Spotting." Doctoral thesis, Universitat Politècnica de València, 2019. http://hdl.handle.net/10251/116834.

Full text of the source
Abstract:
Keyword Spotting, applied to handwritten text documents, aims to retrieve the documents, or parts of them, that are relevant for a query, given by the user, within a large collection of documents. The topic has gained a large interest in the last 20 years among Pattern Recognition researchers, as well as digital libraries and archives. This thesis first defines the goal of Keyword Spotting from a Decision Theory perspective. Then, the problem is tackled following a probabilistic formulation. More precisely, Keyword Spotting is presented as a particular instance of Information Retrieval, where the content of the documents is unknown, but can be modeled by a probability distribution. In addition, the thesis also proves that, under the correct probability distributions, the framework provides the optimal solution, under many of the evaluation measures traditionally used in the field. Later, different statistical models are used to represent the probability distribution over the content of the documents. These models, Hidden Markov Models or Recurrent Neural Networks, are estimated from training data, and the corresponding distributions over the transcripts of the images can be efficiently represented using Weighted Finite State Transducers. In order to make the framework practical for large collections of documents, this thesis presents several algorithms to build probabilistic word indexes, using both lexicon-based and lexicon-free models. These indexes are very similar to the ones used by traditional search engines. Furthermore, we study the relationship between the presented formulation and other seminal approaches in the field of Keyword Spotting, highlighting some limitations of the latter. Finally, all the contributions are evaluated experimentally, not only on standard academic benchmarks, but also on collections including tens of thousands of pages of historical manuscripts. The results show that the proposed framework and algorithms allow building very accurate and very fast Keyword Spotting systems, with a solid underlying theory.
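One common way to write the probabilistic score this abstract describes (our paraphrase of a standard formulation, not an equation quoted from the thesis) is as a posterior relevance probability with the unknown transcript marginalized out:

```latex
% Paraphrase of a standard probabilistic keyword-spotting score: the relevance
% of query q to a text-line image x, marginalizing the unknown transcript t
% under the recognition model P(t | x).
P(R = 1 \mid q, x) \;=\; \sum_{t} P(R = 1 \mid q, t)\, P(t \mid x)
```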
Puigcerver I Pérez, J. (2018). A Probabilistic Formulation of Keyword Spotting [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/116834
TESIS
APA, Harvard, Vancouver, ISO, and other styles
18

Brits, Jeanetta Hendrina. "Outomatiese Setswana lemma-identifisering / Jeanetta Hendrina Brits." Thesis, North-West University, 2006. http://hdl.handle.net/10394/1160.

Full text of the source
Abstract:
Within the context of natural language processing, a lemmatiser is one of the most important core technology modules that has to be developed for a particular language. A lemmatiser reduces words in a corpus to the corresponding lemmas of the words in the lexicon. A lemma is defined as the meaningful base form from which other more complex forms (i.e. variants) are derived. Before a lemmatiser can be developed for a specific language, the concept "lemma" as it applies to that specific language should first be defined clearly. This study concludes that, in Setswana, only stems (and not roots) can act independently as words; therefore, only stems should be accepted as lemmas in the context of automatic lemmatisation for Setswana. Five of the seven parts of speech in Setswana could be viewed as closed classes, which means that these classes are not extended by means of regular morphological processes. The two other parts of speech (nouns and verbs) require the implementation of alternation rules to determine the lemma. Such alternation rules were formalised in this study, for the purpose of development of a Setswana lemmatiser. The existing Setswana grammars were used as basis for these rules. Therewith the precision of the formalisation of these existing grammars to lemmatise Setswana words could be determined. The software developed by Van Noord (2002), FSA 6, is one of the best-known applications available for the development of finite state automata and transducers. Regular expressions based on the formalised morphological rules were used in FSA 6 to create finite state transducers. The code subsequently generated by FSA 6 was implemented in the lemmatiser. The metric that applies to the evaluation of the lemmatiser is precision. On a test corpus of 1 000 words, the lemmatiser obtained 70,92%. In another evaluation on 500 complex nouns and 500 complex verbs separately, the lemmatiser obtained 70,96% and 70,52% respectively. Expressed in numbers the precision on 500 complex and simplex nouns was 78,45% and on complex and simplex verbs 79,59%. The quantitative achievement only gives an indication of the relative precision of the grammars. Nevertheless, it did offer analysed data with which the grammars were evaluated qualitatively. The study concludes with an overview of how these results might be improved in the future.
Thesis (M.A. (African Languages))--North-West University, Potchefstroom Campus, 2006.
APA, Harvard, Vancouver, ISO, and other styles
19

Hošták, Viliam Samuel. "Učení se automatů pro rychlou detekci anomálií v síťovém provozu." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2021. http://www.nusl.cz/ntk/nusl-449296.

Full text of the source
Abstract:
The focus of this thesis is the fast network anomaly detection based on automata learning. It describes and compares several chosen automata learning algorithms including their adaptation for the learning of network characteristics. In this work, various network anomaly detection methods based on learned automata are proposed which can detect sequential as well as statistical anomalies in target communication. For this purpose, they utilize automata's mechanisms, their transformations, and statistical analysis. Proposed detection methods were implemented and evaluated using network traffic of the protocol IEC 60870-5-104 which is commonly used in industrial control systems.
APA, Harvard, Vancouver, ISO, and other styles
20

Paulson, Jörgen, and Peter Huynh. "Menings- och dokumentklassficering för identifiering av meningar." Thesis, Högskolan i Skövde, Institutionen för informationsteknologi, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-16373.

Full text of the source
Abstract:
This thesis examines how well sentence classification and document classification techniques work for selecting sentences that contain the variables used in the experiments described in medical documents. For sentence classification, state machines and keywords are used; for document classification, linear SVM and Random forest are used. The text features selected are LIX (a readability index) and word count. The text features are taken from an existing dataset created by Abrahamsson (T.B.D) from articles collected for this study. This dataset is then used for document classification. What is examined for the document classification techniques is their ability to distinguish between scientific articles with experiments, scientific articles without experiments, scientific articles with meta-analyses, and documents that are not scientific articles. These documents are processed with sentence classification to examine how well it finds sentences that contain definitions of variables. The results of the experiment indicated that the sentence classification techniques were not suitable for this purpose because of low precision. For document classification, Random forest was best suited but had difficulty distinguishing between the different types of scientific articles.
APA, Harvard, Vancouver, ISO, and other styles
21

Dolzhenko, Egor. "Transducer dynamics." Scholar Commons, 2007. https://scholarcommons.usf.edu/etd/217.

Full text of the source
Abstract:
Transducers are finite state automata with an output. In this thesis, we attempt to classify sequences that can be constructed by iteratively applying a transducer to a given word. We begin exploring this problem by considering sequences of words that can be produced by iterative application of a transducer to a given input word, i.e., identifying sequences of words of the form w, t(w), t²(w), . . . We call such sequences transducer recognizable. Also we introduce the notion of "recognition of a sequence in context", which captures the possibility of concatenating prefix and suffix words to each word in the sequence, so a given sequence of words becomes transducer recognizable. It turns out that all finite and periodic sequences of words of equal length are transducer recognizable. We also show how to construct a deterministic transducer with the least number of states recognizing a given sequence. To each transducer t we associate a two-dimensional language L²(t) consisting of blocks of symbols in the following way. The first row, w, of each block is in the input language of t, the second row is a word that t outputs on input w. Inductively, every subsequent row is a word outputted by the transducer when its preceding row is read as an input. We show a relationship of the entropy values of these two-dimensional languages to the entropy values of the one-dimensional languages that appear as input languages for finite state transducers.
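The sequences w, t(w), t²(w), ... discussed above can be generated directly once a transducer is fixed. Below is a minimal sketch with an invented deterministic (Mealy-style) transducer that preserves word length.

```python
# A deterministic transducer: delta maps (state, input symbol) to
# (next state, output symbol). Iterating it on a word yields w, t(w), t^2(w), ...
# The machine below is invented for illustration.
delta = {
    ("q0", "a"): ("q1", "b"),
    ("q0", "b"): ("q0", "a"),
    ("q1", "a"): ("q0", "a"),
    ("q1", "b"): ("q1", "b"),
}

def apply(delta, start, word):
    state, out = start, []
    for ch in word:
        state, sym = delta[(state, ch)]
        out.append(sym)
    return "".join(out)

w = "abba"
for i in range(4):
    print(i, w)                 # 0 abba, 1 bbba, 2 aaab, 3 babb
    w = apply(delta, "q0", w)
```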
APA, Harvard, Vancouver, ISO, and other styles
22

Veselý, Lukáš. "Korektor diakritiky." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2007. http://www.nusl.cz/ntk/nusl-236816.

Full text of the source
Abstract:
The goal of this diploma work is the design and implementation of an application that allows adding and removing diacritics in Czech written text. The retrieval "trie" structure is described, along with its relation to finite state automata. Further, an algorithm for the minimization of finite state automata is described and various methods for adding diacritics are discussed. In the practical part, an implementation in the Java programming language using an object-oriented approach is given. The achieved results are evaluated and analysed in the conclusion.
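A minimal sketch of the retrieval ("trie") structure mentioned in this abstract, written in Python rather than the thesis's Java; the inserted words are just illustrative entries.

```python
# Minimal trie: each node is a dict of child nodes, plus an end-of-word marker.
END = "$"

def insert(trie, word):
    node = trie
    for ch in word:
        node = node.setdefault(ch, {})
    node[END] = True

def contains(trie, word):
    node = trie
    for ch in word:
        if ch not in node:
            return False
        node = node[ch]
    return END in node

trie = {}
for w in ["kocka", "kočka", "kost"]:   # illustrative entries only
    insert(trie, w)
print(contains(trie, "kočka"), contains(trie, "koc"))  # True False
```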
APA, Harvard, Vancouver, ISO, and other styles
23

Solár, Peter. "Syntaxí řízený překlad založený na hlubokých zásobníkových automatech." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2009. http://www.nusl.cz/ntk/nusl-236779.

Full text of the source
Abstract:
This thesis introduces syntax-directed translation based on deep pushdown automata. The necessary theoretical models are introduced in the theoretical part. The most important model introduced in this thesis is the deep pushdown transducer. The transducer is intended for use in syntax analysis, a significant part of translation. The practical part consists of an implementation of a simple-language interpreter based on these models.
APA, Harvard, Vancouver, ISO, and other styles
24

Angus, Simon Douglas Economics Australian School of Business UNSW. "Economic networks: communication, cooperation & complexity." Awarded by:University of New South Wales. Economics, 2007. http://handle.unsw.edu.au/1959.4/27005.

Full text of the source
Abstract:
This thesis is concerned with the analysis of economic network formation. There are three novel sections to this thesis (Chapters 5, 6 and 8). In the first, the non-cooperative communication network formation model of Bala and Goyal (2000) (BG) is re-assessed under conditions of no inertia. It is found that the Strict Nash circle (or wheel) structure is still the equilibrium outcome for n = 3 under no inertia. However, a counter-example for n = 4 shows that with no inertia infinite cycles are possible, and hence the system does not converge. In fact, cycles are found to quickly dominate outcomes for n > 4 and further numerical simulations of conditions approximating no inertia (probability of updating > 0.8 to 1) indicate that cycles account for a dramatic slowing of convergence times. These results, together with the experimental evidence of Falk and Kosfeld (2003) (FK) motivate the second contribution of this thesis. A novel artificial agent model is constructed that allows for a vast strategy space (including the Best Response) and permits agents to learn from each other as was indicated by the FK results. After calibration, this model replicates many of the FK experimental results and finds that an externality exploiting ratio of benefits and costs (rather than the difference) combined with a simple altruism score is a good proxy for the human objective function. Furthermore, the inequity aversion results of FK are found to arise as an emergent property of the system. The third novel section of this thesis turns to the nature of network formation in a trust-based context. A modified Iterated Prisoners' Dilemma (IPD) model is developed which enables agents to play an additional and costly network forming action. Initially, canonical analytical results are obtained despite this modification under uniform (non-local) interactions. However, as agent network decisions are 'turned on' persistent cooperation is observed. Furthermore, in contrast to the vast majority of non-local, or static network models in the literature, it is found that a-periodic, complex dynamics result for the system in the long-run. Subsequent analysis of this regime indicates that the network dynamics have fingerprints of self-organized criticality (SOC). Whilst evidence for SOC is found in many physical systems, such dynamics have been seldom, if ever, reported in the strategic interaction literature.
APA, Harvard, Vancouver, ISO, and other styles
25

Pétréolle, Mathias. "Quelques développements combinatoires autour des groupes de Coxeter et des partitions d'entiers." Thesis, Lyon 1, 2015. http://www.theses.fr/2015LYO10237/document.

Full text of the source
Abstract:
This thesis focuses on enumerative combinatorics, particularly on integer partitions and Coxeter groups. In the first part, like Han and Nekrasov-Okounkov, we study combinatorial expansions of powers of Dedekind's eta function in terms of hook lengths of integer partitions. Our approach, which is bijective, uses the Macdonald identities in affine types, generalizing the study of Han in the case of type A. We then extend these expansions with new parameters, through new properties of the Littlewood decomposition. This enables us to deduce symplectic hook length formulas and a connection with representation theory. In the second part, we study the cyclically fully commutative (CPC) elements in Coxeter groups, introduced by Boothby et al., which form a subfamily of the fully commutative elements. We start by introducing a new construction, the cylindrical closure, which gives a theoretical framework for CPC elements analogous to Viennot's heaps for fully commutative elements. We give a characterization of CPC elements in terms of cylindrical closures in any Coxeter group. This allows us to deduce a characterization of these elements in terms of reduced decompositions in all finite and affine Coxeter groups, and their enumeration in those groups. Using the theory of finite state automata, we show that the generating function of these elements is always rational, in all Coxeter groups.
APA, Harvard, Vancouver, ISO, and other styles
26

Beaucamps, Philippe. "Analyse de Programmes Malveillants par Abstraction de Comportements." Phd thesis, Institut National Polytechnique de Lorraine - INPL, 2011. http://tel.archives-ouvertes.fr/tel-00646395.

Full text of the source
Abstract:
Traditional behavioral analysis generally operates at the level of the implementation of the malicious behavior. Yet it is mainly concerned with identifying a given behavior, independently of its technical realization, so it sits more naturally at a functional level. In this thesis, we define a form of behavioral program analysis that operates not on a program's elementary interactions with the system but on the function the program carries out. This function is extracted from the program's traces, a process we call abstraction. We define, in a simple, intuitive and formal way, the basic functionalities to abstract and the behaviors to detect; we then propose an abstraction mechanism applicable in a static or dynamic analysis setting, with practical algorithms of reasonable complexity; finally, we describe a behavioral analysis technique that integrates this abstraction mechanism. Our method is particularly suited to the analysis of programs written in high-level languages or whose source code is known, for which static analysis is easier: programs targeting virtual machines such as Java or .NET, Web scripts, browser extensions, and off-the-shelf components. The formalism of behavioral analysis by abstraction that we propose rests on the theory of word and term rewriting, regular word and term languages, and model checking. It makes it possible to identify functionalities efficiently in traces and thus to obtain a representation of the traces at a functional level; it defines functionalities and behaviors in a natural way, using temporal logic formulas, which guarantees their simplicity and flexibility and allows model checking techniques to be used to detect these behaviors; it operates on an arbitrary set of execution traces; it takes data flow in the execution traces into account; and it allows, with no loss of efficiency, uncertainty in the identification of functionalities to be taken into account. We validate our results with a set of experiments, carried out on existing malicious code, whose traces are obtained either by dynamic binary instrumentation or by static analysis.
APA, Harvard, Vancouver, ISO, and other styles
27

Abubaker, Sarshad. "Probabilistic, lightweight cryptosystems based on finite automata." Thesis, 2011. http://hdl.handle.net/1828/3410.

Full text of the source
Abstract:
Most of the cryptosystems currently used are based on number theoretic problems. We focus on cryptosystems based on finite automata (FA) which are lightweight in nature and have relatively small key sizes. The security of these systems relies on the difficulties in inverting non-linear finite automata and factoring matrix polynomials. In symmetric or single key encryption, the secret key consists of two finite automata and their inverses. By applying the inverses of the automata to the cipher text, the plain text can be effectively calculated. In case of asymmetric or public key encryption, the public key consists of another automaton, which is the combination of the two finite automata while the private key consists of the inverse of the two individual automata. It is hard to invert the combined automaton without the knowledge of the private key automata. We propose a third variant which is based on a 128-bit key and uses a DES-based key generation algorithm. We implement and test all three cryptosystems - the standard single key and public key cryptosystems as well as our novel DES-based FA cryptosystem. We also extensively test the finite automata cryptosystems on a standard desktop machine as well as the Nokia N900 smartphone. All statistical tests carried out on the ciphertext are satisfactory.
Graduate
APA, Harvard, Vancouver, ISO, and other styles
28

Lee, Gen-Cher, and 李政池. "Autonomous Dynamical Associative Memory with the Applications in Learning Finite State Automata." Thesis, 1999. http://ndltd.ncl.edu.tw/handle/27484301726063797537.

Full text of the source
Abstract:
Master's thesis
National Taiwan University
Graduate Institute of Computer Science and Information Engineering
87
Learning the structure of finite state automata (FSA) from training strings is an interesting and important problem. [Giles, 1992] proposed an analog second-order single-layer recurrent neural network (SLRNN) and demonstrated how it is capable of extracting FSA. However, there is no concrete transition in the state space of such a model, and the abstract FSA is extracted by clustering analysis. We take the point of view of enhancing the dynamic behavior of RNNs, and propose two kinds of associative recurrent neural networks (ARNN). Moreover, we prove that both ARNNs have the capability to simulate any FSA. We also derive the learning algorithm for ADAM. Afterward, applying the method for solving nonlinear equations that is the basis of ADAM, we derive a weight-evolving method that converges to a weight satisfying all input-output specifications. This weight decision method can be used to speed up the convergence of the ARNN, and it can be applied to both feedforward single-layer networks and simple feedback networks.
APA, Harvard, Vancouver, ISO, and other styles
29

Leung, Samuel. "Pathway representation using finite state automata and comparison using the NCI thesaurus." Thesis, 2006. http://hdl.handle.net/1828/2200.

Full text of the source
Abstract:
Can one classify biochemical pathways based on their topology? What is the topology of a biochemical pathway? What are the fundamental principles underlying different biochemical pathways involved in similar functional areas? Will one be able to characterize pathway "motifs" similar to motifs in proteins - i.e. reoccurring patterns in pathways? This thesis describes an attempt to develop a quantitative framework for the general representation and comparison of biochemical pathways. This quantitative framework involves a mathematical model to represent biochemical pathways and a set of similarity criteria to compare these biochemical pathways. We anticipate that such a tool would allow biologists to answer important questions such as the ones mentioned above.
APA, Harvard, Vancouver, ISO, and other styles
30

Busse, Edgar [Verfasser]. "Finite-state genericity : on the diagonalization strength of finite automata / vorgelegt von Edgar Busse (geb. Damir Serikovich Muldagaliev)." 2006. http://d-nb.info/979601673/34.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
31

Ransikarbum, Kasin Wysk Richard A. "A procedural validation for affordanced-based finite state automata in human-involved complex systems." 2009. http://etda.libraries.psu.edu/theses/approved/WorldWideIndex/ETD-3883/index.html.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
32

Ipate, F., Marian Gheorghe, and Raluca Lefticaru. "Fundamental results for learning deterministic extended finite state machines from queries." 2020. http://hdl.handle.net/10454/18046.

Full text of the source
Abstract:
Regular language inference, initiated by Angluin, has many developments, including applications in software engineering and testing. However, the capability of finite automata to model the system data is quite limited and, in many cases, extended finite state machine formalisms, which combine the system control with data structures, are used instead. The application of Angluin-style inference algorithms to extended state machines would involve constructing a minimal deterministic extended finite state machine consistent with a 3-valued deterministic finite automaton. In addition to the usual accepting and rejecting states of a finite automaton, a 3-valued deterministic finite automaton may have "don't care" states; the sequences of inputs that reach such states may be considered as accepted or rejected, as is convenient. The aforementioned construction reduces to finding a minimal deterministic finite automaton consistent with a 3-valued deterministic finite automaton, one that preserves the deterministic nature of the extended model and also handles the data structure associated with it. This paper investigates fundamental properties of extended finite state machines in relation to Angluin's language inference problem and provides an inference algorithm for such models.
The full-text of this article will be released for public view at the end of the publisher embargo on 17 Sep 2021.
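The "don't care" states described above can be illustrated with a small consistency check: a 3-valued DFA labels its states accept, reject, or don't care, and an ordinary DFA is consistent with it if the two agree wherever the label is decisive. Both machines below are invented; this is a sketch of the concept, not the paper's inference algorithm.

```python
from itertools import product

ALPHABET = "ab"

def run(delta, start, word):
    state = start
    for ch in word:
        state = delta[(state, ch)]
    return state

# Invented 3-valued DFA: words ending in "a" reach p1 (don't care), all others p0 (reject).
three_delta = {("p0", "a"): "p1", ("p0", "b"): "p0",
               ("p1", "a"): "p1", ("p1", "b"): "p0"}
label = {"p0": "reject", "p1": "dontcare"}

# Invented ordinary DFA accepting exactly the words that end in "a".
dfa_delta = {("q0", "a"): "q1", ("q0", "b"): "q0",
             ("q1", "a"): "q1", ("q1", "b"): "q0"}
accepting = {"q1"}

def consistent(max_len=6):
    """Check agreement on all words up to max_len wherever the label is decisive."""
    for n in range(max_len + 1):
        for word in map("".join, product(ALPHABET, repeat=n)):
            verdict = label[run(three_delta, "p0", word)]
            accepts = run(dfa_delta, "q0", word) in accepting
            if (verdict == "accept" and not accepts) or (verdict == "reject" and accepts):
                return False
    return True

print(consistent())  # True: the DFA accepts only where the 3-valued DFA allows it
```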
APA, Harvard, Vancouver, ISO, and other styles
33

"Finite-state methods and natural language processing : 6th International Workshop, FSMNLP 2007 Potsdam, Germany, september 14 - 16 ; revised papers." Universität Potsdam, 2008. http://opus.kobv.de/ubp/volltexte/2008/2381/.

Full text of the source
Abstract:
Proceedings with the revised papers of the FSMNLP (Finite-state Methods and Natural Language Processing) 2007 Workshop in Potsdam
APA, Harvard, Vancouver, ISO, and other styles
34

"Modeling, Characterizing and Reconstructing Mesoscale Microstructural Evolution in Particulate Processing and Solid-State Sintering." Doctoral diss., 2018. http://hdl.handle.net/2286/R.I.49029.

Full text of the source
Abstract:
In materials science, microstructure plays a key role in determining properties, which in turn determine the utility of the material. However, effectively measuring microstructure evolution in real time remains a challenge. To date, a wide range of advanced experimental techniques have been developed and applied to characterize material microstructure and structural evolution on different length and time scales. Most of these methods can only resolve 2D structural features within a narrow range of length scales, and only for a single snapshot or a series of snapshots. The currently available 3D microstructure characterization techniques are usually destructive and require slicing and polishing the samples each time a picture is taken. Simulation methods, on the other hand, are cheap, sample-free, and versatile, and they are not constrained by physical limitations such as extreme temperature or pressure, which are prominent issues for experimental methods. Yet the majority of simulation methods are limited to specific circumstances: for example, first-principles computation can only handle several thousand atoms, molecular dynamics can only efficiently simulate a few seconds of evolution of a system with several million particles, and the finite element method can only be used in a continuous medium. Such limitations make these individual methods far from satisfactory for simulating, at experimental-level accuracy, the macroscopic processes that a material sample undergoes. Therefore, it is highly desirable to develop a framework that integrates different simulation schemes across scales to model complicated microstructure evolution and the corresponding properties. Guided by this objective, we have worked towards incorporating a collection of simulation methods, including the finite element method (FEM), cellular automata (CA), kinetic Monte Carlo (kMC), stochastic reconstruction methods, and the discrete element method (DEM), into an integrated computational materials engineering platform (ICMEP), which enables us to effectively model microstructure evolution and use the simulated microstructure for subsequent performance analysis. In this thesis, we introduce some cases of building coupled modeling schemes and present preliminary results for solid-state sintering. For example, we use a coupled DEM and kinetic Monte Carlo method to simulate solid-state sintering, and a coupled FEM and cellular automata method to model microstructure evolution during selective laser sintering of a titanium alloy. Current results indicate that joining models from different length and time scales is fruitful for understanding and describing the microstructure evolution of a macroscopic physical process from various perspectives.
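As a rough, self-contained illustration of the kind of mesoscale model mentioned in the abstract, the sketch below performs one Monte Carlo step of a Potts-style grain-growth model on a 2D grid; the grid size, temperature, and energy model are assumptions made for this sketch and are not the thesis implementation.

import math
import random

NUM_GRAINS = 5     # number of distinct grain orientations (assumed)
GRID_SIZE = 20     # lattice dimension (assumed)
TEMPERATURE = 0.5  # dimensionless Monte Carlo temperature (assumed)

def neighbours(i, j, n):
    # 4-connected neighbourhood with periodic boundaries.
    return [((i + 1) % n, j), ((i - 1) % n, j), (i, (j + 1) % n), (i, (j - 1) % n)]

def local_energy(grid, i, j, spin):
    # Each unlike neighbour contributes one unit of grain-boundary energy.
    return sum(1 for a, b in neighbours(i, j, len(grid)) if grid[a][b] != spin)

def monte_carlo_step(grid):
    n = len(grid)
    for _ in range(n * n):
        i, j = random.randrange(n), random.randrange(n)
        new_spin = random.randrange(NUM_GRAINS)
        delta = local_energy(grid, i, j, new_spin) - local_energy(grid, i, j, grid[i][j])
        # Metropolis rule: always accept energy decreases, accept increases
        # with Boltzmann probability, so grain boundaries gradually shorten.
        if delta <= 0 or random.random() < math.exp(-delta / TEMPERATURE):
            grid[i][j] = new_spin

grid = [[random.randrange(NUM_GRAINS) for _ in range(GRID_SIZE)] for _ in range(GRID_SIZE)]
for _ in range(10):
    monte_carlo_step(grid)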
Dissertation/Thesis
Doctoral Dissertation Materials Science and Engineering 2018
APA, Harvard, Vancouver, ISO, and other styles
35

Arthur, Kweku Kwakye. "Considerations towards the development of a forensic evidence management system." Diss., 2010. http://hdl.handle.net/2263/26567.

Full text of the source
Abstract:
The decentralized nature of the Internet forms its very foundation, yet it is this very nature that has opened networks and individual machines to a host of threats and attacks from malicious agents. Consequently, forensic specialists, tasked with the investigation of crimes committed through the use of computer systems where the evidence is digital in nature, are often unable to reach convincing conclusions in their investigations. Some of the challenges within reliable forensic investigations include the lack of a global view of the investigation landscape and the complexity and obfuscated nature of the digital world. A perpetual challenge within the evidence analysis process is the reliability and integrity associated with digital evidence, particularly from disparate sources. Given the ease with which digital evidence (such as metadata) can be created, altered, or destroyed, the integrity attributed to digital evidence is of paramount importance. This dissertation focuses on the challenges relating to the integrity of digital evidence within reliable forensic investigations. These challenges are addressed through the proposal of a model for the construction of a Forensic Evidence Management System (FEMS) to preserve the integrity of digital evidence within forensic investigations. The Biba Integrity Model is utilized to maintain the integrity of digital evidence within the FEMS, and Casey's Certainty Scale is employed as the integrity classification scheme for assigning integrity labels to digital evidence within the system. The FEMS model consists of a client layer, a logic layer and a data layer, with eight system components distributed amongst these layers. In addition to describing the FEMS system components, a finite state automaton is utilized to describe the system component interactions. In so doing, we reason about the FEMS's behaviour and demonstrate how rules within the FEMS can be developed to recognize and profile various cyber crimes. Furthermore, we design fundamental algorithms for the processing of information by the FEMS's core system components; this provides further insight into the system component interdependencies and the input and output parameters for the system transitions and decision points influencing the value of inferences derived within the FEMS. Lastly, the completeness of the FEMS is assessed by comparing the constructs and operation of the FEMS against the published work of Brian D. Carrier. This approach provides a mechanism for critically analyzing the FEMS model, to identify similarities or impactful considerations within the solution approach and, more importantly, to identify shortcomings within the model. Ultimately, the greatest value of the FEMS is in its ability to serve as a decision support or enhancement system for digital forensic investigators.
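As a purely hypothetical illustration of how a finite state automaton can constrain component interactions in a system of this kind, the sketch below encodes a few evidence-handling steps as a transition table; the states and events are invented for this sketch and are not the dissertation's actual model.

TRANSITIONS = {
    ("received", "verify_integrity"): "verified",
    ("verified", "assign_label"): "labelled",   # e.g. a Casey-style certainty label
    ("labelled", "store"): "archived",
    ("archived", "retrieve"): "under_analysis",
    ("under_analysis", "report"): "reported",
}

def apply_event(state, event):
    # Only transitions listed in the table are permitted; anything else is
    # rejected, which is how an automaton can constrain component interactions.
    if (state, event) not in TRANSITIONS:
        raise ValueError(f"event '{event}' is not permitted in state '{state}'")
    return TRANSITIONS[(state, event)]

state = "received"
for event in ("verify_integrity", "assign_label", "store"):
    state = apply_event(state, event)
print(state)  # -> archived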
Dissertation (MSc)--University of Pretoria, 2010.
Computer Science
APA, Harvard, Vancouver, ISO, and other styles
36

Ang, Thomas. "Problems Related to Shortest Strings in Formal Languages." Thesis, 2010. http://hdl.handle.net/10012/5162.

Full text of the source
Abstract:
In formal language theory, studying shortest strings in languages, and variations thereof, can be useful since these strings can serve as small witnesses for properties of the languages, and can also provide bounds for other problems involving languages. For example, the length of the shortest string accepted by a regular language provides a lower bound on the state complexity of the language. In Chapter 1, we introduce some relevant concepts and notation used in automata and language theory, and we show some basic results concerning the connection between the length of the shortest string and the nondeterministic state complexity of a regular language. Chapter 2 examines the effect of the intersection operation on the length of the shortest string in regular languages. A tight worst-case bound is given for the length of the shortest string in the intersection of two regular languages, and loose bounds are given for two variations on the problem. Chapter 3 discusses languages that are defined over a free group instead of a free monoid. We study the length of the shortest string in a regular language that becomes the empty string in the free group, and a variety of bounds are given for different cases. Chapter 4 mentions open problems and some interesting observations that were made while studying two of the problems: finding good bounds on the length of the shortest squarefree string accepted by a deterministic finite automaton, and finding an efficient way to check if a finite set of finite words generates the free monoid. Some of the results in this thesis have appeared in work that the author has participated in \cite{AngPigRamSha,AngShallit}.
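As a small illustration of the connection between shortest strings and automata (a sketch written for this summary, not taken from the thesis), a breadth-first search over a DFA's state graph yields a shortest accepted string; the transition table below is a made-up example.

from collections import deque

def shortest_accepted(transitions, start, accepting):
    # transitions: dict mapping (state, symbol) -> state.
    # Breadth-first search visits states in order of increasing distance,
    # so the first accepting state found gives a shortest accepted string.
    queue = deque([(start, "")])
    seen = {start}
    while queue:
        state, word = queue.popleft()
        if state in accepting:
            return word
        for (src, symbol), dst in transitions.items():
            if src == state and dst not in seen:
                seen.add(dst)
                queue.append((dst, word + symbol))
    return None  # empty language

transitions = {("p", "a"): "q", ("p", "b"): "p",
               ("q", "a"): "q", ("q", "b"): "r"}
print(shortest_accepted(transitions, "p", {"r"}))  # -> "ab"

The same search applied to the product automaton of two DFAs gives a shortest string in their intersection, the quantity whose worst-case length is bounded in Chapter 2.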
APA, Harvard, Vancouver, ISO, and other styles
37

Ciddi, Sibel. "Zpracování turkických jazyků." Master's thesis, 2014. http://www.nusl.cz/ntk/nusl-323086.

Full text of the source
Abstract:
Title: Processing of Turkic Languages Author: Sibel Ciddi Department: Institute of Formal and Applied Linguistics, Faculty of Mathematics and Physics, Charles University in Prague Supervisor: RNDr. Daniel Zeman, Ph.D. Abstract: This thesis presents several methods for the morphological processing of Turkic languages, such as Turkish, which pose a specific set of challenges for natural language processing. In order to alleviate the problems caused by the lack of large language resources, it makes the data sets used for morphological processing and lexicon expansion publicly available for further use by researchers. Data sparsity, caused by the highly productive and agglutinative morphology of Turkish, imposes difficulties in the processing of Turkish text, especially for methods using purely statistical natural language processing. Therefore, we evaluated a publicly available rule-based morphological analyzer, TRmorph, based on finite state methods and technologies. In order to enhance the efficiency of this analyzer, we worked on the expansion of lexicons by employing heuristics-based methods for the extraction of named entities and multi-word expressions. Furthermore, as a preprocessing step, we introduced a dictionary-based recognition method for tokenization of multi-word expressions. This method complements...
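To illustrate the dictionary-based multi-word-expression tokenization idea in a language-neutral way, here is a toy sketch; it is not TRmorph or the thesis code, and the dictionary entries are invented English examples.

MWE_DICTIONARY = {("new", "york"), ("ad", "hoc"), ("finite", "state", "machine")}
MAX_MWE_LENGTH = max(len(mwe) for mwe in MWE_DICTIONARY)

def tokenize_with_mwes(words):
    # Greedy longest-match-first: at each position, prefer the longest known
    # multi-word expression; otherwise emit the single word unchanged.
    tokens, i = [], 0
    while i < len(words):
        for span in range(min(MAX_MWE_LENGTH, len(words) - i), 1, -1):
            if tuple(w.lower() for w in words[i:i + span]) in MWE_DICTIONARY:
                tokens.append("_".join(words[i:i + span]))
                i += span
                break
        else:
            tokens.append(words[i])
            i += 1
    return tokens

print(tokenize_with_mwes("a finite state machine parses ad hoc rules".split()))
# -> ['a', 'finite_state_machine', 'parses', 'ad_hoc', 'rules']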
APA, Harvard, Vancouver, ISO, and other styles
