
Dissertations / Theses on the topic 'Data translation model'

Consult the top 17 dissertations / theses for your research on the topic 'Data translation model.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Winegar, Matthew Bryston. "Extending the Abstract Data Model." Digital Commons @ East Tennessee State University, 2005. https://dc.etsu.edu/etd/1007.

Full text
Abstract:
The Abstract Data Model (ADM) was developed by Sanderson [19] to model and predict semantic loss in data translation between computer languages. In this work, the ADM was applied to eight languages that were not considered as part of the original work. Some of the languages were found to support semantic features, such as the restriction semantics for inheritance found in languages like XML Schemas and Java, which could not be represented in the ADM. A proposal was made to extend the ADM to support these semantic features, and the requirements and implications of implementing that proposal were considered.
2

Levenberg, Abby D. "Stream-based statistical machine translation." Thesis, University of Edinburgh, 2011. http://hdl.handle.net/1842/5760.

Full text
Abstract:
We investigate a new approach for SMT system training within the streaming model of computation. We develop and test incrementally retrainable models which, given an incoming stream of new data, can efficiently incorporate the stream data online. A naive approach using a stream would use an unbounded amount of space. Instead, our online SMT system can incorporate information from unbounded incoming streams and maintain constant space and time. Crucially, we are able to match (or even exceed) the translation performance of comparable systems which are batch retrained and use unbounded space. Our approach is particularly suited for situations when there are arbitrarily large amounts of new training material and we wish to incorporate it efficiently and in small space. The novel contributions of this thesis are: 1. An online, randomised language model that can model unbounded input streams in constant space and time. 2. An incrementally retrainable translation model for both phrase-based and grammar-based systems. The model presented is efficient enough to incorporate novel parallel text at the single sentence level. 3. Strategies for updating our stream-based language model and translation model which demonstrate how such components can be successfully used in a streaming translation setting. This operates both within a single streaming environment and also in the novel situation of having to translate multiple streams. 4. Demonstration that recent data from the stream is beneficial to translation performance. Our stream-based SMT system is efficient for tackling massive volumes of new training data and offers up new ways of thinking about translating web data and dealing with other natural language streams.
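
As a rough illustration of constant-space processing of an unbounded stream (not Levenberg's randomised language model; the data structure, sizes, and function names below are assumptions), a count-min sketch can accumulate n-gram counts from a text stream in fixed memory:

```python
# A minimal sketch: n-gram counting over an unbounded stream in fixed memory.
import hashlib

WIDTH, DEPTH = 2 ** 16, 4              # fixed memory: DEPTH rows of WIDTH counters
table = [[0] * WIDTH for _ in range(DEPTH)]

def _cells(key: str):
    for row in range(DEPTH):
        digest = hashlib.blake2b(f"{row}:{key}".encode(), digest_size=8).digest()
        yield row, int.from_bytes(digest, "big") % WIDTH

def add(ngram, count=1):
    for row, col in _cells(" ".join(ngram)):
        table[row][col] += count

def estimate(ngram):
    return min(table[row][col] for row, col in _cells(" ".join(ngram)))

def update_from_sentence(tokens):
    for n in (1, 2, 3):                                # stream in 1- to 3-grams
        for i in range(len(tokens) - n + 1):
            add(tuple(tokens[i:i + n]))

# usage: feed sentences as they arrive; memory never grows with the stream
update_from_sentence("the cat sat on the mat".split())
print(estimate(("the",)), estimate(("the", "cat")))    # counts (over-estimates possible)
```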
3

Kim, Pilho. "E-model: event-based graph data model theory and implementation." Diss., Atlanta, Ga.: Georgia Institute of Technology, 2009. http://hdl.handle.net/1853/29608.

Full text
Abstract:
Thesis (Ph.D)--Electrical and Computer Engineering, Georgia Institute of Technology, 2010.
Committee Chair: Madisetti, Vijay; Committee Member: Jayant, Nikil; Committee Member: Lee, Chin-Hui; Committee Member: Ramachandran, Umakishore; Committee Member: Yalamanchili, Sudhakar. Part of the SMARTech Electronic Thesis and Dissertation Collection.
4

Blackman-Lees, Shellon. "Towards a Conceptual Framework for Persistent Use: A Technical Plan to Achieve Semantic Interoperability within Electronic Health Record Systems." NSUWorks, 2017. http://nsuworks.nova.edu/gscis_etd/998.

Full text
Abstract:
Semantic interoperability within the health care sector requires that patient data be fully available and shared without ambiguity across participating health facilities. The need for the current research was based on federal stipulations that required health facilities provide complete and optimal care to patients by allowing full access to their health records. The ongoing discussions to achieve interoperability within the health care industry continue to emphasize the need for healthcare facilities to successfully adopt and implement Electronic Health Record (EHR) systems. Reluctance by the healthcare industry to implement these EHRs for the purpose of achieving interoperability has led to the current research problem where it was determined that there is no existing single data standardization structure that can effectively share and interpret patient data within heterogeneous systems. The current research used the design science research methodology (DSRM) to design and develop a master data standardization and translation (MDST) model that allowed seamless exchange of healthcare data among multiple facilities. To achieve interoperability through a common data standardization structure, where multiple independent data models can coexist, the translation mechanism incorporated the use of the Resource Description Framework (RDF). Using RDF, a universal exchange language, allowed for multiple data models and vocabularies to be easily combined and interrelated within a single environment thereby reducing data definition ambiguity. Based on the results from the research, key functional capabilities to effectively map and translate health data were documented. The research solution addressed two primary issues that impact semantic interoperability – the need for a centralized standards repository and a framework that effectively maps and translates data between various EHRs and vocabularies. Thus, health professionals have a single interpretation of health data across multiple facilities which ensures the integrity and validity of patient care. The research contributed to the field of design science development through the advancements of the underlying theories, phases, and frameworks used in the design and development of data translation models. While the current research focused on the development of a single, common information model, further research opportunities and recommendations could include investigations into the implementation of these types of artifacts within a single environment at a multi-facility hospital entity.
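
As a loose illustration of the RDF-based translation idea (a sketch under assumed, hypothetical vocabularies, not the MDST model itself), rdflib can interrelate two facility-specific EHR codes through one shared concept and then query either record through the common term:

```python
# A minimal sketch: two local EHR codes mapped onto one shared RDF concept.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, RDFS

FAC_A = Namespace("http://example.org/facilityA/")   # hypothetical vocabularies
FAC_B = Namespace("http://example.org/facilityB/")
CORE = Namespace("http://example.org/core/")
SKOS = Namespace("http://www.w3.org/2004/02/skos/core#")

g = Graph()
# Each facility's local code is declared an exact match of one core concept.
g.add((FAC_A.SystolicBP, SKOS.exactMatch, CORE.SystolicBloodPressure))
g.add((FAC_B.BP_SYS, SKOS.exactMatch, CORE.SystolicBloodPressure))
g.add((CORE.SystolicBloodPressure, RDFS.label, Literal("Systolic blood pressure")))

# A record from facility A, stated with its local vocabulary only.
g.add((FAC_A.obs42, RDF.type, FAC_A.SystolicBP))
g.add((FAC_A.obs42, CORE.valueMmHg, Literal(128)))

# Query through the shared concept: which observations map to the core term?
q = """
SELECT ?obs WHERE {
  ?obs a ?localType .
  ?localType <http://www.w3.org/2004/02/skos/core#exactMatch>
             <http://example.org/core/SystolicBloodPressure> .
}"""
for row in g.query(q):
    print(row.obs)
```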
5

Svanberg, Kerstin. "Bringing the history of fashion up-to-date: towards a model for temporal adaptation in translation." Thesis, Linnéuniversitetet, Institutionen för språk och litteratur, SOL, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-22629.

Full text
Abstract:
In cultural adaptation, the translator has a solid theoretical ground to stand upon; scholars have elaborated strategies that are helpful to this effect. However, there is little research, if any, to rely upon in the matter of temporal adaptation. The aim of this paper is to fill this gap. The primary data used in this translational study consists of an English source text that was published in 2008 and the resulting target text, translated to Swedish in 2012. Hence, in order for the target text to function in its time, there was a four-year long time gap to fill with accurate and relevant data and in a style that would not deviate from the author’s original intentions; the target text needed to be temporally adapted. In what follows, I will suggest a set of strategies for temporal adaptation. The model is elaborated with strategies for cultural adaptation as a starting point and based upon measures taken to relocate the target text to 2012. The suggested strategies are time bridging, updating, adjustment and omission. These four strategies make up the model that I put forward to bridge the theoretical gap that seems to prevail in the matter of temporal adaptation. However, considering that the data used in this study was relatively limited, the applicability of the strategies may be the scope of future studies.
6

Ngô, Van Chan. "Formal verification of a synchronous data-flow compiler : from Signal to C." Phd thesis, Université Rennes 1, 2014. http://tel.archives-ouvertes.fr/tel-01067477.

Full text
Abstract:
Synchronous languages such as Signal, Lustre and Esterel are dedicated to designing safety-critical systems. Their compilers are large and complicated programs that may be incorrect in some contexts, and might therefore silently produce bad compiled code from correct source programs. Such bad compiled code can invalidate the safety properties that have been guaranteed on the source programs by applying formal methods. Adopting the translation validation approach, this thesis aims at formally proving the correctness of the highly optimizing and industrial Signal compiler. The correctness proof represents both the source program and the compiled code in a common semantic framework, then formalizes a relation between the source program and its compiled code to express that the semantics of the source program are preserved in the compiled code.
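
For intuition only, the sketch below contrasts a "source" synchronous stream function with a hand-written "compiled" step function and checks that their outputs agree on a bounded trace. This is a toy bounded equivalence check, not the formal, all-traces translation validation the thesis develops, and every name in it is hypothetical:

```python
# A toy illustration of comparing source semantics against compiled code.
def source_running_sum(stream):
    """Reference semantics: y_n = x_0 + ... + x_n, as a stream transformer."""
    acc = 0
    for x in stream:
        acc += x
        yield acc

def compiled_step(state, x):
    """'Compiled' imperative step function with explicit state, C-style."""
    state = state + x
    return state, state          # (next state, output)

def bounded_equivalent(trace):
    state, ok = 0, True
    for x, y_ref in zip(trace, source_running_sum(trace)):
        state, y = compiled_step(state, x)
        ok = ok and (y == y_ref)
    return ok

print(bounded_equivalent([1, 2, 3, 4, 5]))   # True on this finite prefix only
```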
7

Potet, Marion. "Vers l'intégration de post-éditions d'utilisateurs pour améliorer les systèmes de traduction automatiques probabilistes." Phd thesis, Université de Grenoble, 2013. http://tel.archives-ouvertes.fr/tel-00995104.

Full text
Abstract:
Existing machine translation technologies are now seen as a promising approach for producing translations efficiently and at reduced cost. However, the current state of the art does not yet allow full automation of the process, and human/machine cooperation remains essential for producing quality results. A common practice is to post-edit the results provided by the system, that is, to check the outputs manually and, where necessary, correct the system's errors. This post-editing work performed by users on machine translation results is a valuable source of data for analysing and adapting the systems. The problem addressed in our work is to develop an approach capable of taking advantage of this user feedback (post-editions) to improve, in return, the machine translation systems. The experiments carried out exploit a corpus of about 10,000 translation hypotheses from a reference probabilistic system, post-edited by volunteers through an online platform. The results of the first experiments integrating the post-editions, on the one hand into the translation model and on the other hand through statistical automatic post-editing, allowed us to assess the complexity of the task. A more detailed study of statistical post-editing systems allowed us to evaluate the usability of such systems as well as the contributions and limits of the approach. We also show that the collected post-editions can be used successfully to estimate the confidence to be placed in a machine translation result. The results of our work show the difficulty, but also the potential, of using post-editions of machine translation hypotheses as a source of information for improving the quality of current probabilistic systems.
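
A small, assumption-laden sketch of one way post-editions can be exploited (not the thesis's models): score each machine translation hypothesis by the token-level edit rate against its human post-edition, a rough HTER-like signal that a post-edited corpus makes possible:

```python
# A minimal sketch: edit rate between a hypothesis and its post-edition.
def edit_distance(a, b):
    """Classic Levenshtein distance over token lists."""
    prev = list(range(len(b) + 1))
    for i, ta in enumerate(a, 1):
        cur = [i]
        for j, tb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ta != tb)))   # substitution
        prev = cur
    return prev[-1]

def edit_rate(hypothesis: str, post_edition: str) -> float:
    hyp, ref = hypothesis.split(), post_edition.split()
    return edit_distance(hyp, ref) / max(len(ref), 1)

# usage: a lower rate means the system output needed fewer corrections
print(edit_rate("the cat sit on mat", "the cat sits on the mat"))
```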
8

Vestin, Albin, and Gustav Strandberg. "Evaluation of Target Tracking Using Multiple Sensors and Non-Causal Algorithms." Thesis, Linköpings universitet, Reglerteknik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-160020.

Full text
Abstract:
Today, the main research field for the automotive industry is to find solutions for active safety. In order to perceive the surrounding environment, tracking nearby traffic objects plays an important role. Validation of the tracking performance is often done in staged traffic scenarios, where additional sensors, mounted on the vehicles, are used to obtain their true positions and velocities. The difficulty of evaluating the tracking performance complicates its development. An alternative approach studied in this thesis is to record sequences and use non-causal algorithms, such as smoothing, instead of filtering to estimate the true target states. With this method, validation data for online, causal, target tracking algorithms can be obtained for all traffic scenarios without the need of extra sensors. We investigate how non-causal algorithms affect the target tracking performance using multiple sensors and dynamic models of different complexity. This is done to evaluate real-time methods against estimates obtained from non-causal filtering. Two different measurement units, a monocular camera and a LIDAR sensor, and two dynamic models are evaluated and compared using both causal and non-causal methods. The system is tested in two single object scenarios where ground truth is available and in three multi object scenarios without ground truth. Results from the two single object scenarios show that tracking using only a monocular camera performs poorly since it is unable to measure the distance to objects. Here, a complementary LIDAR sensor improves the tracking performance significantly. The dynamic models are shown to have a small impact on the tracking performance, while the non-causal application gives a distinct improvement when tracking objects at large distances. Since the sequence can be reversed, the non-causal estimates are propagated from more certain states when the target is closer to the ego vehicle. For multiple object tracking, we find that correct associations between measurements and tracks are crucial for improving the tracking performance with non-causal algorithms.
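
For intuition, a minimal sketch of the filtering-versus-smoothing contrast on an assumed one-dimensional constant-velocity model (nothing like the thesis's multi-sensor tracker): a causal Kalman filter forward pass followed by a non-causal Rauch-Tung-Striebel backward pass:

```python
# A minimal sketch: Kalman filter vs. RTS smoother on a toy 1-D track.
import numpy as np

dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])      # state: [position, velocity]
H = np.array([[1.0, 0.0]])                 # only position is measured
Q = 0.01 * np.eye(2)
R = np.array([[0.25]])

rng = np.random.default_rng(0)
truth = np.cumsum(np.full(100, 1.0) * dt)                 # constant velocity 1 m/s
z = truth + rng.normal(0.0, 0.5, size=truth.shape)        # noisy position measurements

n = len(z)
xf = np.zeros((n, 2)); Pf = np.zeros((n, 2, 2))
xp = np.zeros((n, 2)); Pp = np.zeros((n, 2, 2))
x, P = np.zeros(2), np.eye(2)
for k in range(n):                                        # causal (filtering) pass
    x, P = F @ x, F @ P @ F.T + Q
    xp[k], Pp[k] = x, P
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + (K @ (z[k] - H @ x)).ravel()
    P = (np.eye(2) - K @ H) @ P
    xf[k], Pf[k] = x, P

xs = xf.copy(); Ps = Pf.copy()
for k in range(n - 2, -1, -1):                            # non-causal (smoothing) pass
    C = Pf[k] @ F.T @ np.linalg.inv(Pp[k + 1])
    xs[k] = xf[k] + C @ (xs[k + 1] - xp[k + 1])
    Ps[k] = Pf[k] + C @ (Ps[k + 1] - Pp[k + 1]) @ C.T

print("filter RMSE  :", np.sqrt(np.mean((xf[:, 0] - truth) ** 2)))
print("smoother RMSE:", np.sqrt(np.mean((xs[:, 0] - truth) ** 2)))
```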
9

Scarlato, Michele. "Sicurezza di rete, analisi del traffico e monitoraggio." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2012. http://amslaurea.unibo.it/3223/.

Full text
Abstract:
The work is divided into three macro-areas. The first is a theoretical analysis of how intrusions work, of which software is used to carry them out, and of how to protect against them (using the devices generically known as firewalls). The second macro-area analyses an intrusion carried out from the outside against sensitive servers on a LAN. This analysis is performed on the files captured by the two network interfaces configured in promiscuous mode on a probe located in the LAN; there are two interfaces so that the probe can connect to two LAN segments with different subnet masks. The attack is analysed with several tools, which can be considered a third part of the work: the files captured by the two interfaces are examined first with software for full-content data analysis, such as Wireshark, then with software for session data, processed with Argus, and finally for statistical data, processed with Ntop. The penultimate chapter, the one before the conclusions, covers the installation of Nagios and its configuration for monitoring, via plugins, the remaining disk space on a remote agent machine and the MySQL and DNS services. Naturally, Nagios can be configured to monitor any type of service offered on the network.
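
As a hedged illustration of the kind of check Nagios can call (thresholds, path, and structure are assumptions, not the configuration used in the thesis), a minimal disk-space plugin that follows the conventional 0/1/2 exit codes might look like:

```python
#!/usr/bin/env python3
# A minimal sketch of a Nagios-style disk-space check plugin.
import shutil
import sys

WARN_PERCENT, CRIT_PERCENT = 20, 10        # free-space thresholds (assumptions)

def main(path="/"):
    usage = shutil.disk_usage(path)
    free_pct = 100 * usage.free / usage.total
    msg = f"{free_pct:.1f}% free on {path}"
    if free_pct < CRIT_PERCENT:
        print(f"CRITICAL - {msg}"); return 2
    if free_pct < WARN_PERCENT:
        print(f"WARNING - {msg}"); return 1
    print(f"OK - {msg}"); return 0

if __name__ == "__main__":
    sys.exit(main(sys.argv[1] if len(sys.argv) > 1 else "/"))
```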
10

Vincent, Charles, R. Färe, and S. Grosskopf. "A translation invariant pure DEA model." 2015. http://hdl.handle.net/10454/17539.

Full text
Abstract:
This communication complements the DEA model proposed by Lovell and Pastor (1999), by incorporating both positive and negative criteria in the model. As such, we propose a DEA model, known as pure DEA, using a directional distance function approach.
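
For orientation, a standard directional distance function DEA program under variable returns to scale (a textbook formulation, shown only as background; not necessarily the exact pure DEA variant proposed here) reads:

```latex
% Generic directional distance function DEA program for unit o,
% with inputs x, outputs y, and direction vector (g_x, g_y).
\begin{align*}
\vec{D}(x_o, y_o; g_x, g_y) = \max_{\beta,\lambda}\ & \beta \\
\text{s.t.}\quad & \sum_{j=1}^{n} \lambda_j x_{ij} \le x_{io} - \beta g_{x_i}, && i = 1,\dots,m,\\
& \sum_{j=1}^{n} \lambda_j y_{rj} \ge y_{ro} + \beta g_{y_r}, && r = 1,\dots,s,\\
& \sum_{j=1}^{n} \lambda_j = 1,\quad \lambda_j \ge 0, && j = 1,\dots,n.
\end{align*}
```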
11

Lu, Ming-Chuang, and 呂明權. "Translation Applications for Heterogeneous Database System Based on Data Model." Thesis, 1998. http://ndltd.ncl.edu.tw/handle/08308952062008251513.

Full text
12

"Automating Fixture Setups Based on Point Cloud Data & CAD Model." Master's thesis, 2016. http://hdl.handle.net/2286/R.I.40331.

Full text
Abstract:
Metal castings are selectively machined based on dimensional control requirements. To ensure that all the finished surfaces are fully machined, each as-cast part needs to be measured and then adjusted optimally in its fixture. The topics of this thesis address two parts of this process: data translations and feature-fitting clouds of points measured on each cast part. For the first, a CAD model of the finished part is required to be communicated to the machine shop for performing various machining operations on the metal casting. The data flow must include GD&T specifications along with other special notes that may be required to communicate to the machinist. Current data exchange, among various digital applications, is limited to translation of only CAD geometry via STEP AP203. Therefore, an algorithm is developed in order to read, store and translate the data from a CAD file (for example SolidWorks, CREO) to a standard and machine readable format (ACIS format - *.sat). Second, the geometry of cast parts varies from piece to piece and hence fixture set-up parameters for each part must be adjusted individually. To predictively determine these adjustments, the datum surfaces and to-be-machined surfaces are scanned individually and the point clouds reduced to feature fits. The scanned data are stored as separate point cloud files. The labels associated with the datum and to-be-machined (TBM) features are extracted from the *.sat file. These labels are further matched with the file name of the point cloud data to identify data for the respective features. The point cloud data and the CAD model are then used to fit the appropriate features (features at maximum material condition (MMC) for datums and features at least material condition (LMC) for TBM features) using the existing normative feature fitting (nFF) algorithm. Once the feature fitting is complete, a global datum reference frame (GDRF) is constructed based on the locating method that will be used to machine the part. The locating method is extracted from a fixture library that specifies the type of fixturing used to machine the part. All entities are transformed from their local coordinate systems into the GDRF. The nominal geometry, fitted features, and the GD&T information are then stored in a neutral file format called the Constraint Tolerance Feature (CTF) Graph. The final outputs are then used to identify the locations of the critical features on each part and these are used to establish the adjustments for its setup prior to machining, in another module, not part of this thesis.
Dissertation/Thesis
Masters Thesis Mechanical Engineering 2016
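
As a generic illustration of reducing a scanned point cloud to a feature fit (a plain least-squares plane fit, not the normative feature fitting (nFF) algorithm cited in the abstract; the data are synthetic), one might recover a datum plane as follows:

```python
# A minimal sketch: least-squares plane fit to a scanned point cloud.
import numpy as np

def fit_plane(points: np.ndarray):
    """points: (N, 3) array. Returns (centroid, unit normal) of the best-fit plane."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]                      # direction of least variance
    return centroid, normal / np.linalg.norm(normal)

# usage with synthetic scan data: a z = 0 plane plus measurement noise
rng = np.random.default_rng(1)
cloud = np.column_stack([rng.uniform(0, 50, 500),
                         rng.uniform(0, 50, 500),
                         rng.normal(0, 0.02, 500)])
c, n = fit_plane(cloud)
print("centroid:", c.round(3), "normal:", n.round(3))   # normal close to (0, 0, +/-1)
```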
13

Yi, Xing. "Discovering and using implicit data for information retrieval." 2011. https://scholarworks.umass.edu/dissertations/AAI3482732.

Full text
Abstract:
In real-world information retrieval (IR) tasks, the searched items and/or the users' queries often have implicit information associated with them—information that describes unspecified aspects of the items or queries. For example, in web search tasks, web pages are often pointed to by hyperlinks (known as anchors) from other pages, and thus have human-generated succinct descriptions of their content (anchor text) associated with them. This indirectly available information has been shown to improve search effectiveness for different retrieval tasks. However, in many real-world IR challenges this information is sparse in the data; i.e., it is incomplete or missing in a large portion of the data. In this work, we explore how to discover and use implicit information in large amounts of data in the context of IR. We present a general perspective for discovering implicit information and demonstrate how to use the discovered data in four specific IR challenges: (1) finding relevant records in semi-structured databases where many records contain incomplete or empty fields; (2) searching web pages that have little or no associated anchor text; (3) using click-through records in web query logs to help search pages that have no or very few clicks; and (4) discovering plausible geographic locations for web queries that contain no explicit geographic information. The intuition behind our approach is that data similar in some aspects are often similar in other aspects. Thus we can (a) use the observed information of queries/documents to find similar queries/documents, and then (b) utilize those similar queries/documents to reconstruct plausible implicit information for the original queries/documents. We develop language modeling based techniques to effectively use content similarity among data for our work. Using the four different search tasks on large-scale noisy datasets, we empirically demonstrate the effectiveness of our approach. We further discuss the advantages and weaknesses of two complementary approaches within our general perspective of handling implicit information for retrieval purpose. Taken together, we describe a general perspective that uses contextual similarity among data to discover implicit information for IR challenges. Using this general perspective, we formally present two language modeling based information discovery approaches. We empirically evaluate our approaches using different IR challenges. Our research shows that supporting information discovery tailored to different search tasks can enhance IR systems' search performance and improve users' search experience.
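
A minimal sketch of the underlying intuition (illustrative names and toy data, not the thesis's retrieval models): smooth a sparse document's unigram language model with the models of its most similar documents, so that plausible but unobserved terms receive non-zero probability:

```python
# A minimal sketch: expand a sparse document model using similar documents.
from collections import Counter
from math import sqrt

def unigram(tokens):
    c = Counter(tokens)
    total = sum(c.values())
    return {w: n / total for w, n in c.items()}

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a.keys() & b.keys())
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def expanded_model(doc, corpus, k=2, lam=0.6):
    """Interpolate the doc's own model with its k nearest neighbours' pooled model."""
    own = Counter(doc)
    neighbours = sorted(corpus, key=lambda d: cosine(own, Counter(d)), reverse=True)[:k]
    pooled = unigram([w for d in neighbours for w in d])
    model = {w: lam * p for w, p in unigram(doc).items()}
    for w, p in pooled.items():
        model[w] = model.get(w, 0.0) + (1 - lam) * p
    return model

docs = [["cheap", "flights", "to", "rome"],
        ["flights", "rome", "airfare", "deals"],
        ["rome", "hotels", "city", "centre"]]
print(sorted(expanded_model(["flights", "rome"], docs).items(),
             key=lambda kv: -kv[1])[:5])
```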
14

"Translating data into action: A data team model as the seed of comprehensive district change." NATIONAL-LOUIS UNIVERSITY, 2009. http://pqdtopen.proquest.com/#viewpdf?dispub=3333053.

Full text
15

Hsu, Wen-Yu, and 徐玟瑜. "The Study of Translator for Compiling a C-like Procedural Language to Data Flow Model." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/96903439240945990596.

Full text
Abstract:
Master's thesis
National Changhua University of Education
Department of Electronic Engineering
94
The dataflow computing system has been recognized as an innovative computing platform that can achieve highly parallel processing in a natural style, and it is generally acknowledged to perform intensive computing tasks better than traditional computing systems. However, most programmers are trained to build software systems with procedural programming languages. The aim of this thesis is therefore to develop a translator that compiles a procedural language program into a corresponding dataflow computing model, so that procedural language programs can be executed on a dataflow computing platform easily and conveniently. Petri nets serve as the internal representation of the translated dataflow computing model in our research. In order to construct the dataflow computing model on Petri nets systematically, a set of primitive Petri net templates is constructed to correspond to the primary statements of procedural languages, including sequence, selection, and looping statements. In general, a compiler is composed of analysis and synthesis phases. First, we implement the lexical, syntax, and semantic analysis of the translator with Flex and Bison, two tools for rapidly generating analysis programs. Second, the synthesis phase consists of four steps, different from the traditional synthesis phase, that convert the parse tree into the corresponding Petri net model. The revised synthesis phase includes modeling Petri nets from basic expressions and control statements, adjusting the links of some sub-Petri net models according to variable scope levels, and reconstructing the Petri net links on dependent variables according to the statement sequence. Finally, we perform simulation and verification on the generated Petri net models to demonstrate the correctness and efficiency of the translated dataflow computing model.
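
As a toy illustration of the kind of primitive template the abstract describes (all class, place, and transition names below are hypothetical, not the thesis's translator), a tiny Petri net with a sequence template can be sketched as:

```python
# A minimal sketch: a tiny Petri net plus a template for 'first; second'.
class PetriNet:
    def __init__(self):
        self.marking = {}                 # place -> token count
        self.transitions = {}             # name -> (input places, output places)

    def add_place(self, name, tokens=0):
        self.marking[name] = tokens

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (inputs, outputs)

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking[p] > 0 for p in inputs)

    def fire(self, name):
        inputs, outputs = self.transitions[name]
        assert self.enabled(name), f"{name} is not enabled"
        for p in inputs:
            self.marking[p] -= 1
        for p in outputs:
            self.marking[p] += 1

def sequence_template(net, first, second):
    """Sequence statement: the first statement's completion place is the
    only input of the second statement's transition."""
    for place in (f"start_{first}", f"done_{first}", f"done_{second}"):
        net.add_place(place)
    net.add_transition(first, [f"start_{first}"], [f"done_{first}"])
    net.add_transition(second, [f"done_{first}"], [f"done_{second}"])

net = PetriNet()
sequence_template(net, "x_assign", "y_assign")
net.marking["start_x_assign"] = 1         # place the initial token
net.fire("x_assign")
net.fire("y_assign")
print(net.marking)                        # the token ends in done_y_assign
```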
16

Magana-Mora, Arturo. "Genetic Algorithms for Optimization of Machine-learning Models and their Applications in Bioinformatics." Diss., 2017. http://hdl.handle.net/10754/623317.

Full text
Abstract:
Machine-learning (ML) techniques have been widely applied to solve different problems in biology. However, biological data are large and complex, which often result in extremely intricate ML models. Frequently, these models may have a poor performance or may be computationally unfeasible. This study presents a set of novel computational methods and focuses on the application of genetic algorithms (GAs) for the simplification and optimization of ML models and their applications to biological problems. The dissertation addresses the following three challenges. The first is to develop a generalizable classification methodology able to systematically derive competitive models despite the complexity and nature of the data. Although several algorithms for the induction of classification models have been proposed, the algorithms are data dependent. Consequently, we developed OmniGA, a novel and generalizable framework that uses different classification models in a tree-like decision structure, along with a parallel GA for the optimization of the OmniGA structure. Results show that OmniGA consistently outperformed existing commonly used classification models. The second challenge is the prediction of translation initiation sites (TIS) in plants genomic DNA. We performed a statistical analysis of the genomic DNA and proposed a new set of discriminant features for this problem. We developed a wrapper method based on GAs for selecting an optimal feature subset, which, in conjunction with a classification model, produced the most accurate framework for the recognition of TIS in plants. Results demonstrate that despite the evolutionary distance between different plants, our approach successfully identified conserved genomic elements that may serve as the starting point for the development of a generic model for prediction of TIS in eukaryotic organisms. Finally, the third challenge is the accurate prediction of polyadenylation signals in human genomic DNA. To achieve this, we analyzed genomic DNA sequences for the 12 most frequent polyadenylation signal variants and proposed a new set of features that may contribute to the understanding of the polyadenylation process. We derived Omni-PolyA, a model and tool based on OmniGA for the prediction of the polyadenylation signals. Results show that Omni-PolyA significantly reduced the average classification error rate compared to the state-of-the-art results.
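
As a generic sketch of a GA wrapper for feature selection (an illustration with an off-the-shelf dataset, classifier, and GA settings chosen here as assumptions; not OmniGA or the thesis's pipeline):

```python
# A minimal sketch: evolve a feature mask scored by cross-validated accuracy.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)
rng = np.random.default_rng(0)
POP, GEN, MUT = 12, 8, 0.05

def fitness(mask):
    if not mask.any():
        return 0.0
    clf = LogisticRegression(max_iter=5000)
    return cross_val_score(clf, X[:, mask], y, cv=3).mean()

pop = rng.random((POP, X.shape[1])) < 0.5              # random bit masks
for _ in range(GEN):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[::-1][:POP // 2]]  # truncation selection
    children = []
    for _ in range(POP - len(parents)):
        a, b = parents[rng.integers(len(parents), size=2)]
        cut = rng.integers(1, X.shape[1])               # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        child ^= rng.random(X.shape[1]) < MUT           # bit-flip mutation
        children.append(child)
    pop = np.vstack([parents] + children)

best = max(pop, key=fitness)
print("selected features:", int(best.sum()), "cv accuracy:", round(fitness(best), 4))
```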
17

Stevenson, Lynette. "Modal satisfiability in a constraint logic environment." Thesis, 2007. http://hdl.handle.net/10500/2030.

Full text
Abstract:
The modal satisfiability problem has to date been solved using either a specifically designed algorithm, or by translating the modal logic formula into a different class of problem, such as a first-order logic, a propositional satisfiability problem or a constraint satisfaction problem. These approaches and the solvers developed to support them are surveyed and a synthesis thereof is presented. The translation of a modal K formula into a constraint satisfaction problem, as developed by Brand et al. [18], is further enhanced. The modal formula, which must be in conjunctive normal form, is translated into layered propositional formulae. Each of these layers is translated into a constraint satisfaction problem and solved using the constraint solver ECLiPSe. I extend this translation to deal with reflexive and transitive accessibility relations, thereby providing for the modal logics KT and S4. Two of the difficulties that arise when these accessibility relations are added are that the resultant formula increases considerably in complexity, and that it is no longer in conjunctive normal form (CNF). I eliminate the need for the conversion of the formula to CNF and deal instead with formulae that are in negation normal form (NNF). I apply a number of enhancements to the formula at each modal layer before it is translated into a constraint satisfaction problem. These include extensive simplification, the assignment of a single value to propositional variables that occur only positively or only negatively, and caching the status of the formula at each node of the search tree. All of these significantly prune the search space. The final results I achieve compare favorably with those obtained by other solvers.
Computing
M.Sc. (Computer Science)
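
As a small illustration of the layering idea (a hypothetical formula representation, not the thesis's translation to constraint satisfaction problems in ECLiPSe), one can peel one modal level off a formula in negation normal form, replacing each boxed/diamond subformula with a surrogate propositional variable whose content forms the next layer:

```python
# A minimal sketch: split off one modal layer of an NNF formula.
from itertools import count

fresh = count()

def peel(formula, surrogates):
    """Return the propositional skeleton; record modal subformulae in surrogates."""
    if isinstance(formula, str):                     # a literal such as 'p' or '~p'
        return formula
    op, *args = formula
    if op in ("and", "or"):
        return (op, *(peel(a, surrogates) for a in args))
    if op in ("box", "dia"):
        name = f"_m{next(fresh)}"                    # surrogate variable
        surrogates[name] = (op, args[0])             # its content belongs to the next layer
        return name
    raise ValueError(f"unexpected operator {op!r}")

# p AND box(q OR dia(r)), already in NNF
phi = ("and", "p", ("box", ("or", "q", ("dia", "r"))))
layer1 = {}
print("layer 0 skeleton:", peel(phi, layer1))
print("layer 1 content :", layer1)
```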