Dissertations / Theses on the topic 'Edit'

Consult the top 50 dissertations / theses for your research on the topic 'Edit.'

1

Larsson, Robin, and Martin Davik. "WP-Edit." Thesis, Högskolan i Gävle, Avdelningen för Industriell utveckling, IT och Samhällsbyggnad, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:hig:diva-17044.

Full text
Abstract:
The aim of this work is to create a system in which other web developers can build their websites online. The developer does not need a web server of their own, which can be a problem for many who work on computers with restricted permissions that prevent them from downloading and installing a web server. Another advantage is that the developer can log in from anywhere, at any time, and edit the code of a website. Should the developer want to host the project on a server of their own, the project can easily be downloaded.
2

Berti, Monica. "Epigraphy Edit-a-thon." Universitätsbibliothek Leipzig, 2017. http://nbn-resolving.de/urn:nbn:de:bsz:15-qucosa-220763.

Full text
3

Houseton, Fran. "Saved By the Edit." Digital Commons @ East Tennessee State University, 2019. https://dc.etsu.edu/honors/505.

Full text
4

Carlson, Hedda. "write drunk/edit sober." Thesis, Högskolan i Borås, Akademin för textil, teknik och ekonomi, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:hb:diva-22043.

Full text
Abstract:
This work starts with a simple question: why not draw the garment directly on the body, since this is the way it will inevitably be worn? Working through steps of wrapping fabric around the body and drawing the garment onto it to reveal construction lines based directly on the body, the work shows an alternative way of constructing a garment; the result presented can be seen as a base for further development within the field this method has explored. Further, the work challenges current norms in archetypical garments with the intention of redefining their expression; the method aims to broaden the field of garment construction by investigating the gap between construction lines and material expectations. The method Write Drunk/Edit Sober both uncovers the fundamentals of garment construction and questions the systematic interpretations we place on a garment's connection to materials.
5

Olsson, Emil. "Fyra analyser av Edit Södergran." Thesis, Mittuniversitetet, Avdelningen för humaniora, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-23692.

Full text
6

Holmquist, Johan. "Formalisation of edit operations for structure editors." Thesis, Linköping University, Department of Computer and Information Science, 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-5946.

Full text
Abstract:
Although several systems with structure editors have been built, no model exists to formally describe the edit operations used in such editors. This thesis introduces such a model: a formalism describing general structure edit operations for text-oriented documents. The model allows free bottom-up editing of any tree-based structural document with textual content, and it can also handle attributed and erroneous structures. Some classes of common structures have been identified, and structure editor specifications have been constructed for them; these can be used and combined in the creation of other structure editors.
7

Crowder, Benjamin M. "Ansible: Select-to-Edit for Physical Widgets." BYU ScholarsArchive, 2020. https://scholarsarchive.byu.edu/etd/9266.

Full text
Abstract:
Ansible brings select-to-edit functionality to physical widgets. When programming sets of physical widgets, it can be bothersome for a programmer to remember the name of the software object that corresponds to a specific widget. Click-to-edit functionality in GUI programming provides a physical action (moving the mouse to a widget and clicking a mouse button) to select a virtual widget. In a similar vein, when programming physical widgets, it is natural to point at a widget and think, "I want to program that one." Ansible allows physical user interface programmers to "click" on a physical widget through a physical action: shining a light on it, waving a magnet over it, or pressing a button on the widget. This brings up the widget's code for editing on a laptop or workstation. The Ansible system is intended to help physical user interface programmers prototype distributed systems built from physical widgets. We conducted a user study with twelve programmers using Ansible; the study showed that shining a light eliminates the need for a programmer to remember the mapping between physical widgets and their names. We also built three example systems to illustrate the kinds of systems that can be implemented using Ansible.
8

Bodemyr, Oskar. "Resolving Higher-Order Conflicts in Edit History Refactoring." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-196685.

Full text
Abstract:
When committing source code changes to a version control system, the changes may affect more than one of the tasks connected to the project. This complicates the analysis of changes, makes it difficult to reuse or undo previous changes, and obscures the evolution of the software. With edit history refactoring, the edit history of source code can be reconfigured to help the developer separate commits into smaller ones, so that no commit affects more than one task. However, separated commits can have unwanted effects on the source code. The aim of this thesis project is to resolve conflicts that may occur when refactoring an edit history. Changes have been made to Historef, a tool created by Saeki Laboratory at Tokyo Institute of Technology that uses the technique of edit history refactoring. The changes let the tool evaluate possible edit history refactorings in order to suggest one that avoids source code conflicts. By testing example edit histories, one can show that the tool avoids different types of conflicts, and that the separated commits can be committed to a version control system without unwanted effects.
9

Mandal, Bikash. "Comparison of edit history clustering techniques for spatial hypertext." Texas A&M University, 2005. http://hdl.handle.net/1969.1/3184.

Full text
Abstract:
History mechanisms available in hypertext systems allow access to past user interactions with the system. This helps users evaluate past work and learn from past activity; it also allows systems to identify usage patterns and potentially predict behaviours. Thus, recording history is useful to both the system and the user. Various tools and techniques have been developed to group and annotate history in the Visual Knowledge Builder (VKB), but these tools operate manually: for a large VKB history grown over a long period of time, performing grouping operations with them is difficult and time consuming. This thesis examines methods to analyze VKB history in order to automatically group, or cluster, all the user events in that history. Three approaches are compared. The first is a pattern-matching approach that identifies repeated patterns of edit events in the history. The second is a rule-based approach that uses simple rules, such as grouping all consecutive events on a single object. The third uses hierarchical agglomerative clustering (HAC), where edits are grouped based on a function of edit time and edit location. The contributions of this thesis are: (a) tools to automatically cluster large VKB histories using these approaches; (b) an analysis of the performance of each approach to determine their relative strengths and weaknesses; and (c) an answer to how well the automatic clustering approaches perform, obtained by comparing their results with manual groupings performed by actual users on the same VKB histories. The results show that the rule-based approach performs best: it best matches human-defined groups and generates the fewest groups. The hierarchical agglomerative clustering approach falls between the other two with regard to identifying human-defined groups, while the pattern-matching approach generates many potential groups but only a few that match those created by actual VKB users.
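The rule-based approach described above ("group all consecutive events on a single object") can be sketched in a few lines. The event representation used here, a (object id, action) pair, is a hypothetical simplification, not VKB's actual history model:

```python
# Sketch of the rule-based grouping approach: consecutive edit events
# that touch the same object are merged into one group. The event
# format (object_id, action) is an illustrative assumption.

def group_consecutive(events):
    """Group a linear edit history into runs of events on one object."""
    groups = []
    for obj, action in events:
        if groups and groups[-1][0] == obj:
            # Same object as the previous event: extend the open group.
            groups[-1][1].append(action)
        else:
            # Different object: start a new group.
            groups.append((obj, [action]))
    return groups
```

For example, a history touching objects a, a, b, a would yield three groups, since the two runs of edits on object a are separated by an edit on b.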
10

Brinkmeyer, Grant Rogers. "Intercept System to Edit, Control, and Analyze Packets (ISECAP)." [Ames, Iowa : Iowa State University], 2008.

Find full text
11

Ahmed, Algabli Shaima. "Learning the Graph Edit Distance through embedding the graph matching." Doctoral thesis, Universitat Rovira i Virgili, 2020. http://hdl.handle.net/10803/669612.

Full text
Abstract:
Graphs are abstract data structures used to model real problems with two basic entities: nodes and edges. Each node or vertex represents a relevant point of interest of a problem, and each edge represents the relationship between these points. Nodes and edges can be attributed to increase the accuracy of the model; these attributes may range from feature vectors to description labels. Owing to this versatility, graphs have found many applications in fields such as computer vision, biomedicine, and network analysis. The first part of this thesis presents a general method to automatically learn the edit costs involved in the Graph Edit Distance. The method is based on embedding pairs of graphs and their ground-truth node-to-node mapping into a Euclidean space. In this way, the learning algorithm does not need to compute any error-tolerant graph matching, which is the main drawback of other methods due to its intrinsic exponential computational complexity. Nevertheless, the learning method has the main restriction that edit costs have to be constant. We then test this method on several graph databases and also apply it to image registration. In the second part of the thesis, the method is particularized to fingerprint verification. The two main differences with respect to the general method are that substitution edit costs are defined only on the nodes, so the graphs are assumed to have no edges, and that the learning method is based on linear regression rather than linear classification.
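The fingerprint special case described above (graphs with no edges and substitution costs only on nodes) reduces graph edit distance to an assignment problem between the two node sets. A minimal brute-force sketch, assuming equal-sized graphs with numeric attribute vectors and a plain Euclidean substitution cost (both are illustrative assumptions, not the thesis's learned costs):

```python
# Edge-less graph edit distance with node-substitution costs only:
# find the node-to-node mapping that minimises the total substitution
# cost. Brute force over permutations, so only suitable for tiny graphs.
from itertools import permutations

def node_sub_ged(g1, g2):
    """g1, g2: equal-length lists of node attribute tuples."""
    def cost(u, v):
        # Euclidean distance between attribute vectors (an assumption;
        # in practice this cost function would be learned).
        return sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5

    best = float("inf")
    for perm in permutations(range(len(g2))):
        total = sum(cost(g1[i], g2[j]) for i, j in enumerate(perm))
        best = min(best, total)
    return best
```

A real implementation would replace the factorial-time permutation search with a polynomial assignment solver; the point here is only the reduction itself.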
12

Bafail, Alaa. "EDIT : an Educational Design Intelligence Tool for supporting design decisions." Thesis, Nottingham Trent University, 2016. http://irep.ntu.ac.uk/id/eprint/33240/.

Full text
Abstract:
Designing for learning is a complex task and one of the most fundamental activities of teaching practitioners. A well-balanced teaching system ensures that all aspects of teaching, from the intended learning outcomes through the teaching and learning activities to the assessment tasks, are associated and aligned with each other (Biggs, 1996), which guarantees appropriate and therefore effective student engagement. The design and promotion of constructively aligned teaching practices has been supported to some degree by software tools that attempt to assist teaching practitioners in the design process and in making more informed design decisions. Despite their potential, existing tools have several limitations in the support and guidance they provide, and they cannot be adapted according to how a design pattern works in practice. There is therefore a real need for an intelligent metric system that enables design decisions to be made not only theoretically, according to pedagogical theories, but also practically, based on good design practices associated with high satisfaction scores. To overcome the limitations of existing design tools, this research explores machine learning techniques, in particular artificial neural networks, as an innovative approach to building an Educational Design Intelligence Tool (EDIT) that supports teaching practitioners in measuring, aligning, and editing their teaching designs based on good design practices and on the pedagogic theory of constructive alignment. Student satisfaction scores are used as indicators of good design practice to identify meaningful alignment ranges for the main components of Tepper's metric (2006); modules designed within those ranges should be well-formed and constructively aligned and potentially yield higher student satisfaction. On this basis, the research developed a substantial module design database of 519 design patterns spanning 476 modules from the STEM disciplines; by comparison, the state-of-the-art Learning Design Support Environment (LDSE) (Laurillard, 2011) includes 122 design patterns. To give EDIT a neural-based framework, a neural auto-encoder was incorporated to act as an auto-associative memory that learns from exposure to sets of 'good' design patterns. The 519 generated design patterns were coded as input criteria and presented to a feed-forward multilayer perceptron using the hyperbolic tangent activation function and the back-propagation training algorithm. After successful training (88%), the testing phase presented 102 new patterns (associated with low student satisfaction) to the network, which produced higher pattern errors, indicating that the network had generated substantial design changes to the input patterns. The findings are significant in showing the degree of change in the test patterns before and after, and in evaluating the relationships between the core features of module designs and overall student satisfaction. T-test results show statistically significant before/after differences in the alignment score between learning outcomes and learning objectives (V1) and between learning objectives and teaching activities (V2), but not in the alignment score between learning outcomes and assessment tasks (V3). The network gives average improvements of 0.9, 1.5, and 0.5 in the alignment scores of V1, V2, and V3, respectively, which increased the average satisfaction score from 3.3 to 3.8. Accordingly, positive correlations of varying strength between student satisfaction and the alignment scores emerged from applying the network's proposed changes. EDIT, with its data-oriented and adaptive approach to design, reveals orthodox practices while also exposing some unexpected incongruity between alignment theory and design practice. For example, as expected, increasing the amount of questioning, interaction, and group-based activity brings higher levels of student satisfaction even where misalignment is present. However, the model is relatively ambivalent towards the alignment of learning outcomes and learning objectives, suggesting some confusion among practitioners as to how these are related. This confusion also appears to persist when defining session learning objectives for different types of teaching, learning, and assessment tasks, in that the activities themselves appear to be at a higher cognitive level on Bloom's taxonomy than the respective learning objectives (resulting in positive misalignment).
13

Samuelsson, Axel. "Weighting Edit Distance to Improve Spelling Correction in Music Entity Search." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-210036.

Full text
Abstract:
This master's thesis project investigated whether the established Damerau-Levenshtein edit distance between two strings could be made more useful for detecting and correcting misspellings in a search query. The idea was to exploit the fact that many users type their queries on a QWERTY keyboard layout, weighting the edit distance so that it is cheaper to correct misspellings caused by confusing nearby keys. Two weighting approaches were tested: one spread the weights linearly from 2/9 to 2 depending on keyboard distance, and the other preferred neighbouring keys over non-neighbouring ones (at either half cost or no cost at all). They were tested against an unweighted baseline, as well as against inverted versions of themselves (nearer keys more expensive to replace), on a dataset of 1,162,145 searches. No significant improvement in the retrieval of search results was observed compared to the baseline. However, each weighting performed better than its corresponding inversion at the p < 0.05 significance level. Thus, while the weighted edit distance did not outperform the baseline, the data still clearly points toward a correlation between the physical position of keys on the keyboard and the spelling mistakes that are made.
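The keyboard-weighted substitution idea described above can be sketched as a Damerau-Levenshtein variant whose substitution cost drops for adjacent QWERTY keys. The 0.5/1.0 weights and the flat key grid are illustrative assumptions, not the thesis's 2/9-to-2 linear scheme:

```python
# Damerau-Levenshtein distance with a keyboard-aware substitution cost:
# substituting a key by one of its QWERTY neighbours costs 0.5 instead
# of 1.0 (illustrative weights). Insert/delete/transpose cost 1.

QWERTY_ROWS = ["qwertyuiop", "asdfghjkl", "zxcvbnm"]

def key_pos(c):
    for r, row in enumerate(QWERTY_ROWS):
        if c in row:
            return (r, row.index(c))
    return None  # not a letter key

def sub_cost(a, b):
    pa, pb = key_pos(a), key_pos(b)
    if pa is None or pb is None:
        return 1.0
    # Chebyshev distance 1 on the (staggered, here flattened) key grid
    # counts as "neighbouring" -- an approximation of physical distance.
    if max(abs(pa[0] - pb[0]), abs(pa[1] - pb[1])) == 1:
        return 0.5
    return 1.0

def weighted_dl(s, t):
    m, n = len(s), len(t)
    d = [[0.0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = float(i)
    for j in range(n + 1):
        d[0][j] = float(j)
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0.0 if s[i - 1] == t[j - 1] else sub_cost(s[i - 1], t[j - 1])
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost)  # substitution
            if i > 1 and j > 1 and s[i - 1] == t[j - 2] and s[i - 2] == t[j - 1]:
                d[i][j] = min(d[i][j], d[i - 2][j - 2] + 1)  # transposition
    return d[m][n]
```

With this weighting, "cat" is closer to "cst" (a and s are adjacent) than to "cpt" (a and p are far apart), which is exactly the asymmetry the thesis exploits.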
14

Diedrich, Andrea Edit [Verfasser]. "Entwicklung einer nanopartikulären Formulierung zur Vakzinierung über den Respirationstrakt / Andrea Edit Diedrich." Kiel : Universitätsbibliothek Kiel, 2018. http://d-nb.info/116289251X/34.

Full text
15

Kostov, Viktor, and Andriy Slyusar. "Development of a Track Editing System for Use with Maps on Smartphones." Thesis, Linnéuniversitetet, Institutionen för datavetenskap, fysik och matematik, DFM, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-28426.

Full text
16

Santacruz, Muñoz José Luis. "Error-tolerant Graph Matching on Huge Graphs and Learning Strategies on the Edit Costs." Doctoral thesis, Universitat Rovira i Virgili, 2019. http://hdl.handle.net/10803/668356.

Full text
Abstract:
Graphs are abstract data structures used to model real problems with two basic entities: nodes and edges. Each node or vertex represents a relevant point of interest of a problem, and each edge represents the relationship between these points. Nodes and edges can be attributed to increase the accuracy of the modelled problem; these attributes may range from feature vectors to description labels. Owing to this versatility, graphs have found many applications in fields such as computer vision, biomedicine, and network analysis. Graph Edit Distance (GED) has become an important tool in structural pattern recognition, since it allows the dissimilarity of attributed graphs to be measured. The first part of this thesis presents a method to generate pairs of graphs together with an upper- and lower-bound distance and a correspondence, at linear computational cost. With this method, the behaviour of a known (or new) sub-optimal error-tolerant graph matching algorithm can be tested against lower and upper bounds on the GED of large graphs, even when the true distance is unknown. Next, the thesis focuses on how to measure the dissimilarity between two huge graphs (more than 10,000 nodes), using a new error-tolerant graph matching algorithm called Belief Propagation, with a computational cost of O(d^3.5 n). The thesis also presents a general framework for automatically learning the edit costs involved in GED calculations, which is then concretized in two models based on neural networks and probability density functions. An exhaustive practical validation on 14 public databases shows that accuracy is higher with the learned edit costs than with manually imposed costs or with costs learned automatically by previous methods. Finally, an application of the Belief Propagation algorithm to the simulation of muscle mechanics is proposed.
17

Røkenes, Håkon Drolsum. "Graph-based Natural Language Processing : Graph edit distance applied to the task of detecting plagiarism." Thesis, Norges teknisk-naturvitenskapelige universitet, Institutt for datateknikk og informasjonsvitenskap, 2012. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-20778.

Full text
Abstract:
The focus of this thesis is the exploration of graph-based similarity in the context of natural language processing. The work is motivated by a need for richer representations of text. A graph edit distance algorithm was implemented that calculates the difference between graphs. Sentences were represented by dependency graphs, which consist of words connected by dependencies; a dependency graph captures the syntactic structure of a sentence. The graph-based similarity approach was applied to the problem of detecting plagiarism and compared against state-of-the-art systems. The key advantages of graph-based textual representations are word-order indifference and the ability to capture similarity between words based on sentence structure. The approach was compared against contributions to the PAN plagiarism detection challenge at the CLEF 2011 conference, and would have placed 5th out of 10 contestants. The evaluation results suggest that the approach can be applicable to the task of detecting plagiarism, but requires some fine-tuning of input parameters. The evaluation also demonstrated that dependency graphs are best represented with directed edges, and that the graph edit distance algorithm scored best with a combination of node and edge label matching. Applying different edit weights increased performance further. Keywords: Graph Edit Distance, Natural Language Processing, Dependency Graphs, Plagiarism Detection
APA, Harvard, Vancouver, ISO, and other styles
18

Wyer, Sarah. "Folk Networks, Cyberfeminism, and Information Activism in the Art+Feminism Wikipedia Edit-a-thon Series." Thesis, University of Oregon, 2017. http://hdl.handle.net/1794/22752.

Full text
Abstract:
This thesis explores how the Art+Feminism Wikipedia Edit-a-thon event impacts the people who coordinate and participate in it. I review museum catalogs to determine institutional representation of women artists, and then examine the Edit-a-thon as a vernacular event on two levels: national and local. The founders have a shared vision of combating perceived barriers to participation in editing Wikipedia, but their larger goal is to address the biases in Wikipedia’s content. My interviews with organizers of the local Eugene, Oregon, edit-a-thon revealed that the network connections possible via the Internet platform of the event did not supersede the importance of face-to-face interaction and vernacular expression during the editing process. The results of my fieldwork found a clear ideological connection to the national event through the more localized satellite edit-a-thons. Both events pursue the consciousness-raising goal of information activism and the construction of a community that advocates for women’s visibility online.
APA, Harvard, Vancouver, ISO, and other styles
19

Kiviloog, Liisa. "Interacting with EDIT. A Qualitative Study on, and a Re-design of, an Educational Technology System." Thesis, Linköping University, Department of Computer and Information Science, 2002. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-1469.

Full text
Abstract:
This thesis aimed to study the interaction between an educational technology system and its users and to give suggestions for design improvements. The system is called EDIT (Educational Development through Information Technology) and has been developed and applied at Linköping University's Faculty of Health Science. EDIT supports Problem Based Learning and enables scenarios to be presented through the World Wide Web.

The study was divided into two parts. The first part consisted of a qualitative study with the objective of describing the interaction between the students and EDIT. Students from the faculty's medical, nursing and social care programs were interviewed and observed using the system. The study showed that EDIT was not fully designed to support multi-user interaction: it could only be operated by one user at a time, which in turn made the interaction reliant on the operator's technical knowledge and ability to handle the system. The second part consisted of a redesign of EDIT. The design goal was to create a groupware system that could be operated by multiple users. The design solutions were presented as lo-fi prototypes to three EDIT users. The users approved of the ideas but stressed the danger of using too advanced and unfamiliar technology.
APA, Harvard, Vancouver, ISO, and other styles
20

Jiang, Zijian. "Investigating and Recommending Co-Changed Entities for JavaScript Programs." Thesis, Virginia Tech, 2020. http://hdl.handle.net/10919/101102.

Full text
Abstract:
JavaScript (JS) is one of the most popular programming languages due to its flexibility and versatility, but debugging JS code is tedious and error-prone. In our research, we conducted an empirical study to characterize the relationship between co-changed software entities (e.g., functions and variables), and built a machine learning (ML)-based approach to recommend an additional entity to edit given developers' code changes. Specifically, we first crawled 14,747 commits in 10 open-source projects; for each commit, we created one or more change dependency graphs (CDGs) to model the referencer-referencee relationship between co-changed entities. Next, we extracted the common subgraphs between CDGs to locate recurring co-change patterns between entities. Finally, based on those patterns, we extracted code features from co-changed entities and trained an ML model that recommends entities-to-change given a program commit. According to our empirical investigation, (1) 50% of the crawled commits involve multi-entity edits (i.e., edits that touch multiple entities simultaneously); (2) three recurring patterns commonly exist in all projects; (3) 80-90% of co-changed function pairs either invoke the same function(s), access the same variable(s), or contain similar statement(s); and (4) our ML-based approach CoRec recommended entity changes with high accuracy. This research will improve programmer productivity and software quality.

M.S.

This thesis introduced a tool, CoRec, which can provide co-change suggestions when JavaScript programmers fix a bug. A comprehensive empirical study was carried out on 14,747 multi-entity bug fixes in ten open-source JavaScript programs. We characterized the relationship between co-changed entities (e.g., functions and variables) and extracted the most popular change patterns, based on which we built a machine learning (ML)-based approach to recommend an additional entity to edit given developers' code changes. Our empirical study shows that: (1) 50% of the crawled commits involve multi-entity edits (i.e., edits that touch multiple entities simultaneously); (2) three change patterns commonly exist in all ten projects; (3) 80-90% of co-changed function pairs in the three patterns either invoke the same function(s), access the same variable(s), or contain similar statement(s); and (4) our ML-based approach CoRec recommended entity changes with high accuracy. Our research will improve programmer productivity and software quality.
APA, Harvard, Vancouver, ISO, and other styles
21

Dosi, Shubham. "Optimization and Further Development of an Algorithm for Driver Intention Detection with Fuzzy Logic and Edit Distance." Master's thesis, Universitätsbibliothek Chemnitz, 2016. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-qucosa-202567.

Full text
Abstract:
Inspired by the idea of Vision Zero, a lot of work needs to be done in the field of advanced driver assistance systems to develop safer systems. Driver intention detection, with a prediction of the driver's upcoming behavior, is one possible way to reduce fatalities in road traffic. Driver intention detection provides an early warning of the driver's behavior to an Advanced Driver Assistance System (ADAS) and at the same time reduces the risk of non-essential warnings. This significantly reduces the warning-dilemma problem and makes the system safer. A driving-maneuver prediction can be regarded as an implementation of driver behavior, so the aim of this thesis is to determine the driver's intention by early prediction of a driving maneuver using Controller Area Network (CAN) bus data. The focus of this thesis is to optimize and further develop an algorithm for driver intention detection with fuzzy logic and the edit distance method. First, the basics of driver intention detection are described, as there exist different ways to determine it; this work uses CAN bus data. The algorithm overview and the design parameters are described next, to give an idea of how the algorithm functions. Then different implementation tasks for optimizing and further developing the algorithm are explained, with the main aim of improving the overall performance of the algorithm in terms of True Positive Rate (TPR), False Positive Rate (FPR) and earliness values. At the end, the results are validated to check the algorithm's performance under different conditions, and a test drive is performed to evaluate the real-time capability of the algorithm. Lastly, the use of the driver intention detection algorithm to make an ADAS safer is described in detail. The early-warning information can be fed to an ADAS, for example an automatic collision avoidance or lane change assistance system, to further improve safety for these systems.
APA, Harvard, Vancouver, ISO, and other styles
22

Callemein, Gwenaëlle. "L'empoisonnement devant la justice criminelle française en application de l'édit sur les empoisonneurs (1682-1789)." Thesis, Nice, 2015. http://www.theses.fr/2015NICE0031.

Full text
Abstract:
L’empoisonnement est une infraction qui est apparue tardivement, bien que le poison soit depuis longtemps utilisé comme une arme criminelle redoutable. En 1682, il fait l’objet d’une réglementation spécifique qui le distingue du simple homicide et qui encadre de manière rigoureuse le commerce des substances vénéneuses. Depuis cette date, l’empoisonnement a toujours été incriminé de façon autonome dans le droit français. Aussi, cette nouveauté juridique soulève de nombreuses questions d’une part sur la constitution de l’infraction et, de l’autre, sur sa répression par les tribunaux. L’empoisonnement étant un crime difficilement démontrable, la question de la preuve se pose à chaque instant. Par conséquent, il faut interroger la justice criminelle pour comprendre l’apport de cette nouvelle législation et les spécificités qui sont propres au crime d’empoisonnement, tant dans le déroulement de la procédure criminelle que dans la sanction appliquée aux empoisonneurs.

Poisoning is an offence that appeared late, though poison had long been used as a powerful criminal weapon. In 1682, a specific regulation distinguished it from simple homicide and rigorously supervised the trade in poisonous substances. Since then, poisoning has always been incriminated independently in French law. This legal novelty raised many questions, on the one hand about the constitution of the offence and, on the other, about its repression by the courts. As poisoning is a crime that is hard to prove, the question of evidence arises at every moment. We must therefore examine the criminal justice system to understand the contribution of this new legislation and the specificities particular to the crime of poisoning, both in the course of criminal procedure and in the penalties applied to poisoners.
APA, Harvard, Vancouver, ISO, and other styles
23

Sullivan, Jennifer Niamh. "Approaching intonational distance and change." Thesis, University of Edinburgh, 2011. http://hdl.handle.net/1842/5619.

Full text
Abstract:
The main aim of this thesis is to begin to extend phonetic distance measurements to the domain of intonation. Existing studies of segmental phonetic distance have strong associations with historical linguistic questions. I begin with this context and demonstrate problems with the use of feature systems in these segmental measures. Then I attempt to draw strands from the disparate fields of quantitative historical linguistics and intonation together. The intonation of Belfast and Glasgow English provides a central case study for this. Previous work suggests that both varieties display nuclear rises on statements, yet they have never been formally compared. This thesis presents two main hypotheses on the source of these statement rises: the Alignment hypothesis and the Transfer hypothesis. The Alignment hypothesis posits that statement rises were originally more typical statement falls but have changed into rises over time through gradual phonetic change to the location of the pitch peak. The Transfer hypothesis considers that statement rises have come about through pragmatic transfer of rises onto a statement context, either from question rises or continuation rises. I evaluate these hypotheses using the primary parameters of alignment and scaling as phonetic distance measurements. The main data set consists of data from 3 Belfast English and 3 Glasgow English speakers in a Sentence reading task and Map task. The results crucially indicate that the origin of the statement rises in Belfast and Glasgow English respectively may be different. The Glasgow statement nuclear tones show support for the Alignment hypothesis, while the Belfast nuclear tones fit best with the Transfer hypothesis. The fundamental differences between Glasgow and Belfast are the earlier alignment of the peak (H) in Glasgow and the presence of a final low (L) tonal target in Glasgow and a final high (H) target in Belfast. 
The scaling of the final H in Belfast statements suggests that the transfer may be from continuation rather than from question rises. I then present a proposal for an overall measure of intonational distance, showing problems with parameter weighting, comparing like with like, and distinguishing between chance resemblance and genuine historical connections. The thesis concludes with an assessment of the benefits that intonational analysis could bring to improving segmental phonetic distance measures.
APA, Harvard, Vancouver, ISO, and other styles
24

Schulz, Drew. "PiaNote: A Sight-Reading Program That Algorithmically Generates Music Based on Human Performance." DigitalCommons@CalPoly, 2016. https://digitalcommons.calpoly.edu/theses/1579.

Full text
Abstract:
Sight-reading is the act of performing a piece of music at first sight. This can be a difficult task to master, because it requires extensive knowledge of music theory, practice, quick thinking, and most importantly, a wide variety of musical material: a musician can only effectively sight-read with a new piece of music. This not only requires many resources, but also musical pieces that are challenging while still within a player's abilities. This thesis presents PiaNote, a sight-reading web application for pianists that algorithmically generates music based on human performance. PiaNote's goal is to alleviate some of the hassles pianists face when sight-reading. PiaNote presents musicians with algorithmically generated pieces, ensuring that a musician never sees the same piece of music twice. PiaNote also monitors player performances in order to intelligently present music that is challenging but within the player's abilities. As a result, PiaNote offers a sight-reading experience that is tailored to the player. On a broader level, this thesis explores different methods of effectively creating a sight-reading application. We evaluate PiaNote with a user study involving novice piano players, who actively practice with PiaNote over three fifteen-minute sessions. At the end of the study, users are asked to determine whether PiaNote is an effective practice tool that improves both their confidence in sight-reading and their sight-reading abilities. Results suggest that PiaNote does improve users' sight-reading confidence and abilities, but further research must be conducted to clearly validate PiaNote's effectiveness. We conclude that PiaNote has the potential to become an effective sight-reading application with slight improvements and further research.
APA, Harvard, Vancouver, ISO, and other styles
25

Ibragimov, Rashid [Verfasser], and Jan [Akademischer Betreuer] Baumbach. "Exact and heuristic algorithms for network alignment using graph edit distance models / Rashid Ibragimov. Betreuer: Prof. Dr. Jan Baumbach." Saarbrücken : Saarländische Universitäts- und Landesbibliothek, 2015. http://d-nb.info/1067098542/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Prause, Annabel [Verfasser], Ansgar [Akademischer Betreuer] Steland, Rainer von [Akademischer Betreuer] Sachs, and Edit [Akademischer Betreuer] Gombay. "Sequential Nonparametric Detection of High-Dimensional Signals under Dependent Noise / Annabel Prause ; Ansgar Steland, Rainer von Sachs, Edit Gombay." Aachen : Universitätsbibliothek der RWTH Aachen, 2015. http://d-nb.info/1126271551/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Hampe, Martínez Teodoro. "Zevallos Quiñones, Jorge. Historia de Chiclayo (siglos XVI, XVII, XVIII y XIX). Lima: Lib. Edit. Minerva, 1995. 193 p." Pontificia Universidad Católica del Perú, 2014. http://repositorio.pucp.edu.pe/index/handle/123456789/122122.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Palladino, Chiara. "Round table report: Epigraphy Edit-a-thon: editing chronological and geographic data in ancient inscriptions: April 20-22, 2016." Epigraphy Edit-a-thon : editing chronological and geographic data in ancient inscriptions ; April 20-22, 2016 / edited by Monica Berti. Leipzig, 2016. Beitrag 15, 2016. https://ul.qucosa.de/id/qucosa%3A15477.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Ferry, A. Douglas. "A project to compile and edit a devotional book for military personnel written by members of the United States Chaplains Corps." Theological Research Exchange Network (TREN), 1989. http://www.tren.com.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Remes, J. (Janne). "The development of laser chemical vapor deposition and focused ion beam methods for prototype integrated circuit modification." Doctoral thesis, University of Oulu, 2006. http://urn.fi/urn:isbn:9514281403.

Full text
Abstract:
In this work, the LCVD of copper and nickel from the precursor gases Cu(hfac)tmvs and Ni(CO)4 has been investigated. The in-house constructed LCVD system and processes, and their practical utilisation in prototype integrated circuit edit work, are described. The investigated process parameters include laser power, laser scan speed, precursor partial pressure and the effect of H2 and He carrier gases. The deposited metal conductor lines have been examined by LIMA, AFM, FIB secondary electron/ion micrography, and by electrical measurements. Furthermore, a study of experimental FIB circuit edit processes is carried out and discussed, with particular emphasis on ion-beam-induced ESD damage. It is shown how the LCVD and FIB methods can be combined to create a novel method that successfully carries out circuit edit cases where each method alone would fail. The combined FIB/LCVD method is shown to be highly complementary and effective in practical circuit edit work in terms of reduced process time and improved yield. Circuit edit cases where both technologies are successfully used in a complementary way are presented. Selected examples of special circuit edit cases include RF circuit editing, a high-resolution method for reducing the resistance of FIB-deposited tungsten conductor lines, and large-area EMI shielding of IC surfaces. Based on the research, a formal workflow for the combined process was developed and applied to 132 circuit edit cases with 85% yield. The combined method was applied to 30% of the total number of edit cases. Finally, the developed process and constructed system were commercialized.
APA, Harvard, Vancouver, ISO, and other styles
31

Viktorsson, Arvid, and Illya Kyrychenko. "Spell checker for a Java Application." Thesis, Karlstads universitet, Institutionen för matematik och datavetenskap (from 2013), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kau:diva-78054.

Full text
Abstract:
Many text-editor users depend on spellcheckers to correct their typographical errors, and the absence of a spellchecker can create a negative experience for the user. In today's advanced technological environment, spellchecking is an expected feature. 2Consiliate Business Solutions owns a Java application with a text editor that does not have a spellchecker. This project aims to investigate and implement available techniques and algorithms for spellchecking and automated word correction. During implementation, the techniques were tested for their performance and the best solutions were chosen for this project. All the techniques were gathered from earlier literature on the topic and implemented in Java using default Java libraries. Analysis of the results shows that it is possible to create a complete spellchecker by combining available techniques, and that the quality of a spellchecker largely depends on a well-defined dictionary.
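The word-correction techniques surveyed in theses like this one typically rest on the Levenshtein edit distance ranked against a dictionary. A minimal sketch of that idea follows; the function names and the distance cutoff are illustrative choices, not taken from the thesis:

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance (insert/delete/substitute)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def suggest(word: str, dictionary, max_dist: int = 2):
    """Return dictionary words within max_dist edits of word, closest first."""
    scored = sorted((levenshtein(word, w), w) for w in dictionary)
    return [w for d, w in scored if d <= max_dist]
```

For example, `suggest("speling", {"spelling", "spell", "splint"})` ranks "spelling" first, at distance 1.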
APA, Harvard, Vancouver, ISO, and other styles
32

Bohuš, Michal. "Diagnostika chyb v počítačových sítích založená na překlepech." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2020. http://www.nusl.cz/ntk/nusl-417292.

Full text
Abstract:
The goal of this diploma thesis is to create a system for network data diagnostics based on detecting and correcting spelling errors. The system is intended to be used by network administrators as an additional diagnostic tool. As opposed to the primary use of spelling-error detection and correction in ordinary text, these methods are applied to network data supplied by the user. The created system works with NetFlow data, pcap files or log files. Context is modeled with different purpose-built data categories, and dictionaries are used to verify the correctness of words, with each category using its own. Finding a correction according to edit distance alone leads to many results, so a heuristic for evaluating candidates was proposed to select the right one. The created system was tested in terms of functionality and performance.
APA, Harvard, Vancouver, ISO, and other styles
33

Afzal, Zeeshan. "Towards Secure Multipath TCP Communication." Licentiate thesis, Karlstads universitet, Institutionen för matematik och datavetenskap (from 2013), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kau:diva-48172.

Full text
Abstract:
The evolution in networking, coupled with an increasing demand to improve user experience, has led to different proposals to extend the standard TCP. Multipath TCP (MPTCP) is one such extension that has the potential to overcome a few inherent limitations in the standard TCP. While MPTCP's design and deployment progress, most of the focus has been on its compatibility. The security aspect is confined to making sure that the MPTCP protocol itself offers the same security level as the standard TCP. The topic of this thesis is to investigate the unexpected security implications raised by using MPTCP in the traditional networking environment. The Internet of today has security middleboxes that perform traffic analysis to detect intrusions and attacks. Such middleboxes make use of different assumptions about the traffic, e.g., that traffic from a single connection always arrives along the same path. This, along with many other assumptions, may no longer be true with the advent of MPTCP, as traffic can be fragmented and sent over multiple paths simultaneously. We investigate how practical it is to evade a security middlebox by fragmenting and sending traffic across multiple paths using MPTCP. Realistic attack traffic is used to evaluate such attacks against the Snort IDS and show that these attacks are feasible. We then go on to propose possible solutions to detect such attacks and implement them in an MPTCP proxy. The proxy aims to extend the MPTCP performance advantages to servers that only support standard TCP, while ensuring that intrusions can be detected as before. Finally, we investigate the potential MPTCP scenario where security middleboxes only have access to some of the traffic. We propose and implement an algorithm to perform intrusion detection in such situations and achieve a nearly 90% detection accuracy. Another contribution of this work is a tool that converts IDS rules into equivalent attack traffic to automate the evaluation of a middlebox.

Multipath TCP (MPTCP) is an extension to standard TCP that is close to being standardized. The design of the protocol is progressing, but most of the focus has so far been on its compatibility. The security aspect is confined to making sure that the MPTCP protocol itself offers the same security level as standard TCP. The topic of this thesis is to investigate the unexpected security implications raised by using MPTCP in a traditional networking environment. Today, security middleboxes make use of different assumptions that may no longer be true with the advent of MPTCP. We investigate how practical it is to evade a security middlebox by fragmenting and sending traffic across multiple paths using MPTCP. Realistic attack traffic generated from a tool that is also presented in this thesis is used to show that these attacks are feasible. We then go on to propose possible solutions to detect such attacks and implement them in an MPTCP proxy. The proxy aims to extend secure MPTCP performance advantages. We also investigate the MPTCP scenario where security middleboxes can only observe some of the traffic. We propose and implement an algorithm to perform intrusion detection in such situations and achieve high detection accuracy.

HITS
APA, Harvard, Vancouver, ISO, and other styles
34

Pons, Muzzo Díaz María Elsa. "WU BRADING, Celia (Introducción, Recopilación e Ilustraciones). Testimonios Británicos de la ocupación chilena de Lima; Edit. Milla Batres, Lima, 1986; 158 pp." Pontificia Universidad Católica del Perú, 2014. http://repositorio.pucp.edu.pe/index/handle/123456789/122161.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Mori, Tomoya. "Methods for Analyzing Tree-Structured Data and their Applications to Computational Biology." 京都大学 (Kyoto University), 2015. http://hdl.handle.net/2433/202741.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Andrews, Tara L. "Prolegomena to a critical edition of the Chronicle of Matthew of Edessa, with a discussion of computer-aided methods used to edit the text." Thesis, University of Oxford, 2009. http://ora.ouls.ox.ac.uk/objects/uuid%3A67ea947c-e3fc-4363-a289-c345e61eb2eb.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Fiala, Jan. "Transformace editačního systému N.e.s.p.i. na webové služby." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2009. http://www.nusl.cz/ntk/nusl-236713.

Full text
Abstract:
The diploma thesis describes the analysis, specification, origin and transformation of the current editing system N.E.S.P.I., which is the intellectual property of the company WebRex s.r.o. The main aim is to separate the administration of the web sites from their content, which can be seen on the Internet, and by doing so to achieve a unified and complex administration interface for the customers. By bringing the service into operation, we not only save time and simplify the work of the programmers, but we also reach the stage where the editing system is not only the intellectual property of the company but also its physical property: the customer owns only the web sites, while the ability to administer them is offered as a further service. The service should also provide the customers with modern access to information technologies and offer enhanced assistance for their entrepreneurial plans and ideas.
APA, Harvard, Vancouver, ISO, and other styles
38

Hrbková, Lenka. "Marketingová strategie společnosti Fotovýběr." Master's thesis, Vysoká škola ekonomická v Praze, 2012. http://www.nusl.cz/ntk/nusl-162279.

Full text
Abstract:
This thesis describes the steps leading to the promotion of the Fotovýběr web site and the building of the company's name. The aim is to use marketing tools to increase awareness of the services offered and to propose a marketing strategy that will ensure an adequate supply of site visitors, i.e. potential customers, in the future. The beginning is devoted to an analysis of the company and its current activities; I then suggest specific marketing practices and finally evaluate the results and prospects for the future. The thesis should help the company Fotovýběr optimize its marketing activities.
APA, Harvard, Vancouver, ISO, and other styles
39

Kehrer, Timo [Verfasser]. "Calculation and propagation of model changes based on user-level edit operations : a foundation for version and variant management in model-driven engineering / Timo Kehrer." Siegen : Universitätsbibliothek der Universität Siegen, 2015. http://d-nb.info/1077914199/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Bezek, Perit. "A Clustering Method For The Problem Of Protein Subcellular Localization." Master's thesis, METU, 2006. http://etd.lib.metu.edu.tr/upload/12607981/index.pdf.

Full text
Abstract:
In this study, the focus is on predicting the subcellular localization of a protein, since subcellular localization is helpful in understanding a protein's functions. The function of a protein may be estimated from its sequence. Motifs, or conserved subsequences, are strong indicators of function. In a given sample set of protein sequences known to perform the same function, a certain subsequence or group of subsequences should be common; that is, the occurrence (frequency) of common subsequences should be high. Our idea is to find the common subsequences through clustering and use these common groups (implicit motifs) to classify proteins. To calculate the distance between two subsequences, the traditional string edit distance is modified so that only replacement is allowed, and the cost of replacement is related to an amino acid substitution matrix. Based on the modified string edit distance, spectral clustering embeds the subsequences into a transformed space in which the clustering problem is expected to become easier to solve. For a given protein sequence, the distribution of its subsequences over the clusters is the feature vector, which is subsequently fed to a classifier. The most important aspect of this approach is the use of spectral clustering based on the modified string edit distance.
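The replacement-only edit distance this abstract describes can be sketched as a position-wise sum of substitution costs over equal-length subsequences. The toy cost table below stands in for a real amino acid substitution matrix, and the conversion from similarity scores to costs is an illustrative assumption, not the thesis's formula:

```python
def replacement_distance(a: str, b: str, cost: dict) -> float:
    """Edit distance with only replacements allowed.

    Both subsequences must have the same length (e.g. fixed-length k-mers).
    cost maps an unordered residue pair to a replacement cost; identical
    residues cost nothing.
    """
    if len(a) != len(b):
        raise ValueError("replacement-only distance needs equal lengths")
    return sum(0.0 if x == y else cost[frozenset((x, y))]
               for x, y in zip(a, b))

# Toy cost table: chemically similar residues are cheaper to replace
# (a real system would derive these costs from a substitution matrix).
toy_cost = {frozenset(("L", "I")): 0.2,   # both hydrophobic, cheap swap
            frozenset(("L", "K")): 1.0,
            frozenset(("I", "K")): 1.0}
```

Under this table, "LIK" and "ILK" are distance 0.4 apart (two cheap L/I swaps), reflecting that biochemically similar k-mers should cluster together.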
APA, Harvard, Vancouver, ISO, and other styles
41

Kumar, Anand. "Efficient and Private Processing of Analytical Queries in Scientific Datasets." Scholar Commons, 2013. http://scholarcommons.usf.edu/etd/4822.

Full text
Abstract:
Large amounts of data are generated by applications used in basic-science research and development. The size of the data introduces great challenges in storage, analysis and privacy preservation. This dissertation proposes novel techniques to efficiently analyze the data and to reduce storage space requirements through a data compression technique, while preserving privacy and providing data security. We present an efficient technique to compute an analytical query called the spatial distance histogram (SDH) using spatiotemporal properties of the data; these special properties are exploited to process SDH efficiently on the fly. General-purpose graphics processing units (GPGPUs, or just GPUs) are employed to further boost the performance of the algorithm. The size of the data generated in scientific applications poses problems of disk space requirements, input/output (I/O) delays and data transfer bandwidth requirements; these problems are addressed by applying the proposed compression technique. We also address the issue of preserving privacy and security in scientific data by proposing a security model. The security model monitors user queries input to the database that stores and manages the scientific data, and the outputs of user queries are also inspected to detect privacy breaches. Privacy policies are enforced by the monitor to allow only those queries and results that satisfy the data owner's specified policies.
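As a baseline for the efficient SDH techniques the abstract refers to, the histogram itself can be computed naively in O(n^2) time by bucketing every pairwise distance. The fixed bucket width and the function name below are illustrative assumptions:

```python
import math

def spatial_distance_histogram(points, bucket_width, num_buckets):
    """Naive O(n^2) spatial distance histogram (SDH).

    Counts every unordered pair of points into fixed-width distance
    buckets; distances past the last bucket are clamped into it.
    """
    hist = [0] * num_buckets
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            d = math.dist(points[i], points[j])
            bucket = min(int(d // bucket_width), num_buckets - 1)
            hist[bucket] += 1
    return hist
```

For the three points (0, 0), (3, 4) and (0, 8), the pairwise distances are 5, 5 and 8, so with bucket width 3 and three buckets the histogram is [0, 2, 1]. The quadratic pair loop is exactly the cost that spatiotemporal and GPU techniques aim to avoid.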
APA, Harvard, Vancouver, ISO, and other styles
42

MORICHETTA, ANDREA. "Machine Learning and Big Data Approaches for Automatic Internet Monitoring." Doctoral thesis, Politecnico di Torino, 2020. http://hdl.handle.net/11583/2779392.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Kolli, Lakshmi Priya. "Mining for Frequent Community Structures using Approximate Graph Matching." University of Cincinnati / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1623166375110273.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Pontén, Joon. "The final final final cut : Fan edits och hur de samverkar med filmindustrin." Thesis, Stockholms universitet, Institutionen för mediestudier, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-77200.

Full text
Abstract:
The term "fan edits" denotes films re-cut by fans who are dissatisfied with how an adaptation for the big screen was made. In my essay I aim to show, on the one hand, how the interplay between fans and filmmakers/film studios has looked and looks today, and on the other, to clarify why copyright/fair use is so tricky to apply in this area.
APA, Harvard, Vancouver, ISO, and other styles
45

Pocquet, du Haut-Jussé Tiphaine. "La Mémoire de l’oubli. La tragédie française entre 1629 à 1653." Thesis, Sorbonne Paris Cité, 2017. http://www.theses.fr/2017USPCA135.

Full text
Abstract:
Henri IV ended the religious civil wars in 1598 by decreeing that the memory of the troubles was "extinguished and abated, like something that did not occur". How does French drama stand in relation to this politics of oblivion, and what kind of memorial space does it open? We consider tragedies written between 1629, the official end of the civil wars and the date of the last topical tragedy, and 1653, the end of the Fronde and of a new threat of internal division. In appearance, tragedy seems to forget a harrowing recent past by turning away from it, yet it is at the same time deeply worked by what has been forgotten. Starting with what is most visible, the staging of merciful princes, we demonstrate that this official and voluntary oblivion is amply represented on the tragic stage. But the forgotten also works on tragedies through their portrayal of family conflict, which furnishes many of the tragic subjects of the time. Tragedy thus brings to the surface the present of the past, the memory of division, through allegorical detours. A melancholy theatre, in which the past weighs on the present with all its weight, is opposed by a theatre of historical renewal in which a new future can open. Finally, in these years of dramatic theorization, oblivion appears as an ideal for the spectator absorbed in the spectacle, and as a threat when it leads certain actors or naive spectators to forget themselves. In its fundamental ambiguity, oblivion thus makes it possible to articulate political theory, dramatic theory, and stage imagery, in an early seventeenth century that never ceased to reflect on the violence threatening the social bond and the community with division.
APA, Harvard, Vancouver, ISO, and other styles
46

Riveros, Jaeger Cristian. "Repairing strings and trees." Thesis, University of Oxford, 2013. http://ora.ox.ac.uk/objects/uuid:012d384f-d1d0-471b-ae6e-bbf337892680.

Full text
Abstract:
What do you do if a computational object fails a specification? An obvious approach is to repair it, namely, to modify the object minimally to get something that satisfies the constraints. In this thesis we study foundational problems of repairing regular specifications over strings and trees. Given two regular specifications R and T, we aim to understand how difficult it is to transform an object satisfying R into an object satisfying T. The setting is motivated by considering R to be a restriction (a constraint that the input object is guaranteed to satisfy) while T is a target (a constraint that we want to enforce). We first study which pairs of restriction and target specifications can be repaired with a "small" number of changes. We formalize this as the bounded repair problem: to determine whether every object satisfying R can be repaired into T with a uniformly bounded number of edits. We provide effective characterizations of the bounded repair problem for regular specifications over strings and trees, based on a good understanding of the cyclic behaviour of finite automata. By exploiting these characterizations, we give optimal algorithms to decide whether two specifications are bounded repairable. We also consider the impact of limitations on the editing process: what happens when the repair must be done sequentially over serialized objects. We study the bounded repair problem over strings and trees restricted to this streaming setting and show that this variant can be characterized in terms of finite games; furthermore, we use this characterization to decide whether a pair of specifications can be repaired in a streaming fashion with bounded cost, and to obtain a streaming repair strategy in that case. The previous notion asks for a uniform bound on the number of edits, which is a strong requirement.
To overcome this limitation, we study how to calculate the maximum number of edits per character needed to repair any object in R into T. We formalize this as the asymptotic cost: the limit of the number of edits divided by the length of the input, in the worst case. Our contribution is an algorithm to compute the asymptotic cost for any pair of regular specifications over strings. We also consider the streaming variant of this cost and show how to compute it by reducing the problem to mean-payoff games.
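The thesis works with repairs between whole regular languages; for a single pair of strings, the cost of one repair is the classic Levenshtein edit distance. A standard-textbook dynamic-programming sketch (not the thesis's algorithm) computes it as follows:

```python
def edit_distance(s, t):
    """Minimum number of single-character insertions, deletions, and
    substitutions needed to turn string s into string t."""
    m, n = len(s), len(t)
    # dp[i][j] = edit distance between s[:i] and t[:j]
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i          # delete all of s[:i]
    for j in range(n + 1):
        dp[0][j] = j          # insert all of t[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if s[i - 1] == t[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # delete s[i-1]
                           dp[i][j - 1] + 1,         # insert t[j-1]
                           dp[i - 1][j - 1] + cost)  # match or substitute
    return dp[m][n]

edit_distance("kitten", "sitting")  # → 3
```

The bounded repair problem then asks whether this per-object cost stays below one uniform bound over all strings in R, and the asymptotic cost normalizes it by input length.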
APA, Harvard, Vancouver, ISO, and other styles
47

Kubo, Miroslav. "Nestacionární pohyb tuhého tělesa v kapalině." Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2011. http://www.nusl.cz/ntk/nusl-229685.

Full text
Abstract:
This diploma thesis deals with computing the additional effects exerted on a given rigid body by the flow of an inviscid liquid. Equations are derived for computing these effects during translational or torsional oscillation, followed by the calculation of the components of their tensors.
APA, Harvard, Vancouver, ISO, and other styles
48

Silva, Junior Paulo Matias da. "Distância de edição para estruturas de dados." reponame:Repositório Institucional da UFABC, 2018.

Find full text
Abstract:
Advisor: Prof. Dr. Rodrigo de Alencar Hausen. Co-advisor: Prof. Dr. Jerônimo Cordoni Pellegrini. Master's dissertation, Universidade Federal do ABC, Graduate Program in Computer Science, Santo André, 2018.
The general tree edit distance problem consists in comparing two rooted labelled trees using operations that change one tree into another. The tree edit distance is defined as the minimum-cost sequence of edit operations needed to transform one tree into the other; the operations studied are inserting, deleting, and replacing nodes. In this work we prove that finding the largest common subforest between two trees restricted to node deletion, called the LCS-forest, is a particular case of tree edit distance. Valiente [Val02] proved that finding the maximum common subtree is a particular case of tree edit distance under a condition that strongly preserves ancestry between pairs of nodes; we present an alternative proof whose condition is the existence of paths between pairs of nodes. These three distance problems are related in a hierarchy: the general tree edit distance is a lower bound on the distance obtained as the LCS-forest solution, which in turn is a lower bound on the distance obtained from the maximum common subtree. In the second part of this work we describe data structures as rooted labelled trees, which makes it possible to apply the tree edit distance and thereby compare a data structure with itself after a sequence of operations.
For this, the operational cost models for each structure's tree take into account information such as the number of nodes in the tree and the level of the node touched by the operation. For the stack, linked list, and binary search tree models, the edit distances were related to the time complexities of operating on those structures; the operational costs were also adapted for tries and B-trees. We ran experiments computing the edit distance of each data structure against itself after random sequences of operations, in order to see how these distance measures behave on each structure. The tests show that the length of the sequence influences the final distance, and that operational costs based on the level of the operated node yield smaller distances than costs based on the size of the structure.
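The hierarchy described above (a more restricted operation set can only increase the distance) has a familiar string analogue. As an illustrative sketch with unit costs, not the dissertation's tree algorithms: if only insertions and deletions are allowed, the cheapest repair deletes everything outside a longest common subsequence (LCS) and inserts the rest, and this indel distance is never smaller than the unrestricted edit distance, because a substitution can replace a delete-insert pair.

```python
def lcs_length(s, t):
    """Length of a longest common subsequence of strings s and t."""
    m, n = len(s), len(t)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if s[i - 1] == t[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]

def indel_distance(s, t):
    """Edit distance when only insertions and deletions are allowed:
    delete the characters of s outside an LCS, then insert those of t."""
    return len(s) + len(t) - 2 * lcs_length(s, t)

# "kitten" vs "sitting": LCS is "ittn" (length 4), so the
# deletion/insertion-only distance is 6 + 7 - 2*4 = 5, larger than
# the unrestricted edit distance of 3.
indel_distance("kitten", "sitting")  # → 5
```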
APA, Harvard, Vancouver, ISO, and other styles
49

Johner, Michel. "Les protestants de France et la sécularisation du mariage à la veille de la Révolution française (1784-1789) : Rabaut Saint-Etienne et l’édit de tolérance de 1787." Paris, EPHE, 2013. http://www.theses.fr/2013EPHE5026.

Full text
Abstract:
The process of the secularisation of marriage was set in motion in France in the period of the Edict of Tolerance (1787), in response to the need to end the proscription inflicted on the marriage of Protestants since 1685, and it was completed in the conflict between the Republic and the Catholic Church under the Revolution. What part did the Protestants themselves play in this process? What predispositions on the question did their own doctrine of marriage give them, shaken or reinforced as it was by the repression endured during the century following the revocation of the Edict of Nantes? The preliminary section presents the evolution of royal policy towards Protestant marriage over the eighteenth century (Chaps. I and III), the discipline of the Protestants and their marriage practices (Chap. II), and the active part they took in advancing the political debate on the subject (Chaps. IV and V). The second part describes the process which, between 1784 and 1787, led to the promulgation of the Edict of Tolerance, to which Pastor Rabaut Saint-Etienne contributed directly (Chaps. VI to XIV). The third part studies how the Reformed Churches and synods, in the two years before the Revolution, received the edict of November 1787 and, through their implementing regulations, expressed the means by which they intended to block the secularisation of marriage (Chaps. XV to XXX). The epilogue describes the absence of any visible involvement of French Protestants in the legislative work on civil marriage during the revolutionary period (1791-1804).
APA, Harvard, Vancouver, ISO, and other styles
50

Aubert, Charles-Edouard. "Observer la loi, obéir au roi : les fondements doctrinaux de la pacification du royaume de l’édit de Nantes à la Paix d’Alès (1598-1629)." Electronic Thesis or Diss., Strasbourg, 2021. http://www.theses.fr/2021STRAA021.

Full text
Abstract:
The study of the doctrinal foundations of pacification between 1598 and 1629 calls for an analysis of the discourses on peace from the Edict of Nantes to that of Nîmes. This period is particularly propitious for bringing to light the guiding ideas behind the construction of religious peace in the kingdom of France. The edict of pacification of Nantes, promulgated in 1598 by King Henri IV, re-established once more the principle of civil toleration. Its first commentators, who belonged by their ideas to the current of the "Politiques", strove to show that pacification rests on the observance of fundamental principles, which they then took it upon themselves to explain: their aim was to re-found the authority of the king, from which obedience proceeds, the sine qua non of a lasting peace. The death of King Henri IV in 1610 put to the test the course of conduct established by the "Politiques". Since Henri IV no longer stood as the personal guarantee of the text, the discourses produced by both the Reformed and the Catholics testify to difficulties in observing the edict, tied to a questioning of royal authority and obedience, the outcome of which was the resumption of the wars of religion until 1629.
APA, Harvard, Vancouver, ISO, and other styles
