Dissertations on the topic "Text lines"

To see other types of publications on this topic, follow the link: Text lines.

Consult the top 50 dissertations for your research on the topic "Text lines".

Next to every entry in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a .pdf file and read its abstract online, whenever these are available in the metadata.

Browse dissertations from a wide variety of disciplines and compile your bibliography correctly.

1

Faulkner, Andrew. "The Homeric hymn to Aphrodite : introduction, text and commentary on Lines 1-199." Thesis, University of Oxford, 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.410786.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
2

Huang, Ni. "Reading Between the Lines: Three Investigations of User Generated Content Using Text Analytics." Diss., Temple University Libraries, 2017. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/432860.

Abstract:
Business Administration/Marketing
Ph.D.
User-generated content (UGC) is a ubiquitous phenomenon on the Internet. UGC informs, entertains, and facilitates conversations among online users. The three essays of this dissertation examine different antecedents of UGC characteristics with text analytics. The first essay explores the effects of psychological distance on UGC positivity and finds that spatial and temporal distance boost UGC positivity. The second essay investigates the effects of social media integration on the linguistic characteristics of UGC and shows that social media integration leads to increased review quantity and to more emotional, less rational, and less negative language in UGC content. The third essay examines the impact of book-to-film adaptation on the rating and linguistic characteristics of UGC. The results suggest that, after the release of a book-to-film adaptation, book ratings decline, and the use of language reflecting viewing, comparison, and affective processes increases in book reviews. In sum, the three essays of this dissertation contribute to research on UGC by improving our understanding of the various antecedents of UGC characteristics.
Temple University--Theses
3

Harrison-Snyder, Jill Elizabeth. "Pink Lines and Yellow Tables: A Production of Charles L. Mee's BIG LOVE." Master's thesis, Temple University Libraries, 2011. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/208821.

Abstract:
Theater
M.F.A.
A dramatic analysis and directorial reflection on Temple Theaters' production of Charles L. Mee's BIG LOVE, a modern rendering of Aeschylus' THE SUPPLIANT WOMEN. This thesis explores the entire process of directing the production, from research and text analysis, to visual collaboration and rendering, to casting and rehearsal, to tech and production. Ultimately, it is the author's intention to reveal a specific directorial perspective of BIG LOVE and the corresponding creative process utilized to render this interpretation.
Temple University--Theses
4

Simpson, Catherine. ""Reading between the lines" : a grounded theory study of text-based synchronous online therapy : how practitioners establish therapeutic relationships online." Thesis, London Metropolitan University, 2016. http://repository.londonmet.ac.uk/1138/.

Abstract:
This qualitative research study explored the therapeutic relationship in online therapy from a counselling psychology perspective. An overview of the different types of online therapy and a brief history of the field were given and the existing literature surrounding online therapy and the therapeutic relationship was critically reviewed. Through this, a need was identified for an understanding of how therapeutic relationships are established in online therapy, with a particular focus on therapy via instant messaging. Semi-structured interviews were conducted asking online therapy practitioners about their experiences of therapeutic relationships. The resulting data were analysed using the grounded theory method and a tentative model of the processes that influence the formation of a therapeutic relationship online was created. An important factor in the model was therapists’ development of skills in online communication, which serve to overcome the lack of a physical presence and non-verbal communication that hinder text-based interactions. Another key influence was the management of the therapeutic frame, which is challenged by the nature of the online setting. Also significant was a client’s rationale for choosing online therapy, which influences their ability to engage in a therapy relationship online. The implications of the findings for counselling psychology professional practice, training and research were discussed.
5

Felhi, Mehdi. "Document image segmentation : content categorization." Thesis, Université de Lorraine, 2014. http://www.theses.fr/2014LORR0109/document.

Abstract:
In this thesis I discuss the document image segmentation problem and describe our new approaches for detecting and classifying document contents. First, I discuss our skew angle estimation approach, which aims to estimate precisely the skew angle of text in document images. Our method is based on Maximum Gradient Difference (MGD) and the R-signature; I then describe a second method based on the Ridgelet transform. Our second contribution is a new hybrid page segmentation approach. I first describe our stroke-based descriptor, which detects text and line candidates using the skeleton of the binarized document image. An active contour model is then applied to segment the rest of the image into photo and background regions. Finally, text candidates are clustered with the mean-shift analysis technique according to their sizes. The method is applied to segmenting scanned document images (newspapers and magazines) that contain text, lines, and photo regions. Finally, I describe our stroke-based text extraction method. The approach begins by extracting connected components and selecting text character candidates over the CIE LCH color space, using Histogram of Oriented Gradients (HOG) correlation coefficients to detect low-contrast regions. The text region candidates are clustered using two different approaches: a depth-first search over a graph, and a stable text line criterion. Finally, the resulting regions are refined by classifying the text line candidates into "text" and "non-text" regions using a Kernel Support Vector Machine (K-SVM) classifier.
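The skew-estimation task described in this abstract can be illustrated with a much simpler classical technique than the thesis's MGD/R-signature/Ridgelet pipeline: a projection-profile search over candidate angles. The sketch below runs on invented synthetic "ink" pixels, not on the thesis's data or method.

```python
import math
from collections import Counter

def make_skewed_lines(angle_deg, n_lines=5, width=200, spacing=20):
    """Synthetic 'ink' pixels: several parallel text baselines with a known skew."""
    slope = math.tan(math.radians(angle_deg))
    pixels = []
    for k in range(n_lines):
        y0 = 40 + k * spacing
        for x in range(width):
            pixels.append((x, y0 + x * slope))
    return pixels

def estimate_skew(pixels, search_deg=10.0, step_deg=0.25):
    """Pick the trial angle whose de-skewed horizontal projection is most peaked.

    Peakedness is measured by the sum of squared row counts: when the trial
    angle matches the true skew, ink concentrates into a few rows, which
    maximizes that sum for a fixed total pixel count."""
    best_angle, best_score = 0.0, -1.0
    a = -search_deg
    while a <= search_deg:
        slope = math.tan(math.radians(a))
        hist = Counter(round(y - x * slope) for x, y in pixels)
        score = sum(c * c for c in hist.values())
        if score > best_score:
            best_angle, best_score = a, score
        a += step_deg
    return best_angle

pixels = make_skewed_lines(3.0)
print(estimate_skew(pixels))  # 3.0
```

A coarse-to-fine search over the angle grid is the usual refinement when sub-degree precision is needed.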
6

Wan, Connie. "Samuel Lines and sons : rediscovering Birmingham's artistic dynasty 1794-1898 through works on paper at the Royal Birmingham Society of Artists : Volume 1, Text ; Volume 2, Catalogue ; Volume 3, Illustrations." Thesis, University of Birmingham, 2012. http://etheses.bham.ac.uk//id/eprint/3645/.

Abstract:
This thesis is the first academic study of nineteenth-century artist and drawing master Samuel Lines (1778-1863) and his five sons: Henry Harris Lines (1800-1889), William Rostill Lines (1802-1846), Samuel Rostill Lines (1804-1833), Edward Ashcroft Lines (1807-1875) and Frederick Thomas Lines (1809-1898). The thesis, with its catalogue, has been a result of a collaborative study focusing on a collection of works on paper by the sons of Samuel Lines, from the Permanent Collection of the Royal Birmingham Society of Artists (RBSA). Both the thesis and catalogue aim to re-instate the family’s position as one of Birmingham’s most prominent and distinguished artistic dynasties. The thesis is divided into three chapters and includes a complete and comprehensive catalogue of 56 works on paper by the Lines family in the RBSA Permanent Collection. The catalogue also includes discursive information on the family’s careers otherwise not mentioned in the main thesis itself. The first chapter explores the family’s role in the establishment of the Birmingham Society of Arts (later the RBSA). It also explores the influence of art institutions and industry on the production of the fine and manufactured arts in Birmingham during the nineteenth century. The second chapter discusses the Lines family’s landscape imagery, in relation to prevailing landscape aesthetics and the physically changing landscape of the Midlands. Henry Harris Lines is the main focus of the last chapter which reveals the extent of his skills as archaeologist, antiquarian and artist.
7

de Medeiros Ribeiro, Márcio. "Restructuring test variabilities in software product lines." Universidade Federal de Pernambuco, 2008. https://repositorio.ufpe.br/handle/123456789/1732.

Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico
Software Product Lines (SPL) encompass families of systems developed from reusable artifacts. An important factor during SPL maintenance is deciding which mechanism should be used to restructure its variations in order to improve the modularity of its artifacts. Given the large variety of mechanisms, selecting the right ones can be a difficult task, while selecting the wrong ones can have negative effects on the cost of developing the SPL. This problem exists not only at the source code level but also in other artifacts, such as software requirements and tests. To reduce this problem at the level of automated tests, this work proposes a decision model that helps developers choose mechanisms for restructuring test variations in SPL. To build the model, variations found in real automated test cases developed by Motorola were analyzed. These tests exercise the software systems of Motorola mobile phones and handle the variations of the different phones using if-else conditionals. Given a variation based on if-else conditionals, the model suggests a mechanism that provides better modularity for that variation. In addition, a tool was developed to support SPL developers; it recommends mechanisms according to the proposed decision model. Applying the decision model and the mechanisms it suggests can improve the modularity of test case variations and remove problems such as duplicated code. Moreover, the task of restructuring variations becomes faster and more precise when the tool is used.
8

Odia, Osaretin Edwin. "Testing in Software Product Lines." Thesis, Blekinge Tekniska Högskola, Avdelningen för programvarusystem, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-3853.

Abstract:
This thesis presents research investigating the different activities involved in the software product line testing process and possible improvements towards developing high-quality software product lines at reduced cost and time. The research was performed using Kitchenham's systematic review procedures, and the reviews carried out cover several areas relating to software product line testing. The reasons for performing a systematic review in this research are to summarize the existing evidence on testing in the software product line context, to identify gaps in current research, and to suggest areas for further research. The contribution of this thesis is research aimed at revealing the different activities, issues, and challenges in software product line testing. The research into these activities led to the proposed SPLIT model for software product line testing. The model helps clarify the steps and activities involved in the software product line testing process and provides an easy-to-follow map for testers and managers in software product line development organizations. The results mainly concern how testing in software product lines can be improved towards achieving software product line goals. The basic contribution is the proposed model for product line testing and an investigation into, and possible improvements in, issues related to software product line testing activities.
The main purpose of the research as presented in this thesis is to give a clear picture of testing in the context of software product lines, which is quite different from testing a single product. The focus of this thesis is specifically the different steps and activities involved in software product line testing and possible improvements towards achieving the goal of developing high-quality software product lines at reduced cost and time. For software product lines to achieve this goal, there should be a comprehensive set of testing activities in software product line development: the development activities from performing analyses and creating designs to integrating programs in the software product line context, component testing, and tool support for software product line testing should all be taken into consideration.
9

Lima, Neto Crescencio Rodrigues. "SPLMT-TE: a software product lines system test case tool." Universidade Federal de Pernambuco, 2011. https://repositorio.ufpe.br/handle/123456789/2809.

Abstract:
Nowadays, the decision whether or not to work with Software Product Lines (SPL) has become a mandatory requirement in the strategic planning of companies working in specific domains. SPL enables organizations to achieve significant reductions in development and maintenance costs and quantitative improvements in productivity, quality, and customer satisfaction. On the other hand, the downsides of adopting SPL include the extra investment needed to create reusable artifacts, make organizational changes, and so on. Moreover, testing is more complicated and critical in product lines than in single systems, yet it remains the most effective way to ensure quality in SPL. Learning to choose the right tools for SPL testing is therefore a benefit that helps reduce some of the problems companies face. Despite the growing number of available tools, SPL testing still lacks tools that support the system test level and manage the variability of test artifacts. In this context, this work presents a software product line testing tool that builds system tests from use cases, addressing challenges for SPL testing identified in the literature review. The tool was developed to reduce the effort required to perform testing activities in an SPL environment. In addition, this dissertation presents a systematic exploratory study that investigates the state of the art of testing tools, synthesizing the available evidence and identifying gaps among the tools available in the literature. This work also presents a controlled experiment to evaluate the effectiveness of the proposed tool.
10

Abuhaiba, Ibrahim S. I. "Recognition of off-line handwritten cursive text." Thesis, Loughborough University, 1996. https://dspace.lboro.ac.uk/2134/7331.

Abstract:
The author presents novel algorithms to design unconstrained handwriting recognition systems organized in three parts: In Part One, novel algorithms are presented for processing of Arabic text prior to recognition. Algorithms are described to convert a thinned image of a stroke to a straight line approximation. Novel heuristic algorithms and novel theorems are presented to determine start and end vertices of an off-line image of a stroke. A straight line approximation of an off-line stroke is converted to a one-dimensional representation by a novel algorithm which aims to recover the original sequence of writing. The resulting ordering of the stroke segments is a suitable preprocessed representation for subsequent handwriting recognition algorithms as it helps to segment the stroke. The algorithm was tested against one data set of isolated handwritten characters and another data set of cursive handwriting, each provided by 20 subjects, and has been 91.9% and 91.8% successful for these two data sets, respectively. In Part Two, an entirely novel fuzzy set-sequential machine character recognition system is presented. Fuzzy sequential machines are defined to work as recognizers of handwritten strokes. An algorithm to obtain a deterministic fuzzy sequential machine from a stroke representation, that is capable of recognizing that stroke and its variants, is presented. An algorithm is developed to merge two fuzzy machines into one machine. The learning algorithm is a combination of many described algorithms. The system was tested against isolated handwritten characters provided by 20 subjects resulting in 95.8% recognition rate which is encouraging and shows that the system is highly flexible in dealing with shape and size variations. In Part Three, also an entirely novel text recognition system, capable of recognizing off-line handwritten Arabic cursive text having a high variability is presented. This system is an extension of the above recognition system. 
Tokens are extracted from a one-dimensional representation of a stroke. Fuzzy sequential machines are defined to work as recognizers of tokens. It is shown how to obtain a deterministic fuzzy sequential machine from a token representation that is capable of recognizing that token and its variants. An algorithm for token learning is presented. The tokens of a stroke are re-combined into meaningful strings of tokens. Algorithms to recognize and learn token strings are described; the recognition stage uses algorithms of the learning stage. The process of extracting the best set of basic shapes, which represent the best set of token strings constituting an unknown stroke, is described. A method is developed to extract lines from pages of handwritten text, arrange the main strokes of extracted lines in the same order as they were written, and assign secondary strokes to main strokes. Assigned secondary strokes are combined with basic shapes to obtain the final characters by formulating and solving assignment problems; some secondary strokes which remain unassigned are manipulated individually. The system was tested against the handwriting of 20 subjects, yielding overall subword and character recognition rates of 55.4% and 51.1%, respectively.
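The fuzzy sequential machine idea above can be sketched as a toy recognizer: each transition carries a membership degree, and a string's recognition grade is the minimum degree along its path (min-composition), bounded by the final state's acceptance grade. The machine and its "stroke segment" alphabet below are invented for illustration; the thesis's actual construction from stroke representations is far richer.

```python
class FuzzyMachine:
    """A deterministic fuzzy sequential machine over a symbol alphabet."""

    def __init__(self, start, accept):
        self.start = start
        self.accept = accept  # state -> acceptance grade in [0, 1]
        self.trans = {}       # (state, symbol) -> (next_state, degree)

    def add(self, state, symbol, nxt, degree):
        self.trans[(state, symbol)] = (nxt, degree)

    def grade(self, symbols):
        """Recognition grade of a symbol string: min degree along the path."""
        state, g = self.start, 1.0
        for s in symbols:
            if (state, s) not in self.trans:
                return 0.0            # no transition: not recognized at all
            state, d = self.trans[(state, s)]
            g = min(g, d)             # min-composition along the path
        return min(g, self.accept.get(state, 0.0))

# Recognizer for an idealized "L"-shaped stroke: a run of 'down' segments
# followed by a run of 'right' segments, tolerant (lower grade) of slant.
m = FuzzyMachine(start=0, accept={2: 1.0})
m.add(0, "down", 1, 1.0)
m.add(1, "down", 1, 1.0)
m.add(1, "diag", 1, 0.6)   # slanted segment: still accepted, lower grade
m.add(1, "right", 2, 1.0)
m.add(2, "right", 2, 1.0)

print(m.grade(["down", "down", "right"]))          # 1.0
print(m.grade(["down", "diag", "right", "right"])) # 0.6
print(m.grade(["right"]))                          # 0.0
```

Merging two such machines (as the thesis's learning algorithm does for character variants) amounts to taking unions of transitions with the maximum of their degrees.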
11

Morillot, Olivier. "Reconnaissance de textes manuscrits par modèles de Markov cachés et réseaux de neurones récurrents : application à l'écriture latine et arabe." Thesis, Paris, ENST, 2014. http://www.theses.fr/2014ENST0002.

Abstract:
Handwriting recognition is an essential component of document analysis. One of the popular trends is to go from isolated word recognition to word sequence recognition. Our work aims to propose a text-line recognition system without explicit word segmentation. In order to build an efficient model, we intervene at different levels of the recognition system. First of all, we introduce two new preprocessing techniques: a cleaning step and a local baseline correction for text lines. Then, a language model is built and optimized for handwritten mail. Afterwards, we propose two state-of-the-art recognition systems based on contextual HMMs (Hidden Markov Models) and BLSTM (Bi-directional Long Short-Term Memory) recurrent neural networks. We optimize our systems in order to compare the two approaches. Our systems are evaluated on Arabic and Latin cursive handwriting and have been submitted to two international handwriting recognition competitions. Finally, as a prospect for future work, we introduce a strategy for recognizing certain out-of-vocabulary character strings.
12

Tian, Hai, Tom Trojak, and Charles Jones. "DATA COMMUNICATIONS OVER AIRCRAFT POWER LINES." International Foundation for Telemetering, 2005. http://hdl.handle.net/10150/604922.

Abstract:
ITC/USA 2005 Conference Proceedings / The Forty-First Annual International Telemetering Conference and Technical Exhibition / October 24-27, 2005 / Riviera Hotel & Convention Center, Las Vegas, Nevada
This paper introduces a study of the feasibility and initial hardware design for transmitting data over aircraft power lines. The intent of this design is to significantly reduce the wiring in the aircraft instrumentation system. The potential usages of this technology include Common Airborne Instrumentation System (CAIS) or clock distribution. Aircraft power lines channel characteristics are presented and Orthogonal Frequency Division Multiplexing (OFDM) is introduced as an attractive modulation scheme for high-speed power line transmission. A design of a full-duplex transceiver with accurate frequency planning is then discussed. A general discussion of what communications protocols are appropriate for this technology is also provided.
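The OFDM principle mentioned in this abstract can be sketched in a few lines: subcarrier symbols are moved to the time domain with an inverse DFT, a cyclic prefix is prepended to absorb channel echoes, and the receiver inverts the process. This is the textbook scheme only, not the paper's transceiver design; the naive DFT below is for illustration.

```python
import cmath

def idft(spectrum):
    """Naive inverse DFT: N complex subcarrier symbols -> N time samples."""
    n = len(spectrum)
    return [sum(spectrum[k] * cmath.exp(2j * cmath.pi * k * t / n)
                for k in range(n)) / n
            for t in range(n)]

def dft(samples):
    """Naive forward DFT: N time samples -> N subcarrier symbols."""
    n = len(samples)
    return [sum(samples[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n))
            for k in range(n)]

# QPSK symbols on 8 subcarriers -> one OFDM time-domain symbol, with a
# cyclic prefix copied from the tail.
qpsk = [1+1j, 1-1j, -1+1j, -1-1j, 1+1j, -1-1j, 1-1j, -1+1j]
time_samples = idft(qpsk)
prefix_len = 2
tx = time_samples[-prefix_len:] + time_samples   # cyclic prefix + symbol

# Receiver: drop the prefix, DFT back to subcarriers.
rx = dft(tx[prefix_len:])
print(all(abs(a - b) < 1e-9 for a, b in zip(rx, qpsk)))  # True
```

A real power-line modem would use an FFT, pilot-based channel equalization, and bit loading per subcarrier according to the channel characteristics the paper measures.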
13

Stoll, Christopher A. "Text Line Extraction Using Seam Carving." University of Akron / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=akron1428077337.

14

Parlakol, Nazif Bulent. "A Test Oriented Service And Object Model For Software Product Lines." Master's thesis, METU, 2010. http://etd.lib.metu.edu.tr/upload/12611769/index.pdf.

Abstract:
In this thesis, a new modeling technique is proposed for minimizing regression testing effort in software product lines. The "Product Flow Model" is used for the common representation of products in application engineering, and the "Domain Service and Object Model" represents the variant-based relations between products and core assets. This new approach avoids the unnecessary workload of regression testing using the principles of sub-service decomposition and variant-based product/sub-service traceability matrices. The proposed model is adapted to a sample product line in the banking domain, called the Loyalty and Campaign Management System, where loyalty campaigns for credit cards are the products derived from core assets. The reduced regression test scope after the realization of new requirements is demonstrated through a case study. Finally, the efficiency improvement, in terms of time and effort in the test process, obtained by adopting the proposed model is discussed.
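The variant-based traceability-matrix idea can be sketched as follows: map each sub-service test suite to the variants it exercises, and re-run only the suites that trace to changed variants. Suite and variant names below are invented for illustration; this is not the thesis's actual model.

```python
# Traceability matrix: sub-service test suite -> variants it exercises.
traceability = {
    "login_suite":    {"variant_touch_ui", "variant_pin"},
    "transfer_suite": {"variant_pin", "variant_limits"},
    "campaign_suite": {"variant_points", "variant_tiers"},
}

def select_regression_suites(changed_variants, matrix):
    """Keep only the suites whose traced variants intersect the change set."""
    return sorted(suite for suite, variants in matrix.items()
                  if variants & changed_variants)

# A change touching only the PIN variant skips the campaign suite entirely.
print(select_regression_suites({"variant_pin"}, traceability))
# ['login_suite', 'transfer_suite']
```

The saving grows with the number of sub-services: the finer the decomposition, the smaller the set of suites a given variant change selects.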
15

Mullen, W. Grigg. "An evaluation of the utility of four in-situ test methods for transmission line foundation design /." This resource online, 1991. http://scholar.lib.vt.edu/theses/available/etd-07112007-092850/.

16

Galindo Duarte, José Ángel. "Evolution, testing and configuration of variability intensive systems." Thesis, Rennes 1, 2015. http://www.theses.fr/2015REN1S008/document.

Abstract:
The large number of configurations that a feature model can encode makes the manual analysis of feature models an error-prone and costly task. Computer-aided mechanisms have therefore appeared as a solution for extracting useful information from feature models. This process of extracting information from feature models is known as "automated analysis of feature models" and has been one of the main areas of research in recent years, with more than thirty analysis operations proposed. In this dissertation we looked for tendencies in the automated analysis field and found several research opportunities. Driven by real-world scenarios such as the smart phone and video-surveillance domains, we contributed by applying, adapting, or extending automated analysis operations to the evolution, testing, and configuration of variability-intensive systems.
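One of the classic automated-analysis operations, counting the valid configurations of a feature model, can be sketched by brute-force enumeration on a toy model. The features and constraints below are invented for illustration, and real analyses use SAT/CSP solvers rather than enumeration precisely because the configuration space explodes.

```python
from itertools import product

# Toy phone feature model: Calls is mandatory; Camera is optional;
# {Basic, HighRes} is an xor screen group; HighRes requires Camera.
features = ["Calls", "Camera", "Basic", "HighRes"]

def valid(cfg):
    sel = {f for f, on in zip(features, cfg) if on}
    if "Calls" not in sel:                        # mandatory feature
        return False
    if ("Basic" in sel) == ("HighRes" in sel):    # xor group: exactly one
        return False
    if "HighRes" in sel and "Camera" not in sel:  # cross-tree constraint
        return False
    return True

# Enumerate all 2^4 selections and keep the valid products.
configs = [cfg for cfg in product([0, 1], repeat=len(features)) if valid(cfg)]
print(len(configs))  # 3
```

Other operations from the literature (dead-feature detection, valid-partial-configuration checks) reduce to similar satisfiability questions over the same constraint set.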
17

Moysset, Bastien. "Détection, localisation et typage de texte dans des images de documents hétérogènes par Réseaux de Neurones Profonds." Thesis, Lyon, 2018. http://www.theses.fr/2018LYSEI044/document.

Abstract:
Being able to automatically read the texts written in documents, both printed and handwritten, makes it possible to access the information they convey. In order to realize full page text transcription, the detection and localization of the text lines is a crucial step. Traditional methods tend to use image processing based approaches, but they hardly generalize to very heterogeneous datasets. In this thesis, we propose to use a deep neural network based approach. We first propose a mono-dimensional segmentation of text paragraphs into lines that uses a technique inspired by text recognition models, where the connectionist temporal classification (CTC) method is used to implicitly align the sequences. Then, we propose a neural network that directly predicts the coordinates of the boxes bounding the text lines. Adding a confidence prediction to these hypothesis boxes makes it possible to locate a varying number of objects. We propose to predict the objects locally in order to share the network parameters between the locations and to increase the number of different objects that each single box predictor sees during training. This compensates for the rather small size of the available datasets. In order to recover the contextual information that carries knowledge of the document layout, we add multi-dimensional LSTM recurrent layers between the convolutional layers of our networks. We propose three full page text recognition strategies that address the need for high precision in the text line position predictions. We show on the heterogeneous Maurdor dataset how our methods perform on documents that can be printed or handwritten, in French, English or Arabic, and we compare favourably to other state-of-the-art methods. Visualizing the concepts learned by our neurons highlights the ability of the recurrent layers to convey contextual information.
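The confidence mechanism this abstract describes (each local predictor emits a box plus a confidence score, and only boxes clearing a threshold are kept, so a variable number of text lines can be returned per page) can be sketched in a few lines. The grid of predictions, the threshold value, and the (confidence, x, y, w, h) format are illustrative assumptions, not the network's actual output layer:

```python
# Minimal sketch: each grid location locally predicts one candidate text-line
# box as (confidence, x, y, w, h); boxes are kept only when the confidence
# clears a threshold, so a variable number of lines is localized per page.
def select_boxes(predictions, threshold=0.5):
    """predictions: list of (conf, x, y, w, h) tuples, one per grid location."""
    return [box[1:] for box in predictions if box[0] >= threshold]

preds = [
    (0.92, 10, 12, 200, 18),  # confident text-line hypothesis
    (0.08, 10, 40, 200, 18),  # background location, discarded
    (0.71, 10, 64, 180, 16),  # second text line
]
boxes = select_boxes(preds)
# two boxes survive the 0.5 threshold
```

In the real system the confidence term is learned jointly with the box regression; here it is simply given.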
18

Zhang, Fan. "Test of Equality Between Regression Lines in Presence of Errors in Variables." Thesis, Uppsala universitet, Statistiska institutionen, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-175809.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
19

Jech, L. E., S. H. Husman, and M. J. Ottman. "Wheat, Barley, and Durum and Advanced Lines Test, Gila Bend, AZ, 1996." College of Agriculture, University of Arizona (Tucson, AZ), 1996. http://hdl.handle.net/10150/202438.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
20

Varshosaz, Mahsa. "Test Models and Algorithms for Model-Based Testing of Software Product Lines." Licentiate thesis, Högskolan i Halmstad, Centrum för forskning om inbyggda system (CERES), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-33893.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
Software product line (SPL) engineering has become common practice for mass production and customization of software. A software product line comprises a family of software systems which share a managed core set of artifacts, along with a set of well-defined variabilities between the products of the product line. The main idea in SPL engineering is to enable systematic reuse in different phases of software development in order to reduce cost and time to release. Model-Based Testing (MBT) is a technique that is widely used for checking the quality of software systems. In MBT, test cases are generated from an abstract model which captures the desired behavior of the system. Then, the test cases are executed against a real implementation of the system, and the compliance of the implementation with the specification is checked by comparing the observed outputs with the ones prescribed by the model. Software product lines have been applied in many domains in which systems are mission critical, and MBT is one of the techniques widely used for quality assurance of such systems. As the number of products can be potentially large in an SPL, using conventional approaches for MBT of the products of an SPL individually, as single systems, can be very costly and time consuming. Hence, several approaches have been proposed in order to enable systematic reuse in different phases of the MBT process. An efficient modeling technique is the first step towards an efficient MBT technique for SPLs. Several formalisms have been proposed for modeling SPLs. In this thesis, we conduct a study on such modeling techniques, focusing on three fundamental formalisms, namely featured transition systems, modal transition systems, and product line calculus of communicating systems. We compare the expressive power and the succinctness of these formalisms. Furthermore, we investigate adapting existing MBT methods for efficient testing of SPLs.
As a part of this line of research, we adapt the test case generation algorithm of one of the well-known black-box testing approaches, namely the Harmonized State Identification (HSI) method, by exploiting the idea of delta-oriented programming. We apply the adapted test case generation algorithm to a case study taken from industry, and the results show a reduction of up to 50 percent in test case generation time when using the delta-oriented HSI method. In line with our research on investigating existing MBT techniques, we compare the relative efficiency and effectiveness of the test case generation algorithms of the well-known Input-Output Conformance (ioco) testing approach and of complete ioco, another testing technique for input output transition systems that guarantees fault coverage. The comparison is done using three case studies taken from the automotive and railway domains. The obtained results show that complete ioco is more efficient in detecting deep faults (i.e., faults reached through longer traces) in large state spaces, while ioco is more efficient in detecting shallow faults (i.e., faults reached through shorter traces) in small state spaces.
21

McGarry, Theresa. "Listen and Complete: Understanding One-Liners." Digital Commons @ East Tennessee State University, 2016. https://dc.etsu.edu/etsu-works/6157.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
Book Summary: New Ways in Teaching With Music shows how music can be incorporated into your lessons as a way to decrease anxiety, increase motivation and retention, and invigorate both students and teachers. This book is a collection of adaptable lessons that use music as a catalyst for effective, engaging, and enjoyable language learning. 101 activities for students of all skill levels and a companion website with online resources are included. The lessons are broken down by topic, including Reading, Writing, Listening, Speaking, Grammar, Vocabulary, Cultural Exploration, and Integrated Skills.
22

Jech, L. E., S. H. Husman, M. J. Ottman, and G. A. Hareland. "Wheat, Barley, Durum and Advanced Lines Test, Gila Bend, AZ 1995 (Final Report)." College of Agriculture, University of Arizona (Tucson, AZ), 1996. http://hdl.handle.net/10150/202439.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
23

Cregger, S. S., and Phillip R. Scheuerman. "A Rapid Biochemical Test Using Cell Lines for Measuring Chemical Toxicity in Aquatic Systems." Digital Commons @ East Tennessee State University, 1993. https://dc.etsu.edu/etsu-works/2896.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
24

Samih, Hamza. "Test basé sur les modèles appliqué aux lignes de produits." Thesis, Rennes 1, 2014. http://www.theses.fr/2014REN1S109/document.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
L'ingénierie des lignes de produits est une approche utilisée pour développer une famille de produits. Ces produits partagent un ensemble de points communs et un ensemble de points de variation. Aujourd'hui, la validation est une activité disjointe du processus de développement des lignes de produits. L'effort et les moyens fournis dans les campagnes de tests de chaque produit peuvent être optimisés dans un contexte plus global au niveau de la ligne de produits. Le model-based testing est une technique de génération automatique des cas de test à partir d'un modèle d'états et de transitions construit à partir des exigences fonctionnelles. Dans cette thèse, nous présentons une approche pour tester une ligne de produits logiciels avec le model-based testing. La première contribution consiste à établir un lien entre le modèle de variabilité et le modèle de test, à l'aide des exigences fonctionnelles. La deuxième contribution est un algorithme qui extrait automatiquement un modèle de test spécifique à un produit membre de la famille de produits sous test. L'approche est illustrée par une famille de produits de tableaux de bord d'automobiles et expérimentée par un industriel du domaine aéronautique dans le cadre du projet Européen MBAT
Software product line engineering is an approach that supports developing products as a family. These products are described by common and variable features. Currently, the validation activity is disjoint from the product line development process. The effort and resources spent in the test campaigns for each product can be optimized in the context of product lines. Model-based testing is a technique for automatically generating a suite of test cases from requirements. In this thesis, we present an approach to test a software product line with model-based testing. This technique is based on an algorithm that establishes the relationship between the variability model, expressed with OVM, and the test model, using the traceability of the functional requirements present in both formalisms. Our contribution is an algorithm that automatically extracts a product test model. It is illustrated with a real industrial case of automotive dashboards and experimented by an industrial partner in the aeronautics domain in the context of the MBAT European project.
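The core idea of deriving a product-specific test model from a family-level model can be illustrated with a toy sketch: keep only the transitions whose feature guard is satisfied by the product's configuration. The state/transition encoding below is invented for illustration and is not the thesis's OVM-based algorithm:

```python
# Each transition of the family-level test model carries a guard: the set of
# features required to enable it. The product test model keeps exactly the
# transitions whose guard is satisfied by the product's feature selection.
def derive_product_model(transitions, product_features):
    return [(src, label, dst) for (src, label, dst, guard) in transitions
            if guard <= product_features]  # guard must be a subset

family_model = [
    ("s0", "start", "s1", set()),        # common behaviour, always kept
    ("s1", "heat",  "s2", {"heater"}),   # requires the 'heater' feature
    ("s1", "cool",  "s3", {"cooler"}),   # requires the 'cooler' feature
]
basic = derive_product_model(family_model, {"heater"})
# the 'cool' transition is dropped: this product lacks the 'cooler' feature
```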
25

Bonakdar, Sakhi Omid. "Segmentation of heterogeneous document images : an approach based on machine learning, connected components analysis, and texture analysis." Phd thesis, Université Paris-Est, 2012. http://tel.archives-ouvertes.fr/tel-00912566.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
Document page segmentation is one of the most crucial steps in document image analysis. It ideally aims to explain the full structure of any document page, distinguishing text zones, graphics, photographs, halftones, figures, tables, etc. Although several attempts at achieving correct page segmentation have been made to date, many difficulties remain. The leader of the project in the framework of which this PhD work has been funded (*) uses a complete processing chain in which page segmentation mistakes are manually corrected by human operators. Aside from the costs it represents, this demands tuning of a large number of parameters; moreover, some segmentation mistakes sometimes escape the vigilance of the operators. Current automated page segmentation methods are well accepted for clean printed documents, but they often fail to separate regions in handwritten documents when the document layout structure is loosely defined or when side notes are present inside the page. Moreover, tables and advertisements bring additional challenges for region segmentation algorithms. Our method addresses these problems. It is divided into four parts:
1. Unlike most popular page segmentation methods, we first separate the text and graphics components of the page using a boosted decision tree classifier.
2. The separated text and graphics components are used, among other features, to separate columns of text in a two-dimensional conditional random fields framework.
3. A text line detection method, based on piecewise projection profiles, is then applied to detect text lines with respect to text region boundaries.
4. Finally, a new paragraph detection method, trained on common models of paragraphs, is applied on the text lines to find paragraphs based on the geometric appearance of the text lines and their indentations.
Our contribution over existing work lies in essence in the use, or adaptation, of algorithms borrowed from the machine learning literature to solve difficult cases. Indeed, we demonstrate a number of improvements: on separating text columns when one is situated very close to another; on preventing the contents of a cell in a table from being merged with the contents of adjacent cells; and on preventing regions inside a frame from being merged with other text regions around them, especially side notes, even when the latter are written using a font similar to that of the text body. Quantitative assessment, and comparison of the performance of our method with competitive algorithms using widely acknowledged metrics and evaluation methodologies, is also provided to a large extent. (*) This PhD thesis has been funded by Conseil Général de Seine-Saint-Denis, through the FUI6 project Demat-Factory, led by Safig SA.
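The projection-profile idea behind the text line detection step can be shown with a minimal sketch. This version is global rather than piecewise and runs on a toy binary image; the thesis's method works on piecewise profiles and respects text region boundaries:

```python
# Horizontal projection profile: count ink pixels per row of a binary image;
# maximal runs of non-empty rows are taken as text lines.
def detect_lines(image):
    profile = [sum(row) for row in image]
    lines, start = [], None
    for y, count in enumerate(profile):
        if count > 0 and start is None:
            start = y                      # a text line begins
        elif count == 0 and start is not None:
            lines.append((start, y - 1))   # the line ends at the blank row
            start = None
    if start is not None:                  # line running to the bottom edge
        lines.append((start, len(profile) - 1))
    return lines

page = [
    [0, 0, 0, 0],
    [0, 1, 1, 0],   # line 1
    [1, 1, 1, 1],
    [0, 0, 0, 0],   # inter-line gap
    [0, 1, 0, 1],   # line 2
]
# detect_lines(page) → [(1, 2), (4, 4)]
```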
26

Angerborn, Felix. "Better text formatting for the mobile web with javascript." Thesis, Linköpings universitet, Interaktiva och kognitiva system, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-123314.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
As people read more and longer texts on the web, the simple formatting options that exist in today's browsers create worse results than necessary. On behalf of Opera Software in Linköping, a better algorithm has been implemented in Javascript with the purpose of delivering a visually better experience for the reader. The implementation is first and foremost for mobile devices, and therefore a large part of the thesis has been the evaluation and optimization of performance.
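The abstract does not spell out the algorithm, but the classic alternative to the browsers' greedy strategy is minimum-raggedness line breaking: a dynamic program over break points that penalizes the squared trailing whitespace of every line except the last. A sketch of that classic formulation (illustrative, not the thesis's implementation):

```python
# Break `words` into lines of at most `limit` characters, minimizing the sum
# of squared trailing spaces over all lines but the last (classic DP).
def break_lines(words, limit):
    n = len(words)
    INF = float("inf")
    best = [INF] * (n + 1)   # best[i]: minimal cost to lay out words[i:]
    nxt = [n] * (n + 1)      # nxt[i]: chosen break point after words[i:]
    best[n] = 0
    for i in range(n - 1, -1, -1):
        width = -1
        for j in range(i, n):             # try ending the line at word j
            width += len(words[j]) + 1    # +1 for the separating space
            if width > limit:
                break
            cost = 0 if j == n - 1 else (limit - width) ** 2
            if cost + best[j + 1] < best[i]:
                best[i], nxt[i] = cost + best[j + 1], j + 1
    lines, i = [], 0
    while i < n:                          # follow the chosen break points
        lines.append(" ".join(words[i:nxt[i]]))
        i = nxt[i]
    return lines

lines = break_lines("aaa bb cc ddddd".split(), 6)
# greedy would output ["aaa bb", "cc", "ddddd"] (cost 16);
# the DP prefers ["aaa", "bb cc", "ddddd"] (cost 10)
```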
27

Pollard, Jane Maree. "Mesostructure : towards a linguistic framework for the description of topic in written texts." Thesis, University of Exeter, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.302563.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
28

Maity, Chaitali. "Determining the role of a candidate gene in Drosophila muscle development." Oxford, Ohio : Miami University, 2006. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=miami1145459719.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
29

Kasap, Huseyin. "Investigation Of Stockbridge Dampers For Vibration Control Of Overhead Transmission Lines." Master's thesis, METU, 2012. http://etd.lib.metu.edu.tr/upload/12614865/index.pdf.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
This thesis aims to examine the performance of Stockbridge dampers used to suppress aeolian vibrations on overhead transmission lines arising from the wind. In this respect, a computer program, based on the Energy Balance Method, is developed using MATLAB. The developed computer program also has a graphical user interface (GUI), which allows the program to interactively simulate Stockbridge damper performance for vibration control of overhead transmission lines. Field test results obtained from the literature are used in various case studies in order to validate and evaluate the developed software. Moreover, sample Stockbridge damper characterization tests, which can then be fed into the software, are performed. A custom test fixture is designed due to the unavailability of commercial alternatives on the market. In the design of the test fixture, modal and transmissibility analyses are done using ANSYS Workbench. To further validate the test setup, a transmissibility test is done, and results consistent with the transmissibility analyses are observed in the range of expected aeolian vibration frequencies. Finally, stepped-sine and swept-sine tests are performed with and without the damper for the characterization test, where the latter is performed to eliminate the negative effects of the test setup. Both tests yield almost the same damper power dissipation curves.
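The Energy Balance Method equates, at each frequency, the wind power input to the power dissipated by the conductor and the damper, and solves for the steady-state vibration amplitude. A toy sketch with invented power curves (the real functions come from wind-tunnel data and the damper characterization tests described above):

```python
def balance_amplitude(p_wind, p_dissipated, lo=1e-9, hi=1.0, tol=1e-9):
    """Find the amplitude A where p_wind(A) == p_dissipated(A) by bisection.
    Assumes p_wind - p_dissipated changes sign exactly once on [lo, hi]."""
    f = lambda a: p_wind(a) - p_dissipated(a)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:   # sign change in the lower half
            hi = mid
        else:
            lo = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# Invented illustrative curves: wind input grows linearly with amplitude,
# dissipation grows quadratically, so the two cross at a finite amplitude.
wind = lambda a: 2.0 * a               # W, toy model
dissipation = lambda a: 10.0 * a * a   # W, toy model
amp = balance_amplitude(wind, dissipation)
# the curves cross at a = 0.2
```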
30

Almeida, Amanda Francieli de. "Avaliação de materiais argilosos da Formação Corumbataí para uso em liners compactados (CCL)." Universidade de São Paulo, 2015. http://www.teses.usp.br/teses/disponiveis/18/18132/tde-23032016-091001/.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
A disposição final dos resíduos, de forma a minimizar a contaminação das águas, é feita, em geral, em aterros sanitários os quais devem apresentar na base camadas de argila compactada (CCL) que também são conhecidas como liners. Esses sistemas de barreiras desempenham funções diversas, dentre as quais se destacam o isolamento do resíduo e a diminuição da infiltração e a minimização da migração de contaminantes (filtragem, sorção e outras reações geoquímicas) em direção à água subterrânea. O objetivo deste trabalho foi avaliar os materiais argilosos relacionados à Formação Corumbataí com o intuito de selecionar os materiais que reúnem as melhores características para serem usados em liners compactados. Os aspectos avaliados foram a retenção de contaminantes por meio dos ensaios de equilíbrio em lote (batch test) e percolação em coluna com solução de CuCl2.2H2O, e avaliação da resistência à compressão simples do solo compactado, para suportar as cargas exercidas em um aterro sanitário. Para os cálculos dos parâmetros de adsorção utilizando o batch test, procedeu-se à construção e linearização das isotermas e, a partir do coeficiente de determinação, foi possível observar que os melhores ajustes foram com os modelos linear e de Freundlich. A isoterma de melhor ajuste para o cátion foi à de Freundlich em todas as amostras, destacando principalmente AM-2 e AM-16 com R² de 0,9983 e 0,9978 respectivamente. Na percolação em coluna os valores do fator de retardamento (Rd) para o Cl- e Cu++ foram determinados utilizando os métodos de Freeze e Cherry (1979) e Shackelford (1994) nas curvas de chegada. Na resistência à compressão simples a amostra mais significativa foi a AM-3 que resistiu uma força média de 992,1 N, chegando a uma tensão média de 477,4 kPa. Após uma análise integrada as amostras com maior desempenho foram AM-2 e AM-3, sendo que a AM-2 não foi apta apenas em um cenário elaborado para analisar a resistência à compressão simples.
Final waste disposal generally takes place in landfills. In order to minimize water contamination caused by the waste, landfills ought to have compacted clay layers (CCL), also known as liners. The barrier system has many functions, for instance the isolation of the waste, the reduction of infiltration, and the reduction of contaminant migration (filtering, sorption and other geochemical reactions) toward groundwater. This work evaluates the clay materials present in the Corumbataí Formation. The main objective was to select the materials that have the best characteristics to be used in compacted liners. The aspects analyzed include the retention of contaminants, evaluated by batch tests and column percolation with a CuCl2.2H2O solution, and the strength of the compacted soil, needed to stand the loads exerted in a landfill. To calculate the adsorption parameters from the batch tests, the isotherms were constructed and linearized, and from the coefficient of determination it was possible to identify that the best fits were obtained with the linear and Freundlich models. The isotherm with the best adjustment for the cation was the Freundlich isotherm in all samples, mainly in AM-2 and AM-16, with R² of 0.9983 and 0.9978 respectively. In the column percolation tests, the values of the retardation factor (Rd) for Cl- and Cu++ were determined by applying the methods of Freeze and Cherry (1979) and Shackelford (1994) to the breakthrough curves. In the unconfined compression tests, the most significant sample was AM-3, which resisted an average force of 992.1 N, reaching an average stress of 477.4 kPa. After an integrated analysis, the best samples were AM-2 and AM-3; AM-2 failed only in one scenario created to analyze the unconfined compressive strength.
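The Freundlich fit mentioned above works on the linearized form log q = log K_f + (1/n) log C_e, so the parameters and the R² fall out of an ordinary least-squares fit in log-log space. A minimal sketch with synthetic sorption data (not the thesis's measurements):

```python
import math

def fit_freundlich(ce, q):
    """Least-squares fit of log10(q) = log10(Kf) + (1/n) * log10(Ce).
    Returns (Kf, n, r_squared)."""
    xs = [math.log10(c) for c in ce]
    ys = [math.log10(v) for v in q]
    m = len(xs)
    mx, my = sum(xs) / m, sum(ys) / m
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = sxy / sxx                    # = 1/n
    intercept = my - slope * mx          # = log10(Kf)
    ss_res = sum((y - (intercept + slope * x)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    r2 = 1 - ss_res / ss_tot
    return 10 ** intercept, 1 / slope, r2

# Synthetic equilibrium data generated from Kf = 2, n = 2 (q = 2 * Ce**0.5)
ce = [1.0, 4.0, 9.0, 16.0]
q = [2 * c ** 0.5 for c in ce]
kf, n, r2 = fit_freundlich(ce, q)
# exact fit: kf ≈ 2, n ≈ 2, r2 ≈ 1
```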
31

Souza, Rafaela Faciola Coelho de. "Migração de poluentes inorgânicos em liners compostos." Universidade de São Paulo, 2009. http://www.teses.usp.br/teses/disponiveis/18/18132/tde-23032010-101309/.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
Neste trabalho analisa-se o comportamento de duas configurações de liners através da percolação com solução de KCl. São utilizadas amostras de solo compactadas, do interior do estado de São Paulo, provenientes da Formação Corumbataí, combinadas a um geocomposto bentonítico (GCL) de fabricação nacional. São utilizados ensaios em coluna de percolação em dois corpos-de-prova, nas configurações: solo compactado acima do GCL e solo compactado abaixo do GCL. Esses ensaios permitiram a determinação da condutividade hidráulica e dos parâmetros de transporte dos materiais estudados. Dessa forma, compara-se o comportamento desses materiais combinados com os resultados obtidos por Musso (2008), que adotou a configuração independente. Após o início da percolação com solução KCl a condutividade hidráulica (K) das duas configurações apresentou comportamento crescente. No entanto, este aumento no K não afetou o desempenho hidráulico dos materiais, e a condutividade hidráulica mostrou-se com valores da ordem de 10⁻¹¹ m/s. O fator de retardamento da configuração na qual o GCL encontra-se acima da camada de solo compactado se mostrou maior com relação à outra configuração analisada. No geral, considerou-se que esta configuração apresentou melhor desempenho como liner composto. Na comparação dos resultados obtidos nesta pesquisa com os apresentados por Musso (op. cit.) a condutividade hidráulica não diferiu, e as configurações de liner compostos apresentaram maiores fatores de retardamento do que o liner do solo compactado isoladamente.
This research analyzes the behavior of two liner configurations subjected to the percolation of a KCl solution. Samples of compacted soil from the Corumbataí Formation, combined with a geosynthetic clay liner (GCL) of Brazilian manufacture, were used. Column percolation tests were performed on two specimens, in the settings: compacted soil above the GCL and compacted soil below the GCL. These tests allowed the determination of the hydraulic conductivity and transport parameters of the materials under study. Thus, the behavior of these composite liners was compared with the results obtained by Musso (2008), who tested the independent configuration. After the start of percolation of the KCl solution, the hydraulic conductivity (K) of the two settings showed an increase. However, this increase in K did not affect the hydraulic performance of the materials, and the hydraulic conductivity was observed with values of about 10⁻¹¹ m/s. The retardation factor of the configuration in which the GCL is above the layer of compacted soil was larger than that of the other configuration analyzed. Overall, this configuration showed better performance as a composite liner. Comparing the results with those presented by Musso (2008), the hydraulic conductivity did not differ, and the composite liners had higher retardation factors than the liner of compacted soil alone.
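One common way to read a retardation factor off a breakthrough curve, in the spirit of the methods cited above though much simplified, is to take the number of percolated pore volumes at which the relative concentration C/C0 reaches 0.5, interpolating between measurements:

```python
def retardation_factor(pore_volumes, rel_conc, target=0.5):
    """Linearly interpolate the breakthrough curve (C/C0 vs pore volumes)
    to find the pore-volume value at which C/C0 first crosses `target`."""
    points = list(zip(pore_volumes, rel_conc))
    for (t0, c0), (t1, c1) in zip(points, points[1:]):
        if c0 <= target <= c1:
            return t0 + (target - c0) * (t1 - t0) / (c1 - c0)
    raise ValueError("curve never crosses target")

# Toy breakthrough curves: a weakly retarded tracer (chloride) breaks
# through earlier than a sorbed cation (copper).
pv = [0, 1, 2, 3, 4, 5]
chloride = [0.0, 0.4, 0.8, 1.0, 1.0, 1.0]
copper = [0.0, 0.0, 0.1, 0.3, 0.6, 0.9]
rd_cl = retardation_factor(pv, chloride)   # ≈ 1.25 pore volumes
rd_cu = retardation_factor(pv, copper)     # ≈ 3.67 pore volumes
```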
32

Kohlschütter, Christian [Verfasser]. "Exploiting links and text structure on the Web : a quantitative approach to improving search quality / Christian Kohlschütter." Hannover : Technische Informationsbibliothek und Universitätsbibliothek Hannover, 2011. http://d-nb.info/101196922X/34.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
33

Kleyn, Judith. "The performance of the preliminary test estimator under different loss functions." Thesis, University of Pretoria, 2014. http://hdl.handle.net/2263/43132.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
In this thesis different situations are considered in which the preliminary test estimator is applied, and the performance of the preliminary test estimator under different proposed loss functions, namely the reflected normal, linear exponential (LINEX) and bounded LINEX (BLINEX) loss functions, is evaluated. In order to motivate the use of the BLINEX loss function rather than the reflected normal or the LINEX loss function, the risk of the preliminary test estimator and its component estimators derived under BLINEX loss is compared to the risk of the preliminary test estimator and its component estimators derived under both reflected normal loss and LINEX loss, analytically (in some sections) and computationally. It is shown that both the risk under reflected normal loss and the risk under LINEX loss are higher than the risk under BLINEX loss. The key focus point under consideration is the estimation of the regression coefficients of a multiple regression model under two conditions, namely the presence of multicollinearity and linear restrictions imposed on the regression coefficients. In order to address the multicollinearity problem, the regression coefficients were adjusted by making use of Hoerl and Kennard's (1970) approach in ridge regression. Furthermore, in situations where under- or overestimation exists, symmetric loss functions will not give optimal results and it was necessary to consider asymmetric loss functions. In the economic application, it was shown that a loss function which is both asymmetric and bounded, to ensure a maximum upper bound for the loss, is the most appropriate function to use.
In order to evaluate the effect that different ridge parameters have on the estimation, the risk values were calculated for all three ridge regression estimators under different conditions, namely an increase in variance, an increase in the level of multicollinearity, an increase in the number of parameters to be estimated in the regression model and an increase in the sample size. These results were compared to each other and summarised for all the proposed estimators and proposed loss functions. The comparison of the three proposed ridge regression estimators under all the proposed loss functions was also summarised for an increase in the sample size and an increase in variance.
Thesis (PhD)--University of Pretoria, 2014.
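The asymmetric and bounded losses discussed above have simple closed forms: the LINEX loss is L(Δ) = b(e^{aΔ} − aΔ − 1), and BLINEX bounds it as L/(1 + λL), so the loss can never exceed 1/λ. A small sketch with illustrative parameter values:

```python
import math

def linex(delta, a=1.0, b=1.0):
    """LINEX loss: asymmetric, penalizing errors with the sign of `a` more."""
    return b * (math.exp(a * delta) - a * delta - 1)

def blinex(delta, a=1.0, b=1.0, lam=0.5):
    """Bounded LINEX: same shape near zero, but bounded above by 1/lam."""
    loss = linex(delta, a, b)
    return loss / (1 + lam * loss)

# Asymmetry: with a > 0, a positive error costs more than a negative one
over, under = linex(1.0), linex(-1.0)
# Boundedness: BLINEX never exceeds 1/lam = 2, however large the error
big = blinex(10.0)
```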
34

Ylinen, Tomi. "Search for Gamma-ray Lines from Dark Matter with the Fermi Large Area Telescope." Doctoral thesis, KTH, Partikel- och astropartikelfysik, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-12853.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
Dark matter (DM) constitutes one of the most intriguing but so far unresolved issues in physics. In many extensions of the Standard Model of particle physics, the existence of a stable Weakly Interacting Massive Particle (WIMP) is predicted. The WIMP is an excellent DM particle candidate. One of the most interesting scenarios is the creation of monochromatic gamma-rays from the annihilation or decay of these particles. This type of signal would represent a “smoking gun” for DM, since no other known astrophysical process should be able to produce it. In this thesis, the search for spectral lines with the Large Area Telescope (LAT) onboard the Fermi Gamma-ray Space Telescope (Fermi) is presented. The satellite was successfully launched from Cape Canaveral in Florida, USA, on 11 June 2008. The energy resolution and performance of the detector are both key factors in the search and are investigated here using beam test data, taken at CERN in 2006 with a scaled-down version of the Fermi-LAT instrument. A variety of statistical methods, based on both hypothesis tests and confidence interval calculations, are then reviewed and tested in terms of their statistical power and coverage. A selection of the statistical methods are further developed into peak finding algorithms and applied to a simulated data set called obssim2, which corresponds to one year of observations with the Fermi-LAT instrument, and to almost one year of Fermi-LAT data in the energy range 20–300 GeV. The analysis of the Fermi-LAT data yielded no detection of spectral lines, so limits are placed on the velocity-averaged annihilation cross-section and on the decay lifetime, and the theoretical implications are discussed.
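At its simplest, a line search slides a window across the measured spectrum and asks whether the counts in a bin significantly exceed the background expected from the neighbouring sidebands. A toy Poisson z-score sketch (not the statistical methods actually compared in the thesis; the spectrum is invented):

```python
import math

def line_scan(counts, window=1):
    """For each interior bin, compare the observed counts against a background
    estimated from the sideband bins; return a z-score per bin."""
    scores = {}
    for i in range(window, len(counts) - window):
        observed = counts[i]
        background = 0.5 * (counts[i - window] + counts[i + window])
        if background > 0:
            scores[i] = (observed - background) / math.sqrt(background)
    return scores

# Smooth, falling toy background with an injected "line" excess at bin 4
spectrum = [400, 300, 230, 180, 400, 110, 90, 75, 60]
scores = line_scan(spectrum)
peak_bin = max(scores, key=scores.get)
# the injected excess at bin 4 dominates the scan
```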
35

Charbachi, Peter, and Linus Eklund. "Thesis for the Degree of Bachelor of Science in Computer Science by Peter Charbachi and Linus Eklund : PAIRWISE TESTING FOR PLC EMBEDDED SOFTWARE." Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-32054.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
In this thesis we investigate the use of pairwise testing for PLC embedded software. We compare these automatically generated tests with tests created manually by industrial engineers. The tests were evaluated in terms of fault detection, code coverage and cost. In addition, we compared pairwise testing with randomly generated tests of the same size as the pairwise tests. In order to automatically create test suites for PLC software, a previously created tool called Combinatorial Test Tool (CTT) was extended to support pairwise testing using the IPOG algorithm. Once test suites were created using CTT, they were executed on real industrial programs. The fault detection was measured using mutation analysis. The results of this thesis showed that manual tests achieved better fault detection (8% better mutation score on average) than tests generated using pairwise testing. Even if pairwise testing performed worse in terms of fault detection than manual testing, it achieved better fault detection on average than random tests of the same size. In addition, manual tests achieved on average 97.29% code coverage compared to 93.95% for pairwise testing and 84.79% for random testing. By looking closely at all tests, manual testing performed equally well as pairwise testing in terms of achieved code coverage. Finally, the number of tests for manual testing was lower (12.98 tests on average) compared to pairwise and random testing (21.20 tests on average). Interestingly enough, for the majority of the programs pairwise testing resulted in fewer tests than manual testing.
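CTT uses the IPOG algorithm; a much simpler greedy construction illustrates what pairwise coverage means (this is not IPOG, just a naive sketch that still covers every pair of parameter values, so the suite stays smaller than exhaustive testing):

```python
from itertools import combinations, product

def all_pairs(params):
    """Every (param i, value a, param j, value b) pair that must be covered."""
    pairs = set()
    for (i, vi), (j, vj) in combinations(enumerate(params), 2):
        for a in vi:
            for b in vj:
                pairs.add((i, a, j, b))
    return pairs

def pairs_of(test):
    """The parameter-value pairs covered by one concrete test."""
    return {(i, test[i], j, test[j])
            for i, j in combinations(range(len(test)), 2)}

def greedy_pairwise(params):
    """Greedily pick the candidate test covering most uncovered pairs."""
    remaining = all_pairs(params)
    candidates = list(product(*params))
    suite = []
    while remaining:
        best = max(candidates, key=lambda t: len(pairs_of(t) & remaining))
        suite.append(best)
        remaining -= pairs_of(best)
    return suite

# 3 boolean PLC inputs: exhaustive testing needs 2**3 = 8 tests,
# pairwise coverage is reached with fewer
params = [(0, 1), (0, 1), (0, 1)]
suite = greedy_pairwise(params)
```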
36

Pastor, Pellicer Joan. "Neural Networks for Document Image and Text Processing." Doctoral thesis, Universitat Politècnica de València, 2017. http://hdl.handle.net/10251/90443.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
Nowadays, the main libraries and document archives are investing a considerable effort in digitizing their collections. Indeed, most of them are scanning the documents and publishing the resulting images without their corresponding transcriptions. This seriously limits the document exploitation possibilities. When the transcription is necessary, it is manually performed by human experts, which is a very expensive and error-prone task. Obtaining transcriptions of the required quality demands the intervention of human experts to review and correct the resulting output of the recognition engines. To this end, it is extremely useful to provide interactive tools to obtain and edit the transcription. Although text recognition is the final goal, several previous steps (known as preprocessing) are necessary in order to get a fine transcription from a digitized image. Document cleaning, enhancement, and binarization (if they are needed) are the first stages of the recognition pipeline. Historical handwritten documents, in addition, show several degradations, stains, ink bleed-through and other artifacts. Therefore, more sophisticated and elaborate methods are required when dealing with this kind of document; in some cases even expert supervision is needed. Once the images have been cleaned, the main zones of the image have to be detected: those that contain text and other parts such as images, decorations, and versal letters. Moreover, the relations among them and with the final text have to be detected. These preprocessing steps are critical for the final performance of the system, since an error at this point will be propagated through the rest of the transcription process. The ultimate goal of the Document Image Analysis pipeline is to obtain the transcription of the text (Optical Character Recognition and Handwritten Text Recognition). In this thesis we aimed to improve the main stages of the recognition pipeline, from the scanned documents as input to the final transcription.
We focused our effort on applying Neural Networks and deep learning techniques directly on the document images to extract suitable features that will be used by the different tasks dealt during the following work: Image Cleaning and Enhancement (Document Image Binarization), Layout Extraction, Text Line Extraction, Text Line Normalization and finally decoding (or text line recognition). As one can see, the following work focuses on small improvements through the several Document Image Analysis stages, but also deals with some of the real challenges: historical manuscripts and documents without clear layouts or very degraded documents. Neural Networks are a central topic for the whole work collected in this document. Different convolutional models have been applied for document image cleaning and enhancement. Connectionist models have been used, as well, for text line extraction: first, for detecting interest points and combining them in text segments and, finally, extracting the lines by means of aggregation techniques; and second, for pixel labeling to extract the main body area of the text and then the limits of the lines. For text line preprocessing, i.e., to normalize the text lines before recognizing them, similar models have been used to detect the main body area and then to height-normalize the images giving more importance to the central area of the text. Finally, Convolutional Neural Networks and deep multilayer perceptrons have been combined with hidden Markov models to improve our transcription engine significantly. The suitability of all these approaches has been tested with different corpora for any of the stages dealt, giving competitive results for most of the methodologies presented.
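The height-normalization step described above can be illustrated with a small sketch. The thesis detects the main body area with neural models; here that detection is approximated by a simple ink-projection heuristic, and the image is then resampled so the main body always occupies a fixed number of rows. The function name, the 0.5 projection threshold, and the padding scheme are illustrative assumptions, not the thesis' actual code.

```python
import numpy as np

def normalize_line_height(img, target_body=32, pad=16):
    """Height-normalize a text-line image (2D array, ink = high values):
    map the detected main-body rows onto `target_body` output rows, with
    `pad` rows of context above and below, via nearest-neighbour
    vertical resampling."""
    profile = img.sum(axis=1)                        # ink mass per row
    body = np.where(profile >= 0.5 * profile.max())[0]
    top, bottom = body[0], body[-1] + 1              # main-body row span
    scale = target_body / max(bottom - top, 1)
    out_h = target_body + 2 * pad
    # source row sampled by each output row
    src = np.round((np.arange(out_h) - pad) / scale + top).astype(int)
    valid = (src >= 0) & (src < img.shape[0])
    out = np.zeros((out_h, img.shape[1]), dtype=img.dtype)
    out[valid] = img[src[valid]]
    return out
```

Whatever the input height, the output always has `target_body + 2 * pad` rows, with the writing's central area anchored to the same band, which is what gives the central text area more importance in the normalized image.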
Pastor Pellicer, J. (2017). Neural Networks for Document Image and Text Processing [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/90443
TESIS
37

Mullen, W. G. "An evaluation of the utility of four in-situ test methods for transmission line foundation design." Diss., Virginia Tech, 1991. http://hdl.handle.net/10919/38760.

38

Vural, Aydin. "Fmcw Radar Altimeter Test Board." Master's thesis, METU, 2003. http://etd.lib.metu.edu.tr/upload/2/1219526/index.pdf.

Abstract:
In this thesis, the principles of a pulse-modulated frequency-modulated continuous wave (FMCW) radar are analyzed, and a time delay is added to the transmitted signal in the laboratory environment. The signal transmitted by the radar is delayed by the time it takes to travel the distance between radar and target. Since that distance is more than one kilometer, the functionality of the radar cannot be tested directly in the laboratory. The delay corresponding to the time elapsed until the transmitted signal is received is therefore simulated using a surface acoustic wave (SAW) delay line. Analyses of the radar components and of the delay line test board are presented.
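The delay the SAW line must emulate follows directly from the round-trip propagation time: an echo from range R arrives 2R/c later. A back-of-the-envelope sketch (illustrative, not from the thesis):

```python
# Two-way propagation delay a bench test must insert so the radar
# "sees" a target at a given range.
C = 299_792_458.0  # speed of light, m/s

def round_trip_delay(range_m: float) -> float:
    """Round-trip time for a target at `range_m` metres."""
    return 2.0 * range_m / C
```

For a 1 km target this is roughly 6.7 microseconds, far longer than any cable run in a lab, which is why an electronic delay line is needed.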
39

Huatorpet, Elin. ""Man vill ha Likes!" : En kvalitativ studie om ungdomars kunskaper och ageranden vid text- och bildpublicering på sociala medier." Thesis, Karlstads universitet, Avdelningen för medie- och kommunikationsvetenskap, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:kau:diva-32722.

Abstract:
The purpose of this study was to investigate what knowledge 16-year-old high school students have about text and image posting on social media, and how they act on the basis of their skills and attitudes toward such publishing. An important aspect was to investigate the role of young people's identity in relation to text and image posting on social media, since according to research this is a large part of what motivates young people in their everyday practices (Buckingham, 2008). The research questions were: What is young people's experience of posting text and images on social media? How aware are young people of the pros and cons of posting text and images on social media? What are young people's attitudes toward text and image posting? How do young people act on the basis of experience, awareness, and attitude? And what role does identity play in their actions? To answer these questions and achieve the aim of the thesis, three semi-structured focus group interviews were conducted in collaboration with two high schools. The results have been reported, interpreted, and analyzed through pattern-seeking. The theoretical framework used in this paper draws on the research and theories of media researchers who have distinguished themselves in media and youth research, for example Ulla Carlsson, David Buckingham, Sonia Livingstone, and Kristen Drotner. I have primarily studied books on themes within identity, media literacy, children, media, and culture. Through these researchers and their theories I could gather knowledge about young people's media habits and identity processes in relation to media use. Media literacy proved during the course of the study to be a very important topic, and especially the book Children and Youth in the Digital Media Culture, edited by Ulla Carlsson, contains many interesting theories and reflections on the subject.
Through the various articles in the book I have gained an understanding of and knowledge about media literacy, a topic that most of the researchers argue is very important, yet seems to be forgotten, or not even taken into consideration, in schools. This study showed that young people live a life where the boundary between the physical and the digital world is in principle non-existent, but that several of the youngsters lacked guidance and support when it came to things that happened on the internet. Despite the fact that the digital world is fully integrated into the physical world in young people's lives, adults are not present in young people's lives on the internet. However, one of the schools in this study had focused on teaching media literacy, and these young people showed greater confidence and security in their virtual lives, with a clearer strategy and awareness of the risks of social media. Judging from the youngsters in this study, the researchers' strong arguments for media literacy education in schools appear highly relevant; nevertheless, media literacy seems to be a forgotten subject, one that most schools do not take into consideration. Why? Is it a lack of time, or a lack of knowledge about the youngsters or the subject? Some interesting conclusions could be drawn from the focus group interviews, namely that there is a big difference in how knowledge is acquired among the students. The girls had gained their knowledge of the advantages and disadvantages of text and image publishing mainly through bad experiences, and had thus learned how best to act on social media and the internet. The boys, however, had received media and information education in school and experienced social media as a relatively safe place.
In theory this could be due to differences between the sexes, as girls are generally more vulnerable in society, but according to researchers it could also have to do with media literacy. Media literacy is currently not a compulsory element in the curriculum, but it can make a big difference to young people's digital lifestyle. Why let go of the children in the digital world when we hold their hands in the physical one?
40

LIU, JIANXUN. "TEST PATTERN GENERATION FOR CROSSTALK FAULT IN DYNAMIC PLA." University of Cincinnati / OhioLINK, 2003. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1069779563.

41

Osama, Hassan Eltayeb Khalid. "Development of the Simulation Model for the CoSES Laboratory Test Microgrid in Modelica." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021.

Abstract:
The evolution of the traditional consumer in a power system into a prosumer has posed many problems for the traditional unidirectional grid. This evolution of the grid model has made it important to study the behaviour of microgrids. This thesis deals with the laboratory microgrid setup at the Munich School of Engineering, built to assist researchers in studying microgrids. The model is built in Dymola, a tool for the Modelica language. Models for the different components were derived to suit the purpose of this study. The equivalent parameters were derived from data sheets and from other simulation programs such as PSCAD. The parameters were entered into the model grid, which was first tested at steady state. This yielded satisfactory results, similar to the reference results from a MATPOWER power flow. Furthermore, fault conditions at several buses were simulated to observe the behaviour of the grid under these conditions. Recommendations for further developing the model to include more detailed component models, such as power electronic converters, are made at the end of the thesis.
42

Maamar, Ali Hussein. "A 32-bit self-checking RISC processor using Dong's Code." Thesis, University of Newcastle Upon Tyne, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.285335.

43

Dufková, Klára. "Využití on-line diagnostiky při výběru pracovníků." Master's thesis, Vysoká škola ekonomická v Praze, 2010. http://www.nusl.cz/ntk/nusl-71741.

Abstract:
This thesis deals with the possibilities of using on-line diagnostic tools in employee selection. The theoretical part summarizes the theoretical basis for employee selection, psychodiagnostics, and on-line diagnostics. Attention is paid to psychodiagnostic characteristics and to identifying the benefits and risks of on-line diagnostics. The practical section examines and assesses the individual on-line diagnostic tools available for employee selection on the Czech market, specifically those offered by Access Assessment s.r.o., Assessment Systems s.r.o., cut-e czech s.r.o., and www.SCIO.cz, s.r.o. The tools are evaluated on the basis of psychometric features, namely standardization, objectivity, reliability, and validity, as these characteristics determine the quality of a questionnaire or test. The result is a recommendation of the on-line questionnaires and tests that are most applicable in a standard selection procedure.
44

Wigington, Curtis Michael. "End-to-End Full-Page Handwriting Recognition." BYU ScholarsArchive, 2018. https://scholarsarchive.byu.edu/etd/7099.

Abstract:
Despite decades of research, offline handwriting recognition (HWR) of historical documents remains a challenging problem which, if solved, could greatly improve the searchability of online cultural heritage archives. Historical documents are plagued with noise, degradation, ink bleed-through, overlapping strokes, variation in slope and slant of the writing, and inconsistent layouts. Often the documents in a collection have been written by thousands of authors, all of whom have significantly different writing styles. In order to better capture the variations in writing styles, we introduce a novel data augmentation technique. This method achieves state-of-the-art results on modern datasets written in English and French and on a historical dataset written in German. HWR models are often limited by the accuracy of the preceding text detection and segmentation steps. Motivated by this, we present a deep learning model that jointly learns text detection, segmentation, and recognition using mostly images without detection or segmentation annotations. Our Start, Follow, Read (SFR) model is composed of a Region Proposal Network to find the start position of handwriting lines, a novel line-follower network that incrementally follows and preprocesses lines of (perhaps curved) handwriting into dewarped images, and a CNN-LSTM network to read the characters. SFR exceeds the performance of the winner of the ICDAR 2017 handwriting recognition competition, even without using the provided competition region annotations.
45

Musgrove, MaryLynn. "Temporal links between climate and hydrology : insights from central Texas cave deposits and groundwater /." Digital version accessible at:, 2000. http://wwwlib.umi.com/cr/utexas/main.

46

Demba, Susanne. "Sensor-based detection of the teat load caused by a collapsing liner using a pressure-indicating film." Doctoral thesis, Humboldt-Universität zu Berlin, 2017. http://dx.doi.org/10.18452/18564.

Abstract:
The aim of the present thesis was to test whether the measurement of static pressure distribution and magnitude with the aid of red color density variation is appropriate to directly measure the teat load caused by a collapsing liner and to identify different factors influencing this load. Therefore, investigations were carried out in a laboratory milking parlor using different artificial teats. The influence of the machine vacuum, the pulsation rate, the pulsation ratio, and the liner type were analyzed. The present investigations showed that the tested method is appropriate to directly measure the teat load due to liner collapse. A significant effect of all tested factors could be found as well. The higher the machine vacuum, pulsation rate, and pulsation ratio, the higher the teat load caused by a collapsing liner. The technical characteristics of a liner, especially the shape of the barrel, differ significantly with regard to the teat load. In all investigations more pressure was applied to the teat end.
47

Berglund, Josefin, and Kaisa Hasselquist. "Fonologi hos svenska 5- och 6-åringar med typisk språkutveckling : Referensmaterial till det fonologiska testmaterialet LINUS." Thesis, Linköpings universitet, Logopedi, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-109068.

Abstract:
A new phonological test, LINUS, for Swedish-speaking children between the ages of three and seven, has been developed at the speech and language pathology department at Linköping University. The aim of the present study was to create a reference manual for the long version of the new test. The participants were children between the ages of five and seven in a medium-sized municipality in the northern part of Kalmar County, Sweden. In total, 124 native Swedish-speaking children (58 girls and 66 boys) with typical language development participated. The children were divided into two age groups, 5;0-5;11;31 and 6;0-6;11;31 years. The collected data was analysed with respect to acquisition of phonemes and word structure processes. The percentages of correctly produced words (PWC), consonants (PCC), and vowels (PVC) were calculated. All phonemes except /s/ were established in both age groups; /s/ was found to be either substituted or distorted. Among the 5-year-old children /s/ was established for 84%; substitutions of /s/ were found in 7% and distortions in 23%. Among the 6-year-old children /s/ was established for 88%; substitutions of /s/ were found in 3% and distortions in 16%. The phoneme /r/ proved to be a borderline case for acquisition in the younger age group (91%). The most common word structure process in both groups was assimilation. A significant difference between the two age groups was found for assimilation (p=0.022), with lower occurrence in the older group. Two-consonant clusters (CC) and three-consonant clusters (CCC) were not frequently reduced, although CC clusters were reduced more frequently than CCC clusters. Both age groups had high percentages of correctly produced words, consonants, and vowels: PWC was 93% for 5-year-olds and 97% for 6-year-olds; PCC was 98% and 99%, respectively; and PVC was 100% for both age groups. An age difference was shown for PWC, but not for the other measures. No gender differences were found.
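The percent-correct measures used in the study (PWC, PCC, PVC) share one simple form: the share of target segments the child realizes correctly. A minimal sketch, assuming the produced and target segment sequences are already aligned (real scoring works on phonetically aligned transcriptions):

```python
def percent_correct(produced, target):
    """Percentage of aligned target segments produced correctly,
    e.g. PCC when both sequences contain only consonants."""
    correct = sum(p == t for p, t in zip(produced, target))
    return 100.0 * correct / len(target)
```

For example, a child fronting /s/ to [t] in the target consonant sequence /s t r/ scores 2 of 3 consonants correct, i.e. a PCC of about 66.7%.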
48

Chibane, Idir. "Impact des liens hypertextes sur la précision en recherche d'information." Phd thesis, Université Paris Sud - Paris XI, 2008. http://tel.archives-ouvertes.fr/tel-00463066.

Abstract:
The Web is characterized by an exponentially growing volume of information and by the heterogeneity of its resources. Given the very large number of answers returned by a search engine, the challenge is to place relevant answers among the first results. We are interested in relevance propagation algorithms for corpora of hypertext documents, and in particular in link analysis, in order to exploit the information conveyed by links and by the neighbourhood of Web documents. However, the existing techniques depend on static parameters, fixed a priori according to the type of collection and the organization of the Web pages. In this thesis, we propose a new relevance propagation method using dynamically computed parameters, independent of the collection used. Indeed, we propose to model the matching function of an information retrieval system by taking into account both the content of a document and its neighbourhood. This neighbourhood is computed dynamically by weighting the hyperlinks between documents according to the number of distinct query terms contained in those documents. To handle the heterogeneity of Web documents, we model Web resources at different levels of granularity (site, page, block) in order to take into account the different topics contained in a single document. We also propose a method for the thematic segmentation of Web pages using visual criteria and a representation of page content, in order to extract thematic blocks that are used to improve retrieval performance. We experimented with our system on the WT10g and GOV test collections.
We conclude that our model provides good results compared with classical algorithms based on document content alone and with those based on link analysis (PageRank, HITS, relevance propagation).
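The core idea of query-dependent link weighting can be sketched in a few lines: a linked neighbour contributes to a document's score in proportion to how many distinct query terms that neighbour contains, so the link weights change with every query instead of being fixed a priori. The linear combination with weight `alpha` below is an illustrative assumption, not the thesis' precise model.

```python
def propagate(content, links, terms, query, alpha=0.5):
    """content: doc -> content-only score; links: doc -> linked docs;
    terms: doc -> set of index terms; query: set of query terms.
    Returns the combined score per document."""
    scores = {}
    for d, base in content.items():
        neigh = 0.0
        for n in links.get(d, []):
            # dynamic link weight: fraction of distinct query terms in n
            w = len(query & terms[n]) / len(query)
            neigh += w * content[n]
        scores[d] = base + alpha * neigh
    return scores
```

A document linked to neighbours that cover all the query terms thus gets a full-strength boost, while links to off-topic pages contribute nothing.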
49

ZHANG, WEI. "COLUMN LEACHING TESTS TO STUDY MOBILIZATION OF RADIONUCLIDES IN LINER SYSTEM OF ON-SITE DISPOSAL FACILITY AT FERNALD SITE." University of Cincinnati / OhioLINK, 2001. http://rave.ohiolink.edu/etdc/view?acc_num=ucin997966038.

50

Nebut, Clémentine. "Génération automatique de tests à partir des exigences et application aux lignes de produits logicielles." Rennes 1, 2004. http://www.theses.fr/2004REN10099.

Abstract:
The contribution of this thesis is an approach for automatically generating functional tests from requirements, taking into account the control of testing cost, adaptability to the software product line context, compatibility with industrial practices, and the complexity of real software. Our approach is based on an extended use case model, connected upstream to a controlled natural language analyzer and downstream to a test generator. The controlled language brings the method closer to industrial practice and formalizes the requirements enough to transform them into a simulatable use case model (via the addition of interpretable contracts). Test criteria then make it possible to generate high-level test objectives, which are subsequently refined into test cases using scenarios. Variability in the requirements is taken into account at every level of test generation, so the approach is well suited to product lines.
