
Dissertations / Theses on the topic 'ETL Processing'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'ETL Processing.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Albrecht, Alexander, and Felix Naumann. "Understanding cryptic schemata in large extract-transform-load systems." Universität Potsdam, 2012. http://opus.kobv.de/ubp/volltexte/2012/6125/.

Full text
Abstract:
Extract-Transform-Load (ETL) tools are used for the creation, maintenance, and evolution of data warehouses, data marts, and operational data stores. ETL workflows populate those systems with data from various data sources by specifying and executing a DAG of transformations. Over time, hundreds of individual workflows evolve as new sources and new requirements are integrated into the system. The maintenance and evolution of large-scale ETL systems requires much time and manual effort. A key problem is understanding the meaning of unfamiliar attribute labels in source and target databases and in ETL transformations. Hard-to-understand attribute labels lead to frustration and to time wasted in developing and understanding ETL workflows. We present a schema decryption technique to support ETL developers in understanding cryptic schemata of sources, targets, and ETL transformations. For a given ETL system, our recommender-like approach leverages the large number of mapped attribute labels in existing ETL workflows to produce good and meaningful decryptions. In this way we are able to decrypt attribute labels consisting of several unfamiliar few-letter abbreviations, such as UNP_PEN_INT, which we can decrypt to UNPAID_PENALTY_INTEREST. We evaluate our schema decryption approach on three real-world repositories of ETL workflows and show that it is able to suggest high-quality decryptions for cryptic attribute labels in a given schema.
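The decryption idea lends itself to a compact illustration. Below is a minimal, hypothetical Python sketch of the general approach the abstract describes (learning expansions from attribute labels already mapped in existing workflows), not the authors' actual recommender algorithm; all training pairs and names are invented.

```python
# Minimal sketch of the schema-decryption idea from the abstract: learn
# abbreviation -> expansion pairs from attribute labels that are already
# mapped in existing ETL workflows, then expand new cryptic labels.
# Illustrative only; not the authors' actual recommender algorithm.
from collections import defaultdict

# Hypothetical training data: (cryptic label, known decryption) pairs
# harvested from mapped attributes in existing workflows.
MAPPED_LABELS = [
    ("UNP_AMT", "UNPAID_AMOUNT"),
    ("PEN_RATE", "PENALTY_RATE"),
    ("INT_ACCR", "INTEREST_ACCRUED"),
]

def learn_expansions(pairs):
    """Align underscore-separated tokens and count candidate expansions."""
    counts = defaultdict(lambda: defaultdict(int))
    for cryptic, full in pairs:
        for abbr, word in zip(cryptic.split("_"), full.split("_")):
            if word.startswith(abbr):          # plausible abbreviation
                counts[abbr][word] += 1
    return counts

def decrypt(label, counts):
    """Expand each token to its most frequently seen expansion."""
    out = []
    for abbr in label.split("_"):
        cands = counts.get(abbr)
        out.append(max(cands, key=cands.get) if cands else abbr)
    return "_".join(out)

counts = learn_expansions(MAPPED_LABELS)
print(decrypt("UNP_PEN_INT", counts))  # -> UNPAID_PENALTY_INTEREST
```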
APA, Harvard, Vancouver, ISO, and other styles
2

Veselý, Jan. "Implementace BI v servisním oddělení telekomunikační společnosti." Master's thesis, Vysoká škola ekonomická v Praze, 2012. http://www.nusl.cz/ntk/nusl-165129.

Full text
Abstract:
The subject of the thesis is a proof of concept of a Business Intelligence (BI) implementation in the Communications service department of the Kapsch BusinessCom s.r.o. company. The aim of the work is developing a proof-of-concept BI solution and consequently producing an output for the final users. The main outcome of the thesis is an operating BI solution used in practice. The solution was carried out using the tools MS SQL Server 2008 R2 Management Studio, MS SQL Server 2008 R2 Business Intelligence Development Studio and MS Excel 2007. The project analyses data obtained from the ERP (Enterprise Resource Planning) database of the MS Dynamics Navision system. Among other things, the project records information on "Voice" projects from the Communications service department, which mainly focuses on the service of PBXs and similar products. During the project, a multidimensional analysis was carried out, a data warehouse was designed, data pumps were created, and an OLAP (On-line Analytical Processing) cube and output in the Microsoft Excel 2007 program were produced.
APA, Harvard, Vancouver, ISO, and other styles
3

Zitoun, Isabel, and Ann Josefsson. "ETL-processen : teori kontra praktik." Thesis, Uppsala universitet, Informationssystem, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-130257.

Full text
Abstract:
Nowadays, companies and organizations manage large amounts of data to be used as support in decision making. A good solution for data management is a data warehouse. A data warehouse is used to gather information from multiple sources within a company and arrange the information in a way that makes it easy to retrieve and analyze. Designing a data warehouse is not a simple task. This study focuses on what a data warehouse is and how the ETL process is used to store data in the data warehouse. ETL stands for Extract, Transform and Load and is the process that retrieves data, transforms it and finally loads it into the data warehouse. In addition to gathering information from the literature, we also conducted an interview with data architect Rasmus Lindén at Know IT AB. During this interview Lindén described how Know IT AB had constructed a data warehouse for Upplands Lokaltrafik, with primary focus on how they used the ETL process to treat data to be stored in UL's data warehouse. The purpose of this paper is to examine the theory behind the ETL process and subsequently investigate how it may be applied in practice, by comparing the theory with how the company Know IT AB has applied it when designing data warehouses. One of the observations we made is that the theory we studied fails to mention or illustrate some factors that, during our interview, we found to be important in designing data warehouses.
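Since several entries in this list revolve around the ETL process, a toy sketch may help fix the terminology. The following Python fragment illustrates the three steps (Extract, Transform, Load) in miniature; the source file, column names and warehouse table are invented for illustration.

```python
# Toy illustration of the three ETL steps described above; the source
# CSV, the transformation rule, and the warehouse table are all invented.
import csv
import sqlite3

def extract(path):
    """Extract: read raw rows from a source system (here, a CSV file)."""
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

def transform(rows):
    """Transform: normalise dates and cast amounts before loading."""
    return [(r["customer_id"], r["date"][:10], float(r["amount"]))
            for r in rows]

def load(rows, conn):
    """Load: write the cleaned rows into a warehouse fact table."""
    conn.execute("CREATE TABLE IF NOT EXISTS fact_sales "
                 "(customer_id TEXT, sale_date TEXT, amount REAL)")
    conn.executemany("INSERT INTO fact_sales VALUES (?, ?, ?)", rows)
    conn.commit()

conn = sqlite3.connect("warehouse.db")
load(transform(extract("sales.csv")), conn)
```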
APA, Harvard, Vancouver, ISO, and other styles
4

Mašek, Martin. "Datové sklady - principy, metody návrhu, nástroje, aplikace, návrh konkrétního řešení." Master's thesis, Vysoká škola ekonomická v Praze, 2007. http://www.nusl.cz/ntk/nusl-10145.

Full text
Abstract:
The main goal of this thesis is to summarize and introduce general theoretical concepts of Data Warehousing by using the systems approach. The thesis defines Data Warehousing and its main areas and delimits the Data Warehousing area in terms of the higher-level area called Business Intelligence. It also describes the history of Data Warehousing & Business Intelligence, focuses on key principles of Data Warehouse building and explains the practical applications of this solution. The aim of the practical part is to evaluate the theoretical concepts and, based on that, to design and build a Data Warehouse in the environment of an existing company. The final solution includes the Data Warehouse design, hardware and software platform selection, loading with real data by using ETL services and the building of end-user reports. The practical part also aims to demonstrate the power of this technology and to contribute to the business decision-making process in this company.
APA, Harvard, Vancouver, ISO, and other styles
5

Pavlová, Petra. "Měření výkonnosti podniku." Master's thesis, Vysoká škola ekonomická v Praze, 2012. http://www.nusl.cz/ntk/nusl-165086.

Full text
Abstract:
This thesis deals with the application of Business Intelligence (BI) to support corporate performance management in ISS Europe, spol. s r. o. This company provides licences for and implements original software products as well as third-party software products. First, an analysis is conducted in the given company, which then serves as a basis for the implementation of a BI solution interconnected with the company strategies. The main goal is the implementation of a pilot BI solution to aid the monitoring and optimisation of corporate performance. Among the secondary goals are the analysis of related concepts, business strategy analysis, identification of strategic goals and systems, and the proposition and implementation of a pilot BI solution. In its theoretical part, this thesis focuses on the analysis of concepts related to corporate performance and BI implementations and shortly describes the company together with its business strategy. The following practical part is based on the theoretical findings. An analysis of the company is carried out using the Balanced Scorecard (BSC) methodology, the result of which is depicted in a strategic map. This methodology is then supplemented by the Activity Based Costing (ABC) analytical method, which divides expenses according to activities. The results show which expenses are linked to handling individual development, implementation and operations demands for particular contracts. This is followed by an original proposition and the implementation of a BI solution which includes the creation of a Data Warehouse (DWH), the design of Extract, Transform and Load (ETL) and Online Analytical Processing (OLAP) systems and the generation of sample reports. The main contribution of this thesis is in providing the company management with an analysis of company data from a multidimensional perspective, which can be used as a basis for prompt and correct decision-making, realistic planning and performance and product optimisation.
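The ABC step mentioned above reduces to simple driver-based arithmetic. The following sketch illustrates the general idea with invented figures and driver names; it is not the thesis's actual cost model.

```python
# Sketch of Activity-Based Costing: overhead is pooled per activity,
# divided by total cost-driver units to get a rate, then traced to a
# contract by the driver units it consumed. All numbers are invented.
activities = {                      # activity -> (total cost, total driver units)
    "development":    (120_000, 600),   # driver: person-hours
    "implementation": (80_000, 400),
    "operations":     (50_000, 1000),
}
contract_usage = {                  # driver units consumed by one contract
    "development": 45, "implementation": 30, "operations": 120,
}
cost = sum(total / units * contract_usage[activity]
           for activity, (total, units) in activities.items())
print(f"Overhead traced to contract: {cost:.2f}")   # -> 21000.00
```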
APA, Harvard, Vancouver, ISO, and other styles
6

Jakubičková, Nela. "Návrh metodiky testování BI řešení." Master's thesis, Vysoká škola ekonomická v Praze, 2011. http://www.nusl.cz/ntk/nusl-114404.

Full text
Abstract:
This thesis deals with Business Intelligence and its testing. It seeks to highlight the differences from classical software testing and to design a methodology for BI solution testing that could be used in practice on real projects of BI companies. The aim of the thesis is to design a methodology for BI solution testing based on theoretical knowledge of Business Intelligence and software testing, with an emphasis on specific BI characteristics and requirements, in accordance with Clever Decision's requirements, and to test it in practice on a real project in this company. The paper is written on the basis of a study of the literature in the fields of Business Intelligence and software testing from Czech and foreign sources, as well as on the recommendations and experience of Clever Decision's employees. It is one of the few, if not the first, sources dealing with a methodology for BI solution testing in the Czech language. This work could also serve as a basis for more comprehensive methodologies of BI solution testing. The thesis can be divided into a theoretical and a practical part. The theoretical part explains the purpose of Business Intelligence use in enterprises. It elucidates the particular components of a BI solution, then software testing itself and various types of tests, with emphasis on the differences and specificities of Business Intelligence. The theoretical part is followed by the designed methodology for BI solution testing, using a generic model of BI/DW solution testing. The highlight of the practical part is the description of real BI project testing at Clever Decision according to the designed methodology.
APA, Harvard, Vancouver, ISO, and other styles
7

Lucena, Maria Inez Probst. "Processing input: an investigation into Brazilian EFL students." Repositório Institucional da UFSC, 1998. http://repositorio.ufsc.br/xmlui/handle/123456789/77783.

Full text
Abstract:
Master's thesis (Dissertação de mestrado), Universidade Federal de Santa Catarina, Centro de Comunicação e Expressão.

Much research in the second/foreign language (L2) acquisition field has been carried out on the way learners process input, in an attempt to determine how second/foreign languages are best learned. Based on cognitive psychology and within an input processing perspective, researchers have been seeking to obtain information about the relationship between input and learners' cognitive processes. The main claim within this perspective is that since humans do not have unlimited supplies of attention, it is difficult to attend to everything in the input at the same time. Van Patten (1990) investigated the relationship between learners' attention to the meaning and to the formal features of the language input and found that focusing on meaning competes with focusing on form, and that only when comprehension as a skill is automatized can learners simultaneously attend to form without loss of information. The purpose of the present study was to investigate, through a replication of Van Patten's (1990) study, whether Brazilian EFL students at the secondary school level perform in different ways when they are asked to attend to both form and meaning in listening tasks. The participants in this study were 71 Brazilian secondary school students enrolled in EFL classes at Colégio de Aplicação, a public high school linked to the Federal University of Santa Catarina. The experiment followed the same general procedures used by Van Patten. Students at three different levels of competence listened to recorded passages. At each level students were divided into four different groups and each group was expected to carry out a slightly different task. Thus, in each task learners were expected to pay attention to different things: in Task I, only to the content; in Task II, to the content and to the key lexical item Einstein; in Task III, to the content and to the definite article the; and in Task IV, to the content and to the past verb morpheme -ed. Subjects were to demonstrate their attention to target items by placing an X on their papers every time they heard an occurrence. They were asked to write freely, in Portuguese, everything they remembered from the passages, and their performance in each task was assessed in terms of the number of idea units recalled. Results provided evidence that attention to content and grammatical forms was more difficult than attention to only content or to content plus a lexical item, and only the more advanced learners showed that they could more easily focus on form without affecting comprehension. The results of the present study reinforce Van Patten's claims that learners' attention during input processing is focused first on meaning, and that only when comprehension as a skill becomes automatic are learners more able to detect grammatical items while processing information without negatively affecting comprehension.
APA, Harvard, Vancouver, ISO, and other styles
8

Anibal, Luana Peixoto. "Istar : um esquema estrela otimizado para Image Data Warehouses baseado em similaridade." Universidade Federal de São Carlos, 2011. https://repositorio.ufscar.br/handle/ufscar/484.

Full text
Abstract:
A data warehousing environment supports the decision-making process through the investigation and analysis of data in an organized and agile way. However, current data warehousing technologies do not allow the decision-making process to be carried out based on images' pictorial (intrinsic) features. This analysis cannot be carried out in a conventional data warehousing environment because it requires the management of data related to the intrinsic features of the images in order to perform similarity comparisons. In this work, we propose a new data warehousing environment called iCube to enable the processing of OLAP perceptual similarity queries over images, based on their pictorial (intrinsic) features. Our approach deals with and extends the three main phases of the traditional data warehousing process to allow the use of images as data. For the data integration phase, or ETL phase, we propose a process to represent the image by its intrinsic content (such as color or texture numerical descriptors) and integrate this data with conventional data in the DW. For the dimensional modeling phase, we propose a star schema, called iStar, that stores both the intrinsic and the conventional image data. Moreover, at this stage, our approach models the schema to represent and support the use of different user-defined perceptual layers. For the data analysis phase, we propose an environment in which the OLAP engine uses image similarity as a query predicate. This environment employs a filter mechanism to speed up query execution. The iStar was validated through performance tests evaluating both the building cost and the cost of processing IOLAP queries. The results showed that our approach provides an impressive performance improvement in IOLAP query processing. The performance gain of the iCube over the best related work (i.e. SingleOnion) was up to 98.21%.
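The core idea of iStar (keeping an image's intrinsic feature vector next to its conventional dimension attributes, so that similarity can act as a query predicate) can be sketched in a few lines. The schema, feature vectors and threshold below are illustrative assumptions, not the thesis's actual design.

```python
# Rough sketch of the iStar idea: store an intrinsic feature vector
# alongside conventional attributes, then filter an OLAP-style query
# by similarity to a query image. Schema and values are invented.
import json
import math
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE dim_image (
    image_id INTEGER PRIMARY KEY,
    modality TEXT,                -- conventional attribute
    color_histogram TEXT          -- intrinsic content, JSON-encoded vector
)""")
rows = [(1, "xray", json.dumps([0.2, 0.5, 0.3])),
        (2, "xray", json.dumps([0.1, 0.1, 0.8]))]
conn.executemany("INSERT INTO dim_image VALUES (?, ?, ?)", rows)

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

query_vec, radius = [0.2, 0.4, 0.4], 0.2   # similarity predicate
for image_id, modality, hist in conn.execute(
        "SELECT * FROM dim_image WHERE modality = 'xray'"):
    if euclidean(json.loads(hist), query_vec) <= radius:
        print("similar image:", image_id)   # -> similar image: 1
```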
APA, Harvard, Vancouver, ISO, and other styles
9

Zuckernick, Howard. "The processing of words in Finnish EFL oral reading." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1994. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/MQ51543.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Sieh, Yu-Cheng. "Phonological processing skills in young learners' EFL vocabulary acquisition." Thesis, University of Newcastle Upon Tyne, 2007. http://hdl.handle.net/10443/266.

Full text
Abstract:
Four factors were examined to investigate English vocabulary learning among 9-year-old elementary school children in Taiwan. The four factors were use of their native language, length of English instruction, and two phonological processing capabilities: phonological memory and phonological sensitivity. Apart from a series of paper-and-pencil and computerized vocabulary assessments, two nonword repetition tasks along with five detection and production tasks of rimes and phonemes were used to measure phonological memory and phonological sensitivity. The young learners' scores on all vocabulary tests were positively correlated with phonological memory and phonological sensitivity, as was also evidenced in studies by Gathercole et al. (1997) and Bowey (1996). A similar pattern of association was found between the learners' vocabulary performance and their length of English instruction. However, neither of the two phonological processing capabilities was associated with English instruction length. The reaction times of the two online vocabulary tests suggest that an extra input of L1 gloss in explicit vocabulary teaching might have resulted in faster aural recognition of single English words. This supports Kroll and Stewart's (1994) revised hierarchical model of bilingual representation, which postulates that beginning L2 learners have their two languages interconnected at the lexical level. Results of stepwise and hierarchical regression analyses confirmed that English phonological sensitivity was the best predictor of young learners' English vocabulary performance and contributed uniquely to their vocabulary scores after age, English instruction length, vocabulary knowledge from school textbooks, Chinese phonological sensitivity, and phonological memory were statistically controlled.
APA, Harvard, Vancouver, ISO, and other styles
11

Madron, Lukáš. "Datové sklady a OLAP v prostředí MS SQL Serveru." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2008. http://www.nusl.cz/ntk/nusl-235916.

Full text
Abstract:
This paper deals with data warehouses and OLAP. These technologies are defined and described here. Then an introduction to the architecture of the MS SQL Server product and its tools for working with data warehouses and OLAP follows. The knowledge gained is used for the creation of a sample application.
APA, Harvard, Vancouver, ISO, and other styles
12

Turner, Julia J. "Processing Nature." VCU Scholars Compass, 2017. http://scholarscompass.vcu.edu/etd/4697.

Full text
Abstract:
In my artwork, I merge nature with typography. I use macro-level photography to capture details of nature, such as the pistils of a flower or the sensory hairs of an insect. I print enlargements and transfer these photos onto pages of poetic text about nature, or collage them onto canvas. Once transferred, I use multiple media to alter and enhance features of the photos. I intentionally obscure much of the text which allows me to place focus on the overall layout and design. The arrangement of lines of text and spacing of words is used to create a visual rhythm. The poetic script acts as a foundation or platform for presenting the often unnoticed elements of nature. In developing a piece, I use a wet transfer which allows me to obtain subtle textures and imperfections that occur from the process. To add visual interest and complexity, I continue to manipulate the surfaces with various media and marks. This approach helps to create a blending of text and image. Similar to reading a poem or observing nature, I want to engage the viewer in an experience of visual poetry.
APA, Harvard, Vancouver, ISO, and other styles
13

Ferrigno, Andrea Ann. "Processing information." Thesis, University of Iowa, 2013. https://ir.uiowa.edu/etd/2491.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Chan, Nok Chin Lydia. "Grammar "bores the crap out of me!": A mixed-method study on the XTYOFZ construction and its usage by ESL and ENL speakers." Thesis, Stockholms universitet, Engelska institutionen, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-194086.

Full text
Abstract:
Different from Generative Grammar, which sees grammar as a formal system of how words are put together to form sentences, Construction Grammar suggests that grammar is more than just rules and surface forms; instead, grammar includes many form-and-meaning pairings which are called constructions. For years, Construction Grammarians have been investigating constructions with various approaches, including corpus-linguistic, pedagogical, second language acquisition and so on, yet there is still room for exploration. The present paper aims to further investigate the [V the Ntaboo-word out of]-construction (Hoeksema & Napoli, 2008; Haïk, 2012; Perek, 2016; Hoffmann, 2020) (e.g., I kick the hell out of him.) and propose a new umbrella construction, the "X the Y out of Z" (XTYOFZ) construction, for it. Another aim is to examine the usage and comprehension of the XTYOFZ construction by English as a Second Language (ESL) and English as Native Language (ENL) speakers. The usage context and the syntactic and semantic characteristics of the XTYOFZ construction were examined through corpus linguistic methodology. Furthermore, processing and understanding of the construction by ESL and ENL speakers were tested via an online timed Lexical Decision Task as well as an online follow-up survey consisting of questions on English acquisition and usage, and a short comprehension task on the XTYOFZ construction. Corpus data show that, in general, the combination of non-motion action verbs (e.g., scare, beat) as X and taboo terms (e.g., shit, hell) as Y was the most common. Also, it was found that the construction occurs mostly in non-academic contexts such as websites and TV/movies. On the other hand, results from the Lexical Decision Task show that ESL speakers access constructional meaning slightly more slowly than ENL speakers. The follow-up survey also reflects that ESL speakers seem to have a harder time producing and comprehending the construction compared to ENL speakers. By investigating the features of a relatively less-discussed construction and its usage by ESL speakers, this study hopes to increase the knowledge base of Construction Grammar and of ESL construction comprehension and usage, particularly for constructions that are mainly used in more casual settings.
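The corpus step of such a study can be approximated with a simple pattern search. Below is a minimal sketch of extracting the X, Y and Z slots from text; the regular expression and sample sentences are assumptions for illustration, not the study's actual methodology.

```python
# Toy corpus search for the "X the Y out of Z" construction; the word
# pattern and the sample sentences are invented for illustration.
import re

PATTERN = re.compile(r"\b(\w+)\s+the\s+(\w+)\s+out\s+of\s+(\w+)", re.I)

corpus = [
    "Grammar bores the crap out of me!",
    "I kick the hell out of him.",
    "She walked out of the room.",   # no match: different structure
]
for sentence in corpus:
    m = PATTERN.search(sentence)
    if m:
        x, y, z = m.groups()
        print(f"X={x!r}, Y={y!r}, Z={z!r}")
```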
APA, Harvard, Vancouver, ISO, and other styles
15

Fausset, Cara Bailey. "On processing line graphs." Thesis, Atlanta, Ga. : Georgia Institute of Technology, 2008. http://hdl.handle.net/1853/24605.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Goldfus, Caroline Ida. "Reading comprehension and EFL adolescent students with difficulties: exploring a cognitive processing model." Thesis, University of Birmingham, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.503591.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Childress, Lawrence. "The Loss-Processing Framework." Digital Commons @ East Tennessee State University, 2021. https://dc.etsu.edu/etd/3896.

Full text
Abstract:
The circumstances of responding to loss due to human death are among the most stressful experiences encountered in life. Although grief’s symptoms are typically considered essential to their gradual diminishment, possible negative impacts of complications related to grief are also well known, and have been associated with detriments to mental and physical health. Grief, however, can also generate transformative positive change. Thus, albeit ineludible, responding to loss is not uniformly experienced, expressed, or understood. It is also culturally-shaped, making attempts to define “normal” grief, as well as to label some grief “abnormal”—and to medicalize it—possibly problematic. Bereavement (the situation surrounding a death) and mourning (the publicly expressed response to loss due to death) are changing. Some of these changes (e.g., the increase in hospice care settings prior to deaths, and alterations in the ritual responses following all deaths—irrespective of their context) may have important implications for avoiding grief’s possible complications and for promoting its potential benefits. An improved alignment of grief theory, research, and practice is warranted; but theories of grief are diverse, and historically have not been empirically well-supported. This research articulates a new grief model, the loss-processing framework, featuring three dimensional components (perception, orientation, and direction). As a first step toward validation of the framework, also included is an empirical study examining retrospective descriptive reports of adult loss response relating to the first of these three dimensions (perception). As an interpretive, translational approach to understanding grief, the loss-processing framework may serve to positively impact grieving, health, and life quality.
APA, Harvard, Vancouver, ISO, and other styles
18

Rehfuss, Paul Stephen. "Parallelism in contextual processing /." Full text open access at:, 1999. http://content.ohsu.edu/u?/etd,272.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Lin, Yowyu. "Word order, animacy, and agreement cues in sentence processing by L1 Mandarin EFL learners." Thesis, University of Hawaii at Manoa, 2003. http://hdl.handle.net/10125/7109.

Full text
Abstract:
Research within the Competition Model has shown that speakers of different languages use different strategies to interpret sentences. Despite the fact that crosslinguistic studies have incorporated a variety of cues in their experiments, studies investigating Mandarin used mostly word order and animacy cues only. Modifying the experimental designs of the previous studies and adding agreement cues to the research, the present study presented subjects with a series of both grammatical and ungrammatical English sentences and asked them to identify the "actor" of these sentences. One group of English native speakers and two groups of Taiwanese English learners with different English proficiency were recruited. Three research questions were investigated in the present study: (1) Which cues (word order, agreement, and animacy) would be used most by the three different groups of subjects? (2) What kind of language transfer would be found in nonnative learners of English, especially Taiwanese high school students? (3) Would learners of English with different levels of proficiency differ in their use of cues in sentence processing? Findings of the present research indicated that English native speakers used word order as the primary cue, while intermediate and advanced Taiwanese English learners used mainly animacy and word order cues respectively, which demonstrated that a different length of exposure to a foreign language did influence learners' strategies in sentence processing. In addition, results of the present study also revealed a pattern of forward transfer in sentence processing by intermediate nonnative subjects, while the advanced group of subjects did not show this transfer. Interestingly, after modifying previous research designs, the effect of animacy cues in the group of English native speakers did not reach significance in the present study. This may shed light on the influence of real-world bias in the previous studies.
APA, Harvard, Vancouver, ISO, and other styles
20

Mohamed, Hana. "Student centred intercultural interactive processing model of reading EFL fiction in the Libyan context." Thesis, University of Aberdeen, 2017. http://digitool.abdn.ac.uk:80/webclient/DeliveryManager?pid=232406.

Full text
Abstract:
Using literary texts in the EFL classroom has been widely practised in the field of EFL teaching and learning. Many scholars maintain that English literary texts provide language learners with a kind of authentic language used by native speakers in real contexts. Research studies in the EFL field illustrate that EFL learners' problems in reading English literary texts are due to two main reasons: firstly, the complex structure of literary texts; secondly, lack of familiarity with the cultural content of English literary texts. However, the present study sets out to suggest that collaborative work in the classroom can bridge learners' difficulties in constructing the meaning of literary texts. In Libyan universities, learners in the Department of English Language and Linguistics study literature for a considerable period of their university program. The present research suggests a new model to improve the teaching of literary short fiction in one of the English departments in Libyan universities. The new model emphasizes three main tenets: 1. The role of background knowledge in processing literary short fiction; this background knowledge includes not only knowledge of the English language but also familiarity with the cultural content of the literary text as well as the formal organization of literary texts. 2. Since language and culture are intertwined, the approach focuses on developing Libyan learners' cultural and intercultural awareness. 3. The approach suggests the use of Learning Conversations as a scaffolding procedure that allows more interaction and negotiation for co-constructing the meaning of the text. The study adopts a qualitative research approach. The investigation is carried out across three phases. Phase 1 focuses on the assessment of the Libyan EFL participants' problems in reading literary texts by interviewing the ten participants and using a diagnostic test. The second phase is interventional: it investigates learners' development in constructing the meaning of literary texts through the suggested scaffolding procedure (i.e. Learning Conversations). The third phase of the investigation explores learners' reflections on the effectiveness of interactive work in reading literature. The study aims at providing evidence of Libyan EFL learners' perspectives on the new model and the development of their understanding.
APA, Harvard, Vancouver, ISO, and other styles
21

Hart, Sean Michael. "Intelligent Processing of PMR-15." W&M ScholarWorks, 1992. https://scholarworks.wm.edu/etd/1539625733.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Cheimariou, Spyridoula. "Prediction in aging language processing." Diss., University of Iowa, 2016. https://ir.uiowa.edu/etd/3056.

Full text
Abstract:
This thesis explores how predictions about upcoming linguistic stimuli are generated during real-time language comprehension in younger and older adults. Previous research has shown humans' ability to use rich contextual information to compute linguistic predictions during real-time language comprehension. Research into the modulating factors of prediction has shown, first, that predictions are informed by our experience with language and, second, that these predictions are modulated by cognitive factors such as working memory and processing speed. However, little is known about how these factors interact in aging, in which verbal intelligence remains stable or even increases, whereas processing speed, working memory, and inhibitory control decline with age. Experience-driven models of language learning argue that learning occurs across the life span instead of terminating once representations are learned well enough to approximate a stable state. In relation to aging, these models predict that older adults are likely to possess stronger learned associations, such that the predictions they generate during on-line processing may be stronger. At the same time, however, processing speed, working memory, and inhibitory control decline as a function of age, and age-related declines in these processes may reduce the degree to which older adults can predict. Here, I explored the interplay between language and cognitive factors in the generation of predictions and hypothesized that older adults would show stronger predictability effects than younger adults, likely because of their language experience. In this thesis, I provide evidence from reading eye-movements, event-related potentials (ERPs), and EEG phase synchronization for the role of language experience and cognitive decline in prediction in younger and older English speakers. I demonstrated that the eye-movement record is influenced by linguistic factors, which produce greater predictability effects as linguistic experience advances, and by cognitive factors, which produce smaller predictability effects as they decline. Similarly, the N400, an ERP response that is modulated by a word's predictability, was also moderated by cognitive factors. Most importantly, older adults were able to use context efficiently to facilitate upcoming words in the ERP study, unlike younger adults. Further, I provide initial evidence that coherence analysis may be used as a measure of cognitive effort to illustrate the facilitation that prediction confers on language comprehenders. The results indicate that, for a comprehensive account of predictive processing, research needs to take into account the role of experience acquired through the lifetime and the declines that aging brings.
APA, Harvard, Vancouver, ISO, and other styles
23

Fisher, John T. "Ferret CFTR processing and function." Diss., University of Iowa, 2012. https://ir.uiowa.edu/etd/3453.

Full text
Abstract:
The most common cystic fibrosis transmembrane conductance regulator (CFTR) gene mutation is ΔF508, and this causes cystic fibrosis (CF). Animal models that recapitulate the human disease phenotype are critical to understanding pathophysiologic mechanisms in CF and developing therapies. New CF models in the pig and ferret have been generated that develop lung, pancreatic, liver, and intestinal pathologies that reflect disease in CF patients. Species-specific biology in the processing of CFTR has demonstrated that pig and mouse ΔF508-CFTR proteins are more effectively processed to the apical membrane of airway epithelia than human ΔF508-CFTR. The processing behavior of ferret wild-type (WT) and ΔF508-CFTR proteins remains unknown, and such information is important to predicting the utility of a ΔF508-CFTR ferret. To this end, we sought to compare processing, membrane stability, and function of human and ferret WT- and ΔF508-CFTR proteins in a heterologous expression system using HT1080, HEK293T, BHK21, and Cos7 cells, as well as human and ferret CF polarized airway epithelia. Analysis of the protein processing and stability by metabolic pulse-chase and surface On-Cell Western blots revealed that WT-fCFTR half-life and membrane stability were increased relative to WT-hCFTR. Furthermore, in BHK21, Cos7, and CuFi cells, human and ferret ΔF508-CFTR processing was negligible, while low levels of processing of ΔF508-fCFTR could be seen in HT1080 and HEK293T cells. Only WT-fCFTR, but not ΔF508-fCFTR, produced functional cAMP-inducible chloride currents in both CF human and ferret airway epithelia. Further elucidation of the mechanism responsible for elevated fCFTR protein stability may lead to new therapeutic approaches to augment CFTR function. These findings also suggest that generation of a ferret CFTR-ΔF508/ΔF508 animal model may be useful. Furthermore, in the CFTR knockout and CFTR+/+ ferret models we have characterized abnormalities in the bioelectric properties of the trachea, stomach, intestine and gallbladder of newborn CF ferrets. Short-circuit current (ISC) analysis of CF and WT tracheas revealed the following similarities and differences: 1) amiloride-sensitive sodium currents were similar between genotypes, 2) responses to 4,4'-diisothiocyano-2,2'-stilbene disulphonic acid (DIDS) were ~4-fold greater in CF animals, suggesting elevated baseline chloride transport through non-CFTR channels, and 3) as expected, there was a lack of IBMX/forskolin-stimulated and GlyH-101-inhibited currents in CF animals due to the lack of CFTR. CFTR mRNA and protein were present throughout all levels of the WT ferret, and IBMX/forskolin-inducible ISC was only observed in WT animals. Interestingly, IBMX/forskolin-inducible intestinal ISC in WT animals was not inhibited by the CFTR inhibitor GlyH-101 or bumetanide. The luminal pH of the CF ferret stomach was significantly decreased relative to controls, while both genotypes maintained near-neutral pH along the length of the intestine. The WT stomach and gallbladder exhibited significantly enhanced IBMX/forskolin ISC responses and inhibition by GlyH-101 relative to CF samples. These findings demonstrate that multiple organs affected by disease in the CF ferret have bioelectric abnormalities consistent with the lack of cAMP-mediated chloride transport.
APA, Harvard, Vancouver, ISO, and other styles
24

Shah, Vijay Pravin. "An advanced signal processing toolkit for Java applications." Master's thesis, Mississippi State : Mississippi State University, 2002. http://library.msstate.edu/etd/show.asp?etd=etd-11102002-141018.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Sun, Ning. "Friction stir processing of aluminum alloys." Worcester, Mass. : Worcester Polytechnic Institute, 2009. http://www.wpi.edu/Pubs/ETD/Available/etd-050109-144331/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Patel, Paresh S. "Automatic CAD model processing for downstream applications." Diss., Mississippi State : Mississippi State University, 2005. http://sun.library.msstate.edu/ETD-db/ETD-browse/browse.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Scoggins, Randy Keith. "SIGNAL PROCESSING APPROACHES FOR APPEARANCE MATCHING." MSSTATE, 2003. http://sun.library.msstate.edu/ETD-db/theses/available/etd-04162003-143335/.

Full text
Abstract:
The motivation for this work is to study methods of estimating appropriate level-of-detail (LoD) object models by quantifying appearance errors prior to image synthesis. Visualization systems have been developed that employ LoD objects; however, the criteria are often based on heuristics that restrict the form of the object model and rendering method. Also, object illumination is not considered in the LoD selection. This dissertation proposes an image-based scene learning pre-process to determine an appropriate LoD for each object in a scene. Scene learning employs sample images of an object, from many views and with a range of geometric representations, to produce a profile of the LoD image error as a function of viewing distance. Signal processing techniques are employed to quantify how images change with respect to object model resolution, viewing distance, and lighting direction. A frequency-space analysis is presented which includes use of the vision system's contrast sensitivity to evaluate perceptible image differences with error metrics. The initial development of scene learning is directed to sampling the object's appearance as a function of viewing distance and object geometry in scene space. A second phase allows local lighting to be incorporated in the scene learning pre-process. Two methods for re-lighting are presented that differ in accuracy and overhead; both allow properties of an object's image to be computed without rendering. In summary, full-resolution objects produce the best image since the 3D scene is as real as possible. A less realistic 3D scene with simpler objects produces a different appearance in an image, but by what amount? My thesis is that such a quantification can be had: namely, that object fidelity in the 3D scene can be loosened further than has previously been shown without introducing significant appearance change in an object, and that the relationship between 3D object realism and appearance can be expressed quantitatively.
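The frequency-space comparison described above can be sketched as a contrast-sensitivity-weighted FFT difference between two renderings. The CSF model, its constants, and the stand-in images below are simplified assumptions, not the dissertation's actual metric.

```python
# Sketch of a perceptual error metric: weight the FFT of the image
# difference by a contrast-sensitivity curve so perceptually salient
# frequencies dominate. CSF shape and constants are toy assumptions.
import numpy as np

def csf(f, peak=4.0):
    """Toy contrast sensitivity curve peaking near 4 cycles/degree."""
    return f * np.exp(-f / peak)

def perceptual_error(img_a, img_b, cycles_per_degree=32):
    diff = np.abs(np.fft.fftshift(np.fft.fft2(img_a - img_b)))
    h, w = diff.shape
    fy, fx = np.meshgrid(np.linspace(-1, 1, h), np.linspace(-1, 1, w),
                         indexing="ij")
    freq = np.hypot(fx, fy) * cycles_per_degree   # radial frequency map
    return float(np.sum(diff * csf(freq)) / diff.size)

rng = np.random.default_rng(0)
full_res = rng.random((64, 64))
low_res = full_res + rng.normal(0, 0.01, (64, 64))  # stand-in coarser LoD
print(perceptual_error(full_res, low_res))
```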
APA, Harvard, Vancouver, ISO, and other styles
28

Moody, Jonathan. "Genetic diversity in the processing and transcriptomic diversity in the targeting of microRNAs." Thesis, University of Edinburgh, 2017. http://hdl.handle.net/1842/25409.

Full text
Abstract:
MicroRNAs are short RNA molecules that are central to the regulation of many cellular and developmental pathways. They are processed in several stages from structured precursors in the nucleus into mature microRNAs in the cytoplasm, where they direct protein complexes to regulate gene expression through often imperfect base-pairing with target messenger RNAs. The broad aim of this project is to better understand how polymorphisms and new mutations can disrupt microRNA processing and targeting, and ultimately to define their contributions to human disease. I have taken two approaches towards this. The first approach is to comprehensively identify microRNA targets by developing and applying a novel computational pipeline that detects microRNA binding events genome-wide in RNA-RNA interaction datasets. I use this to examine the transcriptomic diversity of microRNA binding, finding microRNA binding events along the full length of protein-coding transcripts and with a variety of non-coding RNAs. This reveals enrichment for non-canonical microRNA binding at promoters and intronic regions around splice sites, and identifies highly spatially clustered binding sites within transcripts that may be acting as competitive endogenous RNAs to compete for microRNAs, effectively sequestering them. Using statistical models and new cell-fractionated RNA-seq data, I rank the features of microRNAs and their binding sites which contribute to the strength and specificity of their interaction, to provide a better understanding of the major determinants of microRNA targeting. The second approach is to directly identify DNA sequence changes in microRNA precursors that alter processing efficiency and thus mature microRNA abundance, and which are routinely overlooked in the search for disease- or trait-associated causal variants. I have systematically screened public datasets for both rare and common polymorphisms that overlap microRNA precursors and are correlated with mature microRNA levels as measured in short RNA sequencing. I use these eQTL SNPs to examine the most important microRNA precursor regions and sequence motifs. Several of these SNPs have been observed as risk factors in cancer or other clinically relevant traits, and are correlated with microRNA processing efficiency. I demonstrate that a specific DNA change which is known to be important in the development of some cancers is located in a microRNA precursor and affects the balance of its two products, miR-146a-3p and miR-146a-5p, that can be produced from that single precursor, providing new insights into the mechanisms of microRNA production and the aspects of genetic mis-regulation that result in cancer. I find further examples of common human polymorphisms that appear to affect microRNA production from their precursors; several of these variants are independently implicated in human immune disease and cancer susceptibility, and associated with other complex traits. As they exhibit a molecular phenotype and immediately lead to mechanistic hypotheses of trait causality that can be tested, these variants could provide a route into the frequently intractable problem of mechanistically linking non-coding genetic variation to human phenotypes. Applying similar studies to patient DNA has revealed rare and unique DNA changes that are now candidates for causing human disease and are being subjected to follow-up experimental studies.
Collectively, this work has started to define which sequence changes in microRNAs are likely to disrupt their function, and provides a paradigm for the analysis of microRNA sequence variants in human genetic disease.
APA, Harvard, Vancouver, ISO, and other styles
29

Zhao, Jin. "Video/Image Processing on FPGA." Digital WPI, 2015. https://digitalcommons.wpi.edu/etd-theses/503.

Full text
Abstract:
Video/image processing is a fundamental issue in computer science. It is widely used for a broad range of applications, such as weather prediction, computerized tomography (CT), and artificial intelligence (AI). Video-based advanced driver assistance systems (ADAS) have attracted great attention in recent years; they aim at helping drivers to stay concentrated when driving and at giving proper warnings if any danger is in sight. A typical ADAS includes lane departure warning, traffic sign detection, pedestrian detection, and so on. Both basic and advanced video/image processing technologies are deployed in video-based driver assistance systems. The key requirements of a driver assistance system are rapid processing time and low power consumption. We consider the Field Programmable Gate Array (FPGA) to be the most appropriate embedded platform for ADAS. Owing to its parallel architecture, an FPGA is able to perform high-speed video processing such that it can issue warnings in time and give drivers longer to respond. Besides, the cost and power consumption of modern FPGAs, particularly small FPGAs, are considerably efficient. Compared to a CPU implementation, FPGA video/image processing achieves about tens of times speedup for video-based driver assistance systems and other applications.
APA, Harvard, Vancouver, ISO, and other styles
30

Caswell, Cynthia Ann. "Relational processing theory and instructional design, designing writing resource materials for ESL teachers and students." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp01/MQ40135.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Fernandez, Noemi. "Statistical information processing for data classification." FIU Digital Commons, 1996. http://digitalcommons.fiu.edu/etd/3297.

Full text
Abstract:
This thesis introduces new algorithms for analysis and classification of multivariate data. Statistical approaches are devised for the objectives of data clustering, data classification and object recognition. An initial investigation begins with the application of fundamental pattern recognition principles. Where such fundamental principles meet their limitations, statistical and neural algorithms are integrated to augment the overall approach for an enhanced solution. This thesis provides a new dimension to the problem of classification of data as a result of the following developments: (1) application of algorithms for object classification and recognition; (2) integration of a neural network algorithm which determines the decision functions associated with the task of classification; (3) determination and use of the eigensystem using newly developed methods with the objectives of achieving optimized data clustering and data classification, and dynamic monitoring of time-varying data; and (4) use of the principal component transform to exploit the eigensystem in order to perform the important tasks of orientation-independent object recognition, and dimensionality reduction of the data such as to optimize the processing time without compromising accuracy in the analysis of this data.
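The eigensystem-based dimensionality reduction in point (4) is, in essence, the principal component transform. A minimal sketch, using random stand-in data rather than the thesis's datasets:

```python
# Minimal principal-component sketch: project centred data onto the
# leading eigenvectors of its covariance matrix to reduce dimensionality
# before classification. The data here is random stand-in data.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 10))            # 200 samples, 10 features
Xc = X - X.mean(axis=0)                   # centre the data
cov = np.cov(Xc, rowvar=False)            # 10 x 10 covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)    # eigenvalues in ascending order
top2 = eigvecs[:, -2:]                    # two leading principal axes
X_reduced = Xc @ top2                     # (200, 2) projection
print(X_reduced.shape)
```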
APA, Harvard, Vancouver, ISO, and other styles
32

Alaql, Abdulrahman Mohammad. "Analysis and Processing of Human Electroretinogram." Scholar Commons, 2016. http://scholarcommons.usf.edu/etd/6059.

Full text
Abstract:
The electroretinogram (ERG) is the recording of the electrical activity of retinal cells elicited by light stimulation, which has been widely used to help diagnose different types of retinal dysfunction. The ERG response signal is a short non-stationary signal that contains overlapping components. Different Digital Signal Processing (DSP) techniques are investigated using MATLAB to study the time-frequency responses of the ERG signal, such as the Short Time Fourier Transform (STFT), Continuous Wavelet Transform (CWT) and Discrete Wavelet Transform (DWT). The photopic ERG signal was processed and analyzed in this thesis and the results of each technique have been investigated in detail. The photopic ERG components have been extracted using the DWT and ERG models.
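The DWT decomposition named above can be illustrated in a few lines with the PyWavelets package. The synthetic waveform, sampling rate and wavelet choice ('db4', 4 levels) below are assumptions for illustration, not the thesis's settings.

```python
# Illustrative DWT of a synthetic "ERG-like" waveform: a crude a-wave
# trough followed by a b-wave peak plus noise. Signal shape, sampling
# rate and wavelet choice are assumptions, not the thesis's settings.
import numpy as np
import pywt

fs = 1000                                      # sampling rate, Hz (assumed)
t = np.arange(0, 0.25, 1 / fs)                 # 250 ms sweep
a_wave = -np.exp(-((t - 0.02) / 0.005) ** 2)   # crude a-wave trough
b_wave = np.exp(-((t - 0.05) / 0.010) ** 2)    # crude b-wave peak
noise = 0.05 * np.random.default_rng(2).normal(size=t.size)
signal = a_wave + b_wave + noise

# Multilevel DWT: approximation cA4 plus detail bands cD4..cD1.
coeffs = pywt.wavedec(signal, "db4", level=4)
for name, c in zip(["cA4", "cD4", "cD3", "cD2", "cD1"], coeffs):
    print(name, len(c))
```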
APA, Harvard, Vancouver, ISO, and other styles
33

Rajaram, Srinath. "Multilayer Nanomagnetic Systems for Information Processing." Scholar Commons, 2014. http://scholarcommons.usf.edu/etd/5109.

Full text
Abstract:
The Spin-Transfer Torque Magnetoresistive Random Access Memory (STT-MRAM) has opened new doors as an emerging technology with high potential to replace traditional CMOS-based memory technology. This has come true due to the density, speed and non-volatility that have been demonstrated. The STT-MRAM uses Magnetic Tunnel Junction (MTJ) elements as non-volatile memory storage devices because of the recent discovery of the spin-torque phenomenon for switching magnetization states. The magnetization of the free layer in STT-MRAM can be switched from logic "1" to logic "0" by the use of a spin-transfer torque. However, STT-MRAMs have until now only been used as universal memory. As a result, STT-MRAMs are not yet commercially used as computing elements, though they have the potential to be used in Logic-In-Memory computation applications. In order to advance STT-MRAM technology for computation, we have used different MRAM devices that are available as memory elements, with different geometries, as computing elements. This dissertation presents the design and implementation of such devices using different multilayer magnetic material stacks for computation. Currently, the design of STT-MRAMs is limited to memory architectures only, and there have been no proposals on the viability of STT-MRAMs as computational devices. In the present work, we have developed a design which could be implemented for universal logic computation. We have utilized the majority gate architecture, which uses the magnetostatic interaction between the free layers of the multilayer nanomagnets to perform computation. Furthermore, the present work demonstrates a study of the dipolar interaction between nanomagnetic disks, where we observed multiple magnetization states for a nanomagnetic disk with respect to its interaction energy with its neighboring nanomagnets. This was achieved by implementing a single-layer nanomagnetic disk with a critical dimension selected from the phase plot of the single domain state (SDS) and vortex state (VS). In addition, we found that when the interaction energy between the nanomagnetic disks with critical dimension decreases (i.e. the center-to-center distance increases), the magnetization state of the nanomagnetic disks changes from single domain state to vortex state within the same dimension. We were able to observe this effect due to the interaction between the neighboring nanomagnets. Finally, we have presented the design and implementation of a Spin-Torque driven Reconfigurable Array of Nanomagnets (STRAN) that could perform Boolean and non-Boolean computation. The nanomagnets are located at every intersection of a very large crossbar array structure. We have placed these nanomagnets in such a way that the ferromagnetic free layers couple with each other. The reconfigurable array design consists of an in-plane (IP) free layer and a fixed polarizer (magnetized out-of-plane, OP). The cells that need to be deselected from the array are taken to a non-computing oscillating state.
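A three-input majority gate of the kind this architecture computes with can be described behaviorally in a few lines; the magnetics are omitted, and the sketch only shows the logic that the coupled free layers are meant to realize.

```python
# Behavioral sketch of a three-input majority gate: the output settles
# to the state of the majority of its three coupled neighbours. The
# magnetostatic physics is omitted; this is only the target logic.
def majority(a: int, b: int, c: int) -> int:
    return (a & b) | (b & c) | (a & c)

# Majority logic subsumes AND/OR by fixing one input as a control bit.
AND = lambda a, b: majority(a, b, 0)
OR = lambda a, b: majority(a, b, 1)
print(majority(1, 0, 1), AND(1, 1), OR(0, 0))   # -> 1 1 0
```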
APA, Harvard, Vancouver, ISO, and other styles
34

Tang, Xianmin. "Low damage processing and process characterization." W&M ScholarWorks, 2000. https://scholarworks.wm.edu/etd/1539623979.

Full text
Abstract:
Two novel plasma sources (one neutral source and one pulsed inductively coupled plasma source) and ashing process characterization were investigated. The primary goal was to characterize these source properties and develop corresponding applications. The study includes process damage assessment with these two sources and another continuous-wave (13.56 MHz) plasma source. A global average simulation of the pulsed discharges was also included. The transient plasma density and electron temperature from the double-probe analysis were compared with single Langmuir probe results with sheath displacement corrections in pulsed discharges (200 Hz to 10 kHz). The equivalent resistance method can be used effectively to analyze these double-probe data. The transient behaviors of the plasma density and electron temperature are in accord with the model of the discharge. The hyper-thermal neutral source, based on surface reflection neutralization techniques, was shown to provide enough fast neutrals for ashing applications. The surface roughness of the post-cleaned wafer was less than 10 Å. Ex-situ and in-situ measurements yield typical removal rates of about 10 Å/s without stream collimation. The removal rates at increasing pressures show a trade-off between creating a higher-density plasma, which leads to a large initial neutral flux, and the attenuation of neutrals due to collisions. Changing the reflector plate changes the neutral energy without changing the discharge composition. A novel technique combining momentum and heat-flux measurements shows that the neutral stream energy is 3--6 eV and the neutral flux is on the order of 10^15 cm^-2 s^-1. Etch rates derived from the measured neutral flux and energy agree well with the experimental rates. Quasi-static capacitance-voltage measurements demonstrate that the low-energy neutral source induces much less damage than other plasma sources. Most of the neutral process damage is caused by UV photons escaping from the plasma source zone. The process-induced damage varies with the reflector bias and RF power.
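As a hedged sketch of the equivalent-resistance double-probe analysis mentioned above (the standard textbook relation, not the author's code), the example below recovers the electron temperature from a synthetic symmetric double-probe I-V trace using Te[eV] = I_isat / (2 dI/dV at V = 0); all numbers are invented.

    # Electron temperature from a symmetric double-probe I-V characteristic.
    import numpy as np

    def electron_temperature_eV(V, I):
        I_isat = 0.5 * (abs(I.min()) + abs(I.max()))   # ion saturation current
        near_zero = np.abs(V) < 0.5                    # fit the slope near V = 0
        slope, _ = np.polyfit(V[near_zero], I[near_zero], 1)
        return I_isat / (2.0 * slope)                  # Te in eV

    Te_true = 3.0                                      # eV, assumed
    V = np.linspace(-20, 20, 401)                      # volts
    I = 1e-3 * np.tanh(V / (2 * Te_true))              # synthetic trace, I_isat = 1 mA
    print(f"Te ~ {electron_temperature_eV(V, I):.2f} eV")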
APA, Harvard, Vancouver, ISO, and other styles
35

Franzel, Louis. "Modification of Nanostructures via Laser Processing." VCU Scholars Compass, 2013. http://scholarscompass.vcu.edu/etd/3024.

Full text
Abstract:
Modification of nanostructures via laser processing is of great interest for a wide range of applications such as aerospace and the storage of nuclear waste. The primary goal of this dissertation is to improve the understanding of nanostructures through two primary routes: the modification of aerogels and pulsed laser ablation in ethanol. A new class of materials, patterned aerogels, was fabricated by photopolymerizing selected regions of homogeneous aerogel monoliths using visible light. The characterization and fabrication of functionally graded, cellular, and compositionally anisotropic aerogels and ceramics are discussed. Visible light was utilized due to its minimal absorption and scattering by organic molecules and oxide nanoparticles within wet gels, which allowed the fabrication of deeply penetrating, well-resolved patterns. Similarly, nanoporous monoliths with a typical aerogel core and a mechanically robust exterior ceramic layer were synthesized from silica aerogels cross-linked with polyacrylonitrile. Simple variations of the exposure geometry allowed fabrication of a wide variety of anisotropic materials without requiring layering or bonding. Nanoparticle solutions were prepared by laser ablation of metal foils (Fe and Mo) in ethanol. Ablation of Fe generated Fe3O4 and Fe3C nanoparticles, which were superparamagnetic with a saturation magnetization Ms = 124 emu/g. Zero-field-cooled (ZFC) measurements collected at an applied field of 50 Oe displayed a maximum magnetic susceptibility at 120 K with a broad distribution. Field-cooled (FC) measurements showed a thermal hysteresis indicative of temperature-dependent magnetic viscosity. Pulsed laser ablation of a Mo foil in ethanol generated inhomogeneous nanoparticles in which Mo and MoC coexisted within the same aggregate. Formation of these unique nanoparticles is likely due to phase separation that occurs when a high-temperature carbide phase cools after the laser pulse terminates. Similarly, magnetic nanoparticle suspensions were generated by pulsed laser ablation of Fe and Mo in ethanol, forming several carbide phases with no discernible alloy formation. Magnetization decreased with decreasing Fe concentration, consistent with the reduced Fe content of the system. However, at Fe concentrations below ~40%, an increase in Ms and Hc was observed, which was attributed to the disappearance of the ε-Fe3C phase. TEM analysis showed the formation of core-shell nanoparticles, and Energy Filtered TEM showed the distribution of Fe-based nanoparticles in the suspensions.
APA, Harvard, Vancouver, ISO, and other styles
36

Schultz, Steven Peter. "Attentional Window and Global/Local Processing." Scholar Commons, 2016. http://scholarcommons.usf.edu/etd/6383.

Full text
Abstract:
How does the focus of attention influence the encoding of information? Research has shown that the size and allocation of the attentional window influence what information is attended to or missed. The size scale of features also affects the processing of visual information: previous research involving hierarchical stimuli suggests a precedence for global features. In the present experiment, I investigated the influence of attentional window size on the accuracy of encoding hierarchical stimuli at the global and local levels, introducing a new method for manipulating the size of the attentional window and for collecting unconstrained responses. At the start of each trial, observers tracked a dashed-line rectangular box, which either broadened or narrowed in size after onset. This sequence was immediately followed by a brief presentation of two hierarchical letters presented simultaneously on the left and right sides of the screen. The box preceding the hierarchical letters either broadened to a size large enough to include both letters at the global level, or narrowed to a size small enough to include at most two letters at the local level on either side of the screen. Observers reported all letters they were able to identify. Results from two experiments indicate an overall precedence of global letters. However, a narrow attentional window reduced global precedence, as would be expected with more focused attention. The narrow windows also produced more same-side identifications of both global and local letters. The second experiment also showed that reducing the processing time decreased the global advantage.
APA, Harvard, Vancouver, ISO, and other styles
37

Nash, Michelle. "Menstrual Cycle and Visual Information Processing." BYU ScholarsArchive, 2008. https://scholarsarchive.byu.edu/etd/1966.

Full text
Abstract:
This project examined the effects menstruation may have on visual attention in women. A recent study examined electroencephalographic (EEG) gender differences using a visual object recognition task; results indicated that certain EEG amplitudes (specifically, P300 and N400) are greater in women than in men. This study extended those findings to determine whether these increased EEG amplitudes vary across menstrual phases. Eighteen female participants completed a series of three EEG recording sessions using the same visual object recognition task from the previous study; 18 male participants completed the task once. Analyses from 15 of the 18 female and 16 of the 18 male participants support the previous finding of larger P300 amplitudes in response to relevant stimuli for women compared with men. While there was no distinctive N400 component in this study, there was a late negative (LN) component that varied significantly between men and women. In addition, multiple visual evoked potential (VEP) components varied significantly across the menstrual cycle. In particular, the N200 component appeared to provide greater differences between menstrual phases than either the P300 or LN components; however, the results varied greatly by head location. The differentiation found with VEP components in response to the pop-out task used in this study provides support for basic visual processing variation across the menstrual cycle and between genders.
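As a generic illustration of how such evoked-potential components are typically quantified (the dissertation's exact pipeline is not given here), the sketch below computes mean ERP amplitudes in assumed latency windows from epoched data; the windows, sampling rate, and noise-only placeholder data are all assumptions.

    # Mean-amplitude measurement of ERP components from epoched EEG (sketch).
    import numpy as np

    fs = 250                                   # Hz, assumed sampling rate
    rng = np.random.default_rng(0)
    epochs = rng.normal(0, 5, (60, fs))        # 60 trials x 1 s, stimulus at t = 0

    def mean_amplitude(epochs, fs, t_start, t_end):
        erp = epochs.mean(axis=0)              # average across trials
        i0, i1 = int(t_start * fs), int(t_end * fs)
        return erp[i0:i1].mean()               # component amplitude (arb. units)

    p300 = mean_amplitude(epochs, fs, 0.25, 0.50)   # assumed P300 window
    n200 = mean_amplitude(epochs, fs, 0.18, 0.25)   # assumed N200 window
    print(f"P300: {p300:.2f}, N200: {n200:.2f}")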
APA, Harvard, Vancouver, ISO, and other styles
38

Geng, Zhe. "Radar Signal Processing for Interference Mitigation." FIU Digital Commons, 2018. https://digitalcommons.fiu.edu/etd/3571.

Full text
Abstract:
It is necessary for radars to suppress interference to near the noise level to achieve the best performance in target detection and measurement. In this dissertation work, innovative signal processing approaches are proposed to effectively mitigate two of the most common types of interference: jammers and clutter. Two types of radar systems are considered for developing new signal processing algorithms: phased-array radar and multiple-input multiple-output (MIMO) radar. For phased-array radar, an innovative target-clutter feature-based recognition approach termed Beam-Doppler Image Feature Recognition (BDIFR) is proposed to detect moving targets in inhomogeneous clutter. Moreover, a new ground moving target detection algorithm is proposed for airborne radar. The essence of this algorithm is to compensate for the ground clutter Doppler shift caused by the moving platform and then to cancel the Doppler-compensated clutter using MTI filters that are commonly used in ground-based radar systems. Without the need for clutter estimation, the new algorithms outperform the conventional Space-Time Adaptive Processing (STAP) algorithm in ground moving target detection in inhomogeneous clutter. For MIMO radar, a time-efficient reduced-dimensional clutter suppression algorithm termed Reduced-dimension Space-Time Adaptive Processing (RSTAP) is proposed to minimize the number of training samples required for clutter estimation. To deal with highly heterogeneous clutter more effectively, we also propose a robust deterministic STAP algorithm operating on a snapshot-to-snapshot basis. For cancelling jammers in the radar mainlobe direction, an innovative jamming elimination approach is proposed based on coherent MIMO radar adaptive beamforming. When combined with mutual information (MI) based cognitive radar transmit waveform design, this new approach can be used to enable effective spectrum sharing between radar and wireless communication systems. The proposed interference mitigation approaches are validated by carrying out simulations for typical radar operation scenarios. The advantages of the proposed methods over existing signal processing techniques are demonstrated both analytically and empirically.
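For background on the adaptive beamforming shared by STAP and the proposed mainlobe jammer cancellation, here is a textbook MVDR sketch (not the dissertation's algorithms): the weights w = R^-1 a / (a^H R^-1 a) null a strong jammer while keeping unit gain toward the target; the array size and angles are arbitrary choices.

    # MVDR adaptive beamformer nulling a jammer (textbook sketch).
    import numpy as np

    def steering_vector(n_elements, theta_rad, d_over_lambda=0.5):
        n = np.arange(n_elements)
        return np.exp(2j * np.pi * d_over_lambda * n * np.sin(theta_rad))

    N = 8
    a_target = steering_vector(N, np.deg2rad(0.0))
    a_jam = steering_vector(N, np.deg2rad(30.0))

    # Interference-plus-noise covariance: strong jammer over a unit noise floor
    R = 1000.0 * np.outer(a_jam, a_jam.conj()) + np.eye(N)

    Rinv_a = np.linalg.solve(R, a_target)
    w = Rinv_a / (a_target.conj() @ Rinv_a)        # MVDR weights

    print("gain toward target:", abs(w.conj() @ a_target))   # ~1
    print("gain toward jammer:", abs(w.conj() @ a_jam))      # ~0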
APA, Harvard, Vancouver, ISO, and other styles
39

Hecht, Lauren Nicole. "Temporal processing of figures and grounds." Diss., University of Iowa, 2009. https://ir.uiowa.edu/etd/298.

Full text
Abstract:
Research on figure-ground organization has focused primarily on identifying cues that establish regions as figure or ground. Recently, others have demonstrated behavioral consequences of figure-ground assignment, including speeded responses and higher accuracy for figures; for example, spatial resolution is enhanced for figures relative to grounds. However, the consequences of figure-ground assignment can extend beyond spatial processing to other domains, including temporal processing. To investigate the consequences of figure-ground assignment for temporal processing, I first examined whether targets could be perceived as appearing temporally earlier on figures than on grounds (i.e., a prior entry effect). My results suggest that figural regions are available to perceptual-level processes sooner than grounds. Upon confirming a prior-entry-like effect for figures, I then examined other temporal processing differences between figures and grounds. Specifically, I demonstrated that targets presented on figures are perceived as offsetting later than targets appearing on grounds, suggesting that figures receive extended perceptual-level processing relative to grounds. I also found that this extended processing of figures degrades temporal resolution compared to ground regions. Finally, I present a computational model that captures the temporal processing effects of figure-ground assignment, demonstrating that these effects can arise from a single architecture.
APA, Harvard, Vancouver, ISO, and other styles
40

Ding, Luping. "Metadata-aware query processing over data streams." Worcester, Mass. : Worcester Polytechnic Institute, 2008. http://www.wpi.edu/Pubs/ETD/Available/etd-042208-194826/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Ruzindana, Mark William. "Digital Signal Processing Algorithms Implemented on Graphics Processing Units and Software Development for Phased Array Receiver Systems." BYU ScholarsArchive, 2021. https://scholarsarchive.byu.edu/etd/8987.

Full text
Abstract:
Phased array receivers are a set of antenna elements capable of forming multiple simultaneous beams over a field of view. In radio astronomy, the study of deep-space radio sources, a phased array feed (PAF) is placed at the focus of a large dish telescope, where it spatially samples the focal plane. PAFs provide an increase in the field of view compared to the traditional single-pixel horn feed, thereby increasing survey speed while maintaining high sensitivity. Phased arrays are also capable of radio frequency interference (RFI) mitigation, which is useful in both radio astronomy and wireless communications when detecting signals in the presence of interferers such as satellites. Digital signal processing algorithms are used to process and analyze the data provided by phased array receivers. During the commissioning of the Focal-plane L-band Array feed for the Green Bank Telescope (FLAG), sensitivity consistent with an equivalent system temperature below 18 K was measured. To demonstrate the astronomical capability of the receiver, a pulsar (PSR B2011+38) was detected, and an HI source (NGC4258) was mapped with the real-time beamformer and the fine-channel correlator, respectively. This work also details improvements made to the software of the FLAG digital backend, such as the design and implementation of an algorithm to remove scalloping ripple from the spectrum of two cascaded polyphase filter banks (PFBs), and provides a brief introduction to a model-based beam interpolation algorithm capable of increasing the spatial resolution of radio source maps while reducing time spent performing calibration. The development of a phased array receiver digital backend for the Office of Naval Research (ONR) is also detailed. This broadband system will be capable of communication in hostile, RFI-rich environments with the aid of a real-time RFI mitigation algorithm currently implemented in software. The algorithm is compatible with other PAF receiver systems and enables RFI mitigation in other applications such as radio astronomy. This work provides details on the implementation of this algorithm and on the development and modification of other system software; full system tests of the 150 MHz bandwidth receiver have been conducted and are presented in this document.
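As a generic sketch of the critically sampled polyphase filter bank behind the scalloping-ripple discussion (not the FLAG backend code), the example below folds a windowed segment across taps before the FFT; the channel count, tap count, and prototype window are illustrative.

    # Critically sampled polyphase filter bank (PFB) channelizer sketch.
    import numpy as np

    def pfb_spectra(x, n_chan=32, n_taps=4):
        # Prototype filter: sinc low-pass shaped by a Hamming window
        n = np.arange(n_chan * n_taps)
        win = np.hamming(n_chan * n_taps) * np.sinc(n / n_chan - n_taps / 2)
        spectra = []
        for start in range(0, len(x) - len(win) + 1, n_chan):
            seg = x[start:start + len(win)] * win
            folded = seg.reshape(n_taps, n_chan).sum(axis=0)   # polyphase fold
            spectra.append(np.abs(np.fft.fft(folded)) ** 2)
        return np.array(spectra)

    rng = np.random.default_rng(1)
    print(pfb_spectra(rng.normal(size=16384)).shape)   # (blocks, 32)

The flat-topped channel response of the PFB is what suppresses the scalloping ripple that a plain FFT spectrometer shows between bin centers.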
APA, Harvard, Vancouver, ISO, and other styles
42

Ghauri, Farzan Naseer. "Hybrid Photonic Signal Processing." Doctoral diss., University of Central Florida, 2007. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/3233.

Full text
Abstract:
This thesis proposes research on novel hybrid photonic signal processing systems in the areas of optical communications, test and measurement, RF signal processing, and extreme-environment optical sensors. It is shown that innovative hybrid techniques allow the design of photonic signal processing systems with superior performance parameters and enhanced capabilities. These applications can be divided into the domains of analog-digital hybrid signal processing and free-space--fiber-coupled hybrid optical sensors. The analog-digital hybrid signal processing applications include a high-performance analog-digital hybrid MEMS variable optical attenuator that simultaneously provides high dynamic range and high-resolution attenuation control; an analog-digital hybrid MEMS beam profiler that allows high-power, watt-level laser beam profiling and provides both submicron-level resolution and wide-area profiling coverage; and all-optical transversal RF filters that operate on the principle of broadband optical spectral control using MEMS and/or Acousto-Optic Tunable Filter (AOTF) devices, which can provide continuous, digital, or hybrid signal time delay and weight selection. The hybrid optical sensors presented in the thesis are extreme-environment pressure sensors and dual temperature-pressure sensors. The sensors employ hybrid free-space and fiber-coupled techniques for remotely monitoring a system under simultaneously extreme temperatures and pressures.
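For intuition about the transversal RF filters described here (a generic tapped-delay-line FIR sketch, not the photonic hardware), the example below builds an impulse response from tap weights and a uniform delay and shows its periodic passband comb; the weights and spacing are arbitrary.

    # Transversal (tapped-delay-line) filter: weights + delays -> FIR response.
    import numpy as np

    taps = np.array([0.2, 0.5, 1.0, 0.5, 0.2])    # optically set tap weights
    delay = 8                                      # samples between taps

    h = np.zeros((len(taps) - 1) * delay + 1)      # sparse impulse response
    h[::delay] = taps

    H = np.abs(np.fft.rfft(h, 1024))               # magnitude response
    fsr = 1024 // delay                            # free spectral range, in bins
    print([round(H[k], 2) for k in range(0, 513, fsr)])   # equal comb peaks

Because the taps are uniformly spaced, the response repeats with a free spectral range of one over the unit delay, which is exactly what broadband optical control of the tap weights is used to shape.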
APA, Harvard, Vancouver, ISO, and other styles
43

Sullivan, Kelly Daniel Marzluff William F. "Cotranscriptional processing of histone pre-mRNA." Chapel Hill, N.C. : University of North Carolina at Chapel Hill, 2008. http://dc.lib.unc.edu/u?/etd,2870.

Full text
Abstract:
Thesis (Ph. D.)--University of North Carolina at Chapel Hill, 2009. Title from electronic title page (viewed Jun. 4, 2010). "... in partial fulfillment of the requirements for the degree of Doctor of Philosophy in the Department of Biochemistry and Biophysics." Discipline: Biochemistry and Biophysics; Department/School: Medicine.
APA, Harvard, Vancouver, ISO, and other styles
44

Liu, Lukai. "A Study of Myoelectric Signal Processing." Digital WPI, 2016. https://digitalcommons.wpi.edu/etd-dissertations/34.

Full text
Abstract:
This dissertation on various aspects of electromyogram (EMG; muscle electrical activity) signal processing comprises two projects in which I was the lead investigator and two team projects in which I participated. The first investigator-led project was a study of reconstructing continuous EMG discharge rates from neural impulses. Related methods for calculating neural firing rates in other contexts were adapted and applied to the intramuscular motor unit action potential train firing rate. Statistical results based on simulation and clinical data suggest that the performance of spline-based methods is superior to that of conventional filter-based methods in the absence of decomposition error, but it degrades unacceptably in the presence of even the smallest decomposition errors present in real EMG data, which are typically around 3-5%. Optimal parameters for each method are found and, at normal decomposition error rates, the methods are ranked with their optimal parameters. Overall, Hanning filtering and the Berger method exhibit consistent and significant advantages over other methods. In the second investigator-led project, the technique of signal whitening was applied prior to motion classification of upper-limb surface EMG signals previously collected from the forearm muscles of intact and amputee subjects. The motions classified consisted of 11 hand and wrist actions pertaining to prosthesis control. Theoretical models and experimental data showed that whitening increased EMG signal bandwidth by 65-75% and reduced the coefficients of variation of temporal features computed from the EMG. As a result, a consistent classification accuracy improvement of 3-5% was observed for all subjects at small analysis durations (< 100 ms). In the first team-based project, advanced modeling methods for the constant-posture EMG-torque relationship about the elbow were studied: whitened and multi-channel EMG signals, training set duration, regularized model parameter estimation, and nonlinear models. Combined, these methods reduced error to less than a quarter of that of standard techniques. In the second team-based project, a study related biceps-triceps surface EMG to elbow torque at seven joint angles during constant-posture contractions. Models accounting for co-contraction estimated that individual flexion muscle torques were much higher than those from models that did not account for co-contraction.
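As a minimal sketch of the filter-based firing-rate reconstruction compared in the first project (the Hanning-filter method; the firing statistics and window length are invented), the example below convolves a simulated motor-unit impulse train with a unit-area Hanning window to obtain a smooth discharge-rate estimate.

    # Firing-rate reconstruction by Hanning filtering of an impulse train.
    import numpy as np

    fs = 1000                                              # Hz, assumed sampling rate
    rng = np.random.default_rng(2)
    isi = np.maximum(rng.normal(1 / 12, 0.01, 60), 0.02)   # ~12 pps with jitter
    spike_times = np.cumsum(isi)                           # seconds

    train = np.zeros(int(spike_times[-1] * fs) + 1)
    train[(spike_times * fs).astype(int)] = fs             # unit-area impulses

    win = np.hanning(int(0.4 * fs))                        # 400 ms Hanning window
    win /= win.sum()                                       # unit area -> pulses/s
    rate = np.convolve(train, win, mode='same')
    print(f"mean reconstructed rate ~ {rate.mean():.1f} pulses/s")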
APA, Harvard, Vancouver, ISO, and other styles
45

Lei, Chuan. "Recurring Query Processing on Big Data." Digital WPI, 2015. https://digitalcommons.wpi.edu/etd-dissertations/550.

Full text
Abstract:
The advances in hardware, software, and networks have enabled applications from business enterprises, scientific and engineering disciplines, and social networks to generate data at a volume, variety, velocity, and veracity not possible before. Innovation in these domains is now hindered by the ability to analyze and discover knowledge from the collected data in a timely and scalable fashion. To facilitate such large-scale big data analytics, the MapReduce computing paradigm and its open-source implementation Hadoop is one of the most popular and widely used technologies. Hadoop's success as a competitor to traditional parallel database systems lies in its simplicity, ease of use, flexibility, automatic fault tolerance, superior scalability, and cost effectiveness due to its use of inexpensive commodity hardware that can scale to petabytes of data over thousands of machines. Recurring queries, repeatedly executed for long periods of time over rapidly evolving high-volume data, have become a bedrock component of most of these analytic applications, and efficient execution and optimization techniques must be designed to ensure their responsiveness and scalability. This dissertation thoroughly investigates topics in the area of recurring query processing on big data. We first propose a novel scalable infrastructure called Redoop that treats recurring queries over big evolving data as first-class citizens during query processing. This is in contrast to state-of-the-art MapReduce/Hadoop systems, which experience significant challenges when faced with recurring queries, including redundant computations, significant latencies, and large application development efforts. Redoop offers innovative window-aware optimization techniques for recurring query execution, including adaptive window-aware data partitioning, window-aware task scheduling, and inter-window caching mechanisms. Redoop retains the fault tolerance of MapReduce via automatic cache recovery and task re-execution support as well. Second, we address the crucial need to accommodate hundreds or even thousands of recurring analytics queries that periodically execute over frequently updated data sets, e.g., the latest stock transactions, new log files, or recent news feeds. For many applications, such recurring queries come with user-specified service-level agreements (SLAs), commonly expressed as the maximum allowed latency for producing results before their merits decay. On top of Redoop, we built a scalable multi-query sharing engine tailored for recurring workloads in the MapReduce infrastructure, called Helix. Helix deploys new sliced window-alignment techniques to create sharing opportunities among recurring queries without introducing additional I/O overheads or unnecessary data scans. Furthermore, Helix introduces a cost/benefit model for creating a sharing plan among the recurring queries, and a scheduling strategy for executing them to maximize SLA satisfaction. Third, recurring analytics queries tend to be expensive, especially when query processing consumes data sets in the hundreds of terabytes or more. Time-sensitive recurring queries, such as fraud detection, often come with tight response-time constraints as query deadlines. Data sampling is a popular technique for computing approximate results with an acceptable error bound while reducing high-demand resource consumption and thus improving query turnaround times.
We therefore propose the first fast approximate query engine for recurring workloads in the MapReduce infrastructure, called Faro. Faro introduces two key innovations: (1) a deadline-aware sampling strategy that builds samples from the original data with reduced sample sizes compared to uniform sampling, and (2) adaptive resource allocation strategies that maximally improve the approximate results while still meeting the response-time requirements specified in recurring queries. In a comprehensive experimental study of each part of this dissertation, we demonstrate the superiority of the proposed strategies over state-of-the-art techniques in scalability, effectiveness, and robustness.
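To make the sampling trade-off concrete, here is a textbook Hoeffding-bound calculation (an assumption-level sketch, not Faro's actual deadline-aware cost model): it gives the smallest sample size whose mean meets a chosen error bound and confidence for bounded values.

    # Sample size for an approximate mean under a Hoeffding-style bound.
    import math

    def sample_size(eps, delta, value_range):
        """Rows so that P(|sample mean - true mean| > eps) <= delta."""
        return math.ceil(value_range ** 2 * math.log(2 / delta) / (2 * eps ** 2))

    # e.g., values in [0, 100], +-0.5 error at 99% confidence:
    print(sample_size(eps=0.5, delta=0.01, value_range=100))

A deadline-aware strategy can invert this relation: given the rows per second the cluster can scan before the deadline, it chooses the largest affordable sample and reports the implied error bound.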
APA, Harvard, Vancouver, ISO, and other styles
46

Sun, Ning. "Friction Stir Processing of Aluminum Alloys." Digital WPI, 2012. https://digitalcommons.wpi.edu/etd-dissertations/552.

Full text
Abstract:
Friction stir processing (FSP) has been developed from the basic principles of friction stir welding (FSW), a solid-state joining process originally developed for aluminum alloys. What is attractive about FSP is that it can be incorporated into the overall manufacturing cycle as a post-processing step during the machining operation to provide localized modification and control of microstructures in the near-surface layers of metallic components. FSP has emerged as an important post-processing technique and has been identified as a potentially high-impact, perhaps disruptive, manufacturing process. In this study, FSP was applied to the Al cast alloy A206, a high-strength cast alloy widely used in the manufacturing industry. The motivations behind this work are (1) to investigate the feasibility of FSP for manipulating the cast microstructure and strengthening the material, and (2) to explore the viability of FSP for producing a localized particle-reinforced zone in cast A206 aluminum components. The thesis shows that we have optimized FSP for processing Al alloys to locally manipulate the cast microstructure, eliminate casting defects, and attain grain refinement and second-phase homogenization. We have established the mechanism leading to the microstructure evolution and have evaluated the resultant mechanical properties, i.e., hardness, tensile properties, and fatigue properties. We have also synthesized a localized composite material in the A206 workpiece with three different reinforcement materials via FSP. These results are presented and discussed.
APA, Harvard, Vancouver, ISO, and other styles
47

Holmes, Jeanne Joanne. "Office occupations/word processing curriculum guide." CSUSB ScholarWorks, 1997. https://scholarworks.lib.csusb.edu/etd-project/1545.

Full text
Abstract:
The purpose of this project is to design a new course outline for the Office Occupations/Word Processing course for high school and adult students. Specifically, the course outline will serve students who are at least 16 years old. The content of the course consists of interpersonal and communication skills, hardware/software management, text editing concepts and skills, clerical skills, job search skills, data entry and database, information processing, and telecommunication skills.
APA, Harvard, Vancouver, ISO, and other styles
48

Wang, Min. "The development of spelling and its relationship to decoding and phonological processing in Chinese ESL children." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape3/PQDD_0032/NQ53755.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Kato, Shigeo. "Interaction between processing and storage in L2 reading : evidence for developmental changes in Japanese ESL learners." Thesis, University of Essex, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.272581.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Samuelsen, Chad L. "Chemosensory processing in the amygdala." Tallahassee, Florida : Florida State University, 2009. http://etd.lib.fsu.edu/theses/available/etd-09212009-161414/.

Full text
Abstract:
Thesis (Ph. D.)--Florida State University, 2009. Advisor: Michael Meredith, Florida State University, College of Arts and Sciences, Dept. of Biological Science. Title and description from dissertation home page (viewed on May 4, 2010). Document formatted into pages; contains xv, 131 pages. Includes bibliographical references.
APA, Harvard, Vancouver, ISO, and other styles