
Dissertations / Theses on the topic 'Word processing. Information technology'


Consult the top 50 dissertations / theses for your research on the topic 'Word processing. Information technology.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Carter, Simon Matthew James. "Changing complex documents." St. Lucia, Qld, 2001. http://www.library.uq.edu.au/pdfserve.php?image=thesisabs/absthe16479.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Beauvais, Erik Alexander Maoui. "An investigation of a framework to evaluate computer supported collaborative work." Thesis, Rhodes University, 1999. http://eprints.ru.ac.za/1383/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Howell, Gwyneth Veronica James. "Using the informational processing paradigm to design commercial rumour response strategies on the World Wide Web." UWA Business School, 2006. http://theses.library.uwa.edu.au/adt-WU2007.0024.

Full text
Abstract:
[Truncated abstract] Rumours can lead to unpredictable events: the manner in which an organisation responds to a commercial rumour can alter its reputation, and can affect its profitability as well as, ultimately, its survival. Commercial rumours are now a prominent feature of the business environment. They can emerge from organisational change, pending workforce layoffs, mergers, and changes to management. In addition, commercial rumours can lower morale and undermine productivity. There are several well-known examples of commercial rumours that have been, or continue to be, circulated. Commercial rumours are typically about either a conspiracy or a contamination issue. Conspiracy rumours usually target organisational practices or policies which stakeholders identify as undesirable. This form of rumour is often precipitated by situations where people do not have all the information about a situation, for example the rumour about Procter & Gamble being run by the Moonies. Snapple, the soft drink company, was rumoured in 1992 to be supporting the Ku Klux Klan in closing abortion clinics. Contamination rumours are wide-ranging and typically have a revulsion theme, such as McDonald’s "worms in the burger", Pop Rocks candies which exploded in the stomach, and poison in Herron’s paracetamol . . . Marketers suggest that web sites represent the future of marketing communications on the Internet. The key implication of this study for organisations is that, when faced with a negative rumour, specific and selected Web pages can be used to manage how the company’s stakeholders recall the rumour, and organisational stakeholders can be persuaded by the company’s rumour response strategies.
APA, Harvard, Vancouver, ISO, and other styles
4

Newton, Sandra Kay. "The information technology professional's psychological contract viewed through their employment arrangement and the relationship to organizational behaviors." [Tampa, Fla] : University of South Florida, 2006. http://purl.fcla.edu/usf/dc/et/SFE0001416.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Garcia, Natanya. "Anti-circumvention technology legislation in Canada : drafting a new law in the wake of the DMCA." Thesis, McGill University, 2003. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=19625.

Full text
Abstract:
In becoming a signatory to the World Intellectual Property Organization (WIPO) Treaties, Canada has undertaken the obligation to provide protection against the circumvention of technological measures designed to protect copyright works. While on its face the obligation appears simple, in reality it brings about an intersection of policy, law and technology; a complex situation with far-reaching repercussions. The U.S., a co-signatory to the WIPO Treaties, responded to this tension by enacting the Digital Millennium Copyright Act (DMCA), which heavily regulated circumvention technology and garnered widespread criticism. Critics labeled the law unpredictable and overbroad legislation which has chilled free speech, violated fair use, stifled research and study, and encouraged monopolies by eliminating competition. Drawing largely on the U.S. experience, this thesis aims to suggest a possible route for Canada to take when fulfilling its own obligations under the WIPO. It will begin with a review of the relevant provisions of the Treaties to determine the extent of Canada's obligation. It will then examine Canada's proposal papers and the responses of its citizens to the questions posed regarding future anti-circumvention legislation. It will also examine the DMCA in detail and attempt to distil its flaws. Finally, it will investigate the extent of the need for new anti-circumvention legislation in Canada by examining Canada's existing laws dealing with the protection of technology measures. This process will provide evidence that Canada has, to a large extent, complied with its obligations under the WIPO while maintaining the delicate balance between the stakeholders of copyright law. Thus, while new anti-circumvention legislation may still be in order, Canada has the latitude to craft a law that fully recognizes the rights of all stakeholders in the copyright equation and is consistent with its own copyright policies.
APA, Harvard, Vancouver, ISO, and other styles
6

Revelles, Patricia A. "Curriculum for a course in word/information processing." CSUSB ScholarWorks, 1990. https://scholarworks.lib.csusb.edu/etd-project/768.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Chernecki, Donna Rae. "Disrupting holistic word recognition: evidence for word-specific visual patterns." Dissertation (Psychology), Carleton University, Ottawa, 1992.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
8

Dare, Natasha. "Out of this word : the effect of parafoveal orthographic information on central word processing." Thesis, University of Edinburgh, 2010. http://hdl.handle.net/1842/4008.

Full text
Abstract:
The aim of this thesis is to investigate the effect of parafoveal information on central word processing. This topic impacts on two controversial areas of research: the allocation of attention during reading, and letter processing during word recognition. Researchers into the role of attention during reading are split into two camps, with some believing that attention is allocated serially to consecutive words and others that it is spread across multiple words in parallel. This debate has been informed by the results of recent experiments that test a key prediction of the parallel processing theory that parafoveal and foveal processing occur concurrently. However, there is a gap in the literature for tightly-controlled experiments to further test this prediction. In contrast, the study of the processing that letters undergo during word recognition has a long history, with many researchers concluding that letter identity is processed only conjointly with letter ‘slot’ position within a word, known as ‘slot-based’ coding. However, recent innovative studies have demonstrated that more word priming is produced from prime letter strings containing letter transpositions than from primes containing letter substitutions, although this work has not been extended to parafoveal letter prime presentations. This thesis will also discuss the neglected subject of how research into these separate topics of text reading and isolated word recognition can be integrated via parafoveal processing. It presents six experiments designed to investigate how our responses to a central word are affected by varying its relationship with simultaneously presented parafoveal information. 
Experiment 1 introduced the Flanking Letters Lexical Decision task in which a lexical decision was made to words flanked by bigrams either orthographically related or unrelated to the response word; the results indicated that there is parafoveal orthographic priming but did not support the ‘slot-based’ coding theory as letter order was unimportant. Experiments 2-4 involved eye-tracking of participants who read sentences containing a boundary change that allowed the presentation of an orthographically related word in parafoveal vision. Experiment 2 demonstrated that an orthographically related word at position n+1 reduces first-pass fixations on word n, indicating parallel processing of these words. Experiment 4 replicated this result, and also showed that altering the letter identity of word n+1 reduced orthographic priming whereas altering letter order did not, indicating that slot-based coding of letters does not occur during reading. However, Experiment 3 found that an orthographically related word presented at position n-1 did not prime word n, signifying the influence of reading direction on parafoveal processing. Experiment 5 investigated whether the parallel processing that words undergo during text reading conditions our representations of isolated words; lexical decision times to words flanked by bigrams that formed plausible or implausible contexts did not differ. Lastly, one possible cause of the reading disorder dyslexia is under- or over- processing of parafoveal information. Experiment 6 therefore replicated Experiment 1 including a sample of dyslexia sufferers but found no interaction between reading ability and parafoveal processing. 
Overall, the results of this thesis lead to the conclusion that there is extensive processing of parafoveal information during both reading (indicating parallel processing) and word recognition (contraindicating slot-based coding), and that underpinning both our reading and word recognition processes is the flexibility of our information-gathering mechanisms.
APA, Harvard, Vancouver, ISO, and other styles
9

Constantinides, George Anthony. "High level synthesis and word length optimization of digital signal processing systems." Thesis, Imperial College London, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.252020.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Ho, Wai-cheong, and 何偉昌. "Business and information technology alignment." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1998. http://hub.hku.hk/bib/B31268833.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Stettheimer, Timothy Dwight. "Modeling Utilization of Planned Information Technology." Thesis, University of North Texas, 2000. https://digital.library.unt.edu/ark:/67531/metadc2496/.

Full text
Abstract:
Implementations of information technology solutions to address specific information problems are only successful when the technology is utilized. The antecedents of technology use involve user, system, task and organization characteristics as well as externalities which can affect all of these entities. However, measurement of the interaction effects between these entities can act as a proxy for individual attribute values. A model is proposed which, based upon evaluation of these interaction effects, can predict technology utilization. This model was tested with systems being implemented at a pediatric health care facility. Results from this study provide insight into the relationships between the antecedents of technology utilization. Specifically, task time had significant direct causal effects on utilization. Indirect causal effects were identified for the task value and perceived utility constructs. Perceived utility, along with organizational support, also had direct causal effects on user satisfaction. Task value also affected user satisfaction indirectly. Finally, the results provide a predictive model and a taxonomy of variables which can be applied to predict or manipulate the likelihood of utilization for planned technology.
APA, Harvard, Vancouver, ISO, and other styles
12

Sennema-Skowronek, Anke. "The use of focus markers in second language word processing." PhD thesis, Universität Potsdam, 2009. http://opus.kobv.de/ubp/volltexte/2009/3723/.

Full text
Abstract:
There are many factors which make speaking and understanding a second language (L2) a highly complex challenge. Skills and competencies in both linguistic and metalinguistic areas emerge as parts of a multi-faceted, flexible concept underlying bilingual/multilingual communication. On the linguistic level, a combination of an extended knowledge of idiomatic expressions, a broad lexical familiarity, a large vocabulary size, and the ability to deal with phonetic distinctions and fine phonetic detail has been argued to be necessary for effective nonnative comprehension of spoken language. The scientific interest in these factors has also led to more interest in the L2’s information structure, the way in which information is organised and packaged into informational units, both within and between clauses. On a practical level, the information structure of a language can offer the means to assign focus to a certain element considered important. Speakers can draw from a rich pool of linguistic means to express this focus, and listeners can in turn interpret these to guide them to the highlighted information, which facilitates comprehension and results in an appropriate understanding of what has been said. If a speaker doesn’t follow the principles of information structure, and the main accent in a sentence is placed on an unimportant word, then there may be inappropriate information transfer within the discourse, and misunderstandings. The concept of focus as part of the information structure of a language, the linguistic means used to express it, and the differential use of focus in native and nonnative language processing are central to this dissertation. Languages exhibit a wide range of ways of directing focus, including prosodic means, syntactic constructions, and lexical means. The general principles underlying information structure seem to contrast structurally across different languages, and languages can also differ in the way they express focus.
In the context of L2 acquisition, characteristics of the L1 linguistic system are argued to influence the acquisition of the L2. Similarly, the conceptual patterns of information structure of the L1 may influence the organization of information in the L2. However, strategies and patterns used to exploit information structure for successful language comprehension in the native L1 may not apply at all, or may work in different ways or to different degrees, in the L2. This means that L2 learners ideally have to understand the way that information structure is expressed in the L2 to fully use its information-structural benefit. Knowledge of the information-structural requirements of the L2 could also imply that the learner has to make adjustments regarding the use of information-structural devices in the L2. The general question is whether the various means to mark focus in the learners’ native language are also accessible in the nonnative language, and whether an L1-L2 transfer of their usage should be considered desirable. The current work explores how information structure helps the listener to discover and structure the forms and meanings of the L2. The central hypothesis is that the ability to access information structure has an impact on the level of the learners’ appropriateness and linguistic competence in the L2. Ultimately, the ability to make use of information structure in the L2 is believed to underpin the L2 learners’ ability to communicate effectively in the L2. The present study investigated how the use of focus markers affects processing speed and word recall in a native-nonnative language comparison. The predominant research question was whether focus marking leads to more efficient and accurate word processing in marked structures than in unmarked structures, and whether differences in processing patterns can be observed between the two language conditions.
Three perception studies were conducted, each concentrating on one of the following linguistic parameters: 1. Prosodic prominence: does prosodic focus conveyed by sentence accent and by word position facilitate word recognition? 2. Syntactic means: do cleft constructions result in faster and more accurate word processing? 3. Lexical means: does focus conveyed by the particles even/only (German: sogar/nur) facilitate word processing and word recall? Experiments 2 and 3 additionally investigated the contribution of context in the form of preceding questions. Furthermore, they considered accent and its facilitative effect on the processing of words which are in the scope of syntactic or lexical focus marking. All three experiments tested German learners of English in a native German language condition and in English as their L2. Native English speakers were included as a control for the English language condition. Test materials consisted of single sentences, all dealing with bird life. Experiment 1 tested word recognition in three focus conditions (broad focus, narrow focus on the target, and narrow focus on a constituent other than the target), in one condition using natural unmanipulated sentences and in the other two conditions using spliced sentences. Experiment 2 (effect of syntactic focus marking) and Experiment 3 (effect of lexical focus marking) used phoneme monitoring as a measure of the speed of word processing. Additionally, a word recall test (4AFC) was conducted to assess the effective entry of target-bearing words into the listeners’ memory. Experiment 1: Focus marking by prosodic means. Prosodic focus marking by pitch accent was found to highlight important information (Bolinger, 1972), making the accented word perceptually more prominent (Klatt, 1976; van Santen & Olive, 1990; Eefting, 1991; Koopmans-van Beinum & van Bergem, 1989). However, accent structure seems to be processed faster in native than in nonnative listening (Akker & Cutler, 2003, Expt. 3).
It was therefore expected that prosodically marked words would be better recognised than unmarked words, and that listeners could exploit accent structure better for accurate word recognition in their L1 than in the L2 (L1 > L2). Altogether, a difference in word recognition performance between focus conditions was expected in L1 listening (narrow focus > broad focus). The results of Experiment 1 show that words were better recognized in native listening than in nonnative listening. Focal accent, however, did not seem to help the German subjects recognize accented words more accurately, in either the L1 or the L2. This could be due to the focus conditions not being acoustically distinctive enough. Results of the experiments with spliced materials suggest that the surrounding prosodic sentence contour made listeners remember a target word, rather than the local prosodic realization of the word. Prosody does indeed seem to direct listeners’ attention to the focus of the sentence (see Cutler, 1976). Regarding the salience of word position, VanPatten (2002; 2004) postulated a sentence location principle for L2 processing, stating a ranking of initial > final > medial word position. Other evidence mentions a processing advantage of items occurring late in the sentence (Akker & Cutler, 2003), and Rast (2003) observed in an English L2 production study a trend towards an advantage for items occurring at the outer ends of the sentence. The current Experiment 1 aimed to keep the sentences to an acceptable length, mainly to keep the task in the nonnative language condition feasible. Word length showed an effect only in combination with word position (Rast, 2003; Rast & Dommergues, 2003); it was therefore included in the current experiment as a secondary factor and without hypotheses. The results of Experiment 1 revealed that the length of a word does not seem to be important for its accurate recognition.
Word position, specifically the final position, clearly seems to facilitate accurate word recognition in German. A similar trend emerges in the English L2 condition, confirming Klein (1984) and Slobin (1985). The results do not support the sentence location principle of VanPatten (2002; 2004). The salience of the final position is interpreted as a recency effect (Murdock, 1962). In addition, the advantage of the final position may benefit from the discourse convention that relevant background information is referred to first, and what is novel later (Haviland & Clark, 1974). This structure is assumed to cue the listener as to what the speaker considers to be important information, and listeners might have reacted according to this convention. Experiment 2: Focus marking by syntactic means. Atypical syntactic structures often draw listeners’ attention to certain information in an utterance, and the cleft structure as a focus marking device appears to be a common surface feature in many languages (Lambrecht, 2001). Surface structure influences sentence processing (Foss & Lynch, 1969; Langford & Holmes, 1979), which leads to competing hypotheses in Experiment 2: on the one hand, the focusing effect of the cleft construction might reduce processing times. On the other, cleft constructions were found to be used less to mark focus in German than in English (Ahlemeyer & Kohlhof, 1999; Doherty, 1999; E. Klein, 1988). The complexity of the constructions, and experience from the native language, might work against an advantage of the focus effect in the L2. The results of Experiment 2 show that the cleft structure is an effective device to mark focus in German L1. The processing advantage is explained by the low degree of structural markedness of cleft structures: listeners use the focus function of sentence types headed by the dummy subject es (English: it) due to reliance on 'safe' subject-prominent SVO-structures.
The benefit of cleft is enhanced when the sentences are presented with context, suggesting a substantial benefit when focus effects of syntactic surface structure and coherence relations between sentences are integrated. Clefts facilitate word processing for English native speakers. Contrary to German L1, the marked cleft construction does not reduce processing times in English L2. The L1-L2 difference was interpreted as a learner problem of applying specific linguistic structures according to the principles of information structure in the target language. Focus marking by cleft did not help German learners in native or in nonnative word recall. This could be attributed to the phonological similarity of the multiple-choice options (Conrad & Hull, 1964), and to the long time span between listening and recall (Birch & Garnsey, 1995; McKoon et al., 1993). Experiment 3: Focus marking by lexical means. Focus particles are elements of structure that can indicate focus (König, 1991), and their function is to emphasize a certain part of the sentence (Paterson et al., 1999). I argue that the focus particles even/only (German: sogar/nur) evoke contrast sets of alternatives or complements to the element in focus (Ni et al., 1996), which prompts contextual interpretation. Therefore, lexical focus marking was not expected to lead to faster word processing. However, since different mechanisms of encoding seem to underlie word memory, a benefit of the focusing function of particles was expected to show in the recall task: because focus particles are a preferred and well-used feature for native speakers of German, a transfer of this habitual usage was expected, resulting in better recall of focused words. The results indicated that focus particles seem to be the weakest option for marking focus: focus marking by lexical particles does not seem to reduce word processing times in German L1, English L2, or English L1.
The presence of focus particles is likely to instantiate a complex discourse model which makes the listener await further modifying information (Liversedge et al., 2002). This semantic complexity might slow down processing. There are no indications that focus particles facilitate native-language word recall in German L1 and English L1. This could be because focus particles open sets of conditions and contexts that enlarge the set of representations in listeners rather than narrowing it down to the element in the scope of the focus particle. In word recall, the facilitative effect of focus particles emerges only in the nonnative language condition. It is suggested that L2 learners, when faced with more demanding tasks in an L2, use a broad variety of focus-identifying means for a better representation of novel words in memory. In Experiments 2 and 3, evidence suggests that accent is an important factor for efficient word processing and accurate recall in German L1 and English L1, but less so in English L2. This underlines the function of accent as a core speech parameter and consistent cue to the perception of prominence in native language use (see Cutler & Fodor, 1979; Pitt & Samuel, 1990a; Eriksson et al., 2002; Akker & Cutler, 2003); the L1-L2 difference is attributed to patterns of expectation that are employed in the L1 but not (yet?) in the L2. There seems to exist a fine-tuned sensitivity to how accents are distributed in the native language: listeners expect an appropriate distribution and interpret it accordingly (Eefting, 1991). This argues for accent placement as extremely important to L2 proficiency; the current results also suggest that accent and its relationship with other speech parameters have to be newly established in the L2 to fully reveal their benefits for efficient processing of speech.
There is evidence that additional context facilitates processing of complex syntactic structures but that a surplus of information has no effect if the sentence construction is less challenging for the listener. The increased amount of information to be processed seems to impede better word recall, particularly in the L2. Altogether, it seems that focus marking devices and context can combine to form an advantageous alliance: a substantial benefit in processing efficiency is found when parameters of focus marking and sentence coherence are integrated. L2 research advocates the beneficial aspects of providing context for efficient L2 word learning (Lawson & Hogben, 1996). The current thesis promotes the view that a context which offers more semantic, prosodic, or lexical connections might compensate for the additional processing load that context constitutes for the listeners. A methodological consideration concerns the order in which language conditions are presented to listeners, i.e., L1-L2 or L2-L1. Findings suggest that presentation order could enforce a learning bias, with the performance in the second experiment being influenced by knowledge acquired in the first (see Akker & Cutler, 2003). To conclude this work: The results of the present study suggest that information structure is more accessible in the native language than it is in the nonnative language. There is, however, some evidence that L2 learners have an understanding of the significance of some information-structural parameters of focus marking. This has a beneficial effect on processing efficiency and recall accuracy; on the cognitive side it illustrates the benefits and also the need of a dynamic exchange of information-structural organization between L1 and L2. The findings of the current thesis encourage the view that an understanding of information structure can help the learner to discover and categorise forms and meanings of the L2. 
Information structure thus emerges as a valuable resource to advance proficiency in a second language.
Das Sprechen und Verstehen einer Fremdsprache (L2) stellt eine komplexe Leistung für einen Nicht-Muttersprachler dar. Kenntnisse und Fertigkeiten auf verschiedenen sprachlichen und außersprachlichen Ebenen wirken dabei zusammen, wie z.B. eine andere Grammatik, neue Lautbildungen in der Aussprache, der Aufbau von Wortschatz, und auch die Sensibilisierung für mögliche kulturell unterschiedliche Kommunikationsformen oder das Training kommunikativer Kompetenz. Eine wichtige Hilfe bei der muttersprachlichen wie der fremdsprachlichen Sprachverarbeitung bieten Mittel, mit denen sprachliche Information gegliedert wird, um sie verständlich zu machen. Die Informationsstruktur ermöglicht es, zum Beispiel den Fokus einer Äußerung zu markieren und damit Intentionen sprachlich zu vermitteln. In gesprochener Sprache sind es vor allem prosodische Mittel wie Satzakzent, die es dem Hörer ermöglichen, die wichtigen Informationen in der Äußerung herauszufinden. Aber auch durch die Verwendung unterschiedlicher grammatischer Strukturen oder durch besondere Wortwahl können Sprecher Satzteile markieren, die sie für besonders wichtig halten, und sie damit hervorheben. Wird die Informationsstruktur eines Satzes verletzt, indem zum Beispiel der Satzakzent auf ein eher unwichtiges Wort gelegt wird, kann der Gesprächspartner/die Gesprächspartnerin einen anderen Teil des Satzes als im Fokus stehend interpretieren als den vom Sprecher eigentlich intendierten Teil. Dies kann - in Kombination mit anderen Faktoren wie ungeschickter Wortwahl - zu Missverständnissen führen. Nun kann eine Sprache prosodische, syntaktische oder lexikalische Möglichkeiten der Markierung besitzen, die entweder in einer anderen Sprache nicht vorkommen, oder die andere Funktionen in Bezug auf die Interpretation von Äußerungen erfüllen, die in dieser Form in der jeweils anderen Sprache nicht existieren. 
Dies betrifft zum Beispiel Unterschiede zwischen Intonations- und Tonsprachen oder zwischen silbenzählenden und akzentzählenden Sprachen. Ruft der Fremdsprachenlerner die Strukturen sprachlicher Information in der Muttersprache (L1) ab und überträgt sie auf die Fremdsprache, kann dies bei gleicher informationsstruktureller Organisation der Sprache zu einer erfolgreichen Strategie des fremdsprachlichen Verstehens führen. Wird aber Informationsstruktur in der Fremdsprache mit anderen Mitteln als in der Muttersprache ausgedrückt, entsteht ein Spannungsfeld zwischen Verarbeitungsstrategien der Muttersprache und denen der Fremdsprache. Die vorliegende Arbeit befasst sich mit der Rolle informationsstruktureller Parameter in der muttersprachlichen und fremdsprachlichen Sprachverarbeitung. Es wird untersucht, wie Fremdsprachenlerner Fokusmarkierung in der Muttersprache (hier: Deutsch) und in der Fremdsprache (hier: Englisch) zu effizienter Sprachverarbeitung nutzen. Das Ziel ist eine tiefere Einsicht, wie sich Informationsstruktur in der Fremdsprache erschließt; die grundlegende Annahme ist dabei, dass ein Verständnis und eine Sensibilisierung für Informationsstruktur dem Fremdsprachenlerner hilft, Form und Bedeutung von Sprache zu erkennen. Eine solche Einsicht in Informationsstruktur unterstützt die Erweiterung und Festigung fremdsprachlicher Kompetenz. Die Frage nach dem Gebrauch von Informationsstruktur in einer Fremdsprache wird in drei experimentellen Studien untersucht, die sich auf jeweils eines der folgenden sprachlichen Mittel zur Fokusmarkierung konzentrieren: 1. Prosodische Mittel der Fokusmarkierung: Unterstützen Satzakzent und Wortposition im Satz eine bessere Worterkennung? 2. 
Syntaktische Mittel der Fokusmarkierung: Ermöglicht die Konstruktion eines Spaltsatzes (Englisch: cleft) eine schnellere Verarbeitung des fokussierten Elements im Satz als eine kanonische Wortstellung, und kann sich der Hörer auch zu einem späteren Zeitpunkt noch besser an ein syntaktisch markiertes als an ein unmarkiertes Element erinnern? 3. Lexikalische Mittel der Fokusmarkierung: Bewirken Fokuspartikel (hier: nur/sogar) eine schnellere Verarbeitung des fokussierten Elements, und kann sich der Hörer auch zu einem späteren Zeitpunkt noch besser an das fokussierte als an das nicht-fokussierte Element erinnern? Zusätzlich wird in Experiment 2 und in Experiment 3 untersucht, welchen Einfluss einleitende Fragen haben, die zur Fokusmarkierung eines Elements im Folgesatz dienen. Außerdem wird nachgegangen, welche Rolle es spielt, wenn ein syntaktisch oder lexikalisch fokussiertes Element einen Tonhöheakzent bekommt oder wenn dieser auf dem vorangegangenen Adjektiv realisiert wird. Die Probanden sind deutsche Muttersprachler, die Englisch als Fremdsprache gelernt haben. In den Experimenten werden den Testpersonen jeweils Sprachaufnahmen von deutschen Sätzen und Aufnahmen von parallel dazu konstruierten englischen Sätzen dargeboten. Als Kontrollgruppe für den englischen Teil der Experimente werden englische Muttersprachler getestet, um Referenzdaten für die Ergebnisse der Fremdsprachenlerner zu erhalten. Die Experimente sind als Perzeptionsexperimente konzipiert. Experiment 1 (prosodische Fokusmarkierung) untersucht Worterkennung in drei Bedingungen mitunterschiedlichem Fokus (weiter und enger Fokus, enger Fokus auf anderem Satzelement als dem Zielwort), und zwei Bedingungen mit künstlich durch splicing verändertem Sprachmaterial. 
In Experiment 2 (syntactic focus marking) and Experiment 3 (lexical focus marking), phoneme monitoring is used as the method in the listening experiment, measuring reaction times for recognizing the focused word (which contains a previously specified phoneme). Following the listening part, a memory test is also carried out in these two experiments, in which the focused elements are queried again in a multiple-choice procedure (4AFC) and the number of correct answers is scored. On 1: Prosodic means of focus marking. Accentuation is a means of highlighting important information in the sentence (Bolinger, 1972), which leads to better perception of such accented information (see, e.g., van Santen & Olive, 1990; Eefting, 1991). However, accent structure appears to be processed faster in the L1 than in the L2 (Akker & Cutler, 2003). It is therefore assumed that in the L1, focus marking by pitch accent leads to better recognition of a word marked in this way. Accent structure should also be accessible in the L2, though to a lesser degree (L1 > L2). Overall, a focus effect of varying strength is expected depending on the focus condition (narrow focus > broad focus). The results of Experiment 1 confirm that words are recognized better in the native language than in the foreign language. However, differing focus realized as sentence accent does not help the participants recognize focused words faster, either in the native or in the foreign language. This could be due to insufficient acoustic differences in the realization of the different focus conditions in the recordings. The experiments with speech material synthetically manipulated by splicing show that the surrounding sentence prosody contributes more to word recognition than the individual accent marking of the word (Cutler, 1976). 
For the salience of word position in the sentence, VanPatten (2004) postulated the order initial > final > medial position for foreign-language perception. Akker and Cutler (2003) mention, for L1 and L2, a processing advantage of words occurring later in the sentence over those occurring earlier. Furthermore, Rast (2003) found, in an L2 production study, an advantage of the outer sentence positions over the medial position. In the present experiment, the sentences are kept at an acceptable length, primarily because of the foreign-language test condition; this permits statements about the positions at the outer edges of the sentence but makes clear effects for the medial position less likely. Word length was included in the experiment as a secondary factor, without separate hypotheses being formulated for it. In an earlier L2 study, word length showed an effect only in interaction with the position of the word in the sentence (Rast, 2003; Rast & Dommergues, 2003). The results of Experiment 1 show that the length of the target words plays no decisive role in their correct recognition. Word position in the sentence, however, and especially the final position, contributes decisively to correct word recognition in German. A similar trend appears for word recognition in English as a foreign language (see Klein, 1984; Slobin, 1985). VanPatten's (2004) locality principle, with its processing advantage of initial > final > medial, cannot be confirmed; the particular salience of the final position is explained, following Murdock (1962), as a recency effect. In addition, the final position could profit from the convention for integrating new information: given information is mentioned before new information (Haviland & Clark, 1974). Listeners follow this common discourse structure and direct their attention to information mentioned in final position. 
On 2: Syntactic means of focus marking. Deviation from canonical sentence structure directs attention to particular elements in the sentence, and the cleft sentence is a familiar type of focusing in many languages (Lambrecht, 2001). The surface structure of a sentence influences its processing (Foss & Lynch, 1969; Langford & Holmes, 1979), and in Experiment 2 two hypotheses stand opposed: the focusing effect of cleft sentences could produce a processing advantage. On the other hand, cleft sentences are rarer and less common in German than in English (Ahlemeyer & Kohlhof, 1999; Doherty, 1999; E. Klein, 1988); the syntactic complexity of cleft sentences and experience with the native language could counteract a processing advantage in German L1 and English L2. The results of Experiment 2 show that the cleft sentence is an effective means of focus marking in German. This is attributed to the low structural markedness of the substitute subject 'es', since it stands in canonical, initial position. The prominence of this subject places the following subject element in focus and lends it subject prominence. The processing-enhancing effect of cleft sentences increases further when surface structure (cleft) and sentence context are integrated. The cleft sentence is not, however, used as an effective means of focus marking in the foreign language. Native English speakers exploit the focus effect of the cleft for fast word recognition, but this information-structural device of the L2 is not recognized and exploited by foreign-language learners. This is interpreted as a learner problem: linguistic structures of the native language are not adequately applied according to information-structural principles in the foreign language. The cleft sentence contributes to better memory performance neither in German nor in English. 
This may be due, on the one hand, to the strong phonological similarity of the answer options offered in the test (Conrad & Hull, 1964); on the other hand, it may be connected with the time span between the listening experiment and the memory test, which makes it too difficult to remember a particular word (Birch & Garnsey, 1995; McKoon et al., 1993). On 3: Lexical means of focus marking. Focus particles are exponents of focus structure and mark sentence elements (König, 1991; Paterson et al., 1999). The focus particles investigated evoke contrast and sets of alternatives to the focused element, which triggers interpretations of context (Ni et al., 1996; Liversedge et al., 2002). Faster processing of focused words is therefore not expected. Their beneficial property shows itself, however, in memory performance, since this process seems to rely on other access mechanisms: the preferred use of lexical means of focus marking in German (König, 1991; Ahlemeyer & Kohlhof, 1999) is expected to have a positive effect on the recall of focused words. The focus particles nur and sogar in Experiment 3 prove to be the weakest exponents of focus marking in the series of experiments: neither in German, nor in English as a foreign language, nor in the English control group do these focus particles bring about faster processing of the focused element. This is explained by the property of focus particles of evoking a set of alternatives and thereby prompting complex discourse models in the listener that contain both the element in focus and alternatives to it (see Ni et al., 1996; Liversedge et al., 2002). Processing and interpreting the focus structure then require additional time. 
In the memory test, the focus effect comes into play only in the foreign-language condition: when learners are confronted with situations demanding with respect to their L2 proficiency, focus structure is used to achieve a better representation in memory. Across Experiments 2 and 3, it emerges that an additional sentence accent in sentences with syntactic or lexical focus marking is used in native German and English but is not processed equally effectively in the foreign language. An important parameter such as pitch accent apparently receives less use in the foreign language when other means of marking occur at the same time. Above all, however, this effect points to a far more differentiated perception and interpretation of pitch accent in the native language. Furthermore, the order in which the test languages are presented to the participants (L1-L2 or L2-L1) appears to matter, since a learning effect from the first test language can influence performance in the second. This makes it difficult to collect comparable data for two languages from the same group of participants (see Akker & Cutler, 2003). With regard to the effects of context on word processing, the results indicate that preceding questions offer the foreign-language learner only limited help in the rapid processing of, for example, more difficult sentence constructions. Additional context also appears to impair memory performance, especially in the foreign language. In both second-language research and foreign-language teaching, embedding words in a context plays a major role in vocabulary learning (Lawson & Hogben, 1996). It is argued in this respect that a form of context that creates more semantic, prosodic, or lexical connections should compensate for the additional processing effort. 
The results of the present work indicate that information structure is accessed more readily in the native language than in the foreign language. Individual information-structural parameters are, however, successfully evaluated by foreign-language learners, which manifests itself in faster and more lasting linguistic processing. At the cognitive level, the present work demonstrates the beneficial effect that arises when the information structures of the native and the foreign language stand in dynamic exchange. The results strengthen the assumption that an understanding of information structure can help the foreign-language learner recognize form and meaning in the foreign language. Information structure proves to be a potentially valuable resource in the development and strengthening of foreign-language competence.
APA, Harvard, Vancouver, ISO, and other styles
13

Lichacz, Frederick Michael John Carleton University Dissertation Psychology. "The locus of length effects in visual word recognition." Ottawa, 1998.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
14

Perälä, J. (Juho). "Using open innovation to gain knowledge and technology." Bachelor's thesis, University of Oulu, 2016. http://urn.fi/URN:NBN:fi:oulu-201605282043.

Full text
Abstract:
The topic of this thesis is "Using open innovation to gain knowledge and technology". Open innovation has been researched extensively, but previous studies have not gathered the most commonly used obtaining methods into a single piece of research. I therefore saw it fitting to bring the most used inbound methods together. Previous studies usually focus on one or two methods, and the advantages and disadvantages of the different methods have not been collected in one study. This bachelor's thesis is a literature review in which I describe the inbound open innovation methods and the advantages and possible disadvantages each might have. The main findings of this research concern how the different methods work and what kinds of open innovation business models each method supports. This bachelor's thesis could help companies determine which obtaining method to use if they would like to implement open innovation in their business model.
APA, Harvard, Vancouver, ISO, and other styles
15

Marjamaa-Mankinen, L. (Liisa). "Technology ecosystems and digital business ecosystems for business." Master's thesis, University of Oulu, 2016. http://urn.fi/URN:NBN:fi:oulu-201603251356.

Full text
Abstract:
The purpose of this study was to examine the progress of research on technology ecosystems and digital business ecosystems and to combine that information for business purposes by drawing on information about business ecosystems. The need for this information emerged at the Department of Information Processing Science in the context of European Union research projects. The information gained is expected to increase possibilities both for research and for the personal competence to work with enterprises in new kinds of technology environments. The main research question in this study was: How are technology ecosystems and digital business ecosystems for business perceived and approached in the literature? Instead of a systematic review, systematic mapping was selected as the method to structure the selected research areas, to obtain a broad overview of the two streams of research, and to identify possible research evidence. To answer the main question, the following subquestions were set for both systematic mapping studies: RQ1 Which journals include papers on technology ecosystems / digital business ecosystems for business? RQ2 What are the most investigated topics of technology ecosystems / digital business ecosystems, and how have these changed over time? RQ3 What are the most frequently applied research approaches and methods, and in what study context? By structuring the selected research areas according to these subquestions, broad overviews of the findings were established. Identifying and evaluating the publication channels exposed the forums of discussion; identifying the topics and their evolution exposed the trends of discussion; and identifying the research types distinguished the non-empirical from the empirical research. 
The research evidence and solution proposals (from non-empirical research) that were found were discussed, and the need for further research was considered. The main contribution of this mapping study was the identification of different perceptions of two vague concepts, technology ecosystem and digital business ecosystem, and the observation of their convergence and interlacing over time (especially in relation to the scarce research evidence exposed). Recommendations for future research were made based on the empirical research and solution proposals found, as well as the limitations of this study.
APA, Harvard, Vancouver, ISO, and other styles
16

Carson, Robyn. "Processing Grammatical and Notional Number Information in English and French." Thesis, Université d'Ottawa / University of Ottawa, 2018. http://hdl.handle.net/10393/38309.

Full text
Abstract:
Number is a grammatical category found in nearly every language around the world (Corbett, 2000). The syntactic expression of number is referred to as grammatical number. In English and French, two number categories are in use: singular and plural. Nouns that are written more frequently in their singular form are called singular-dominant, while those that are written more frequently in their plural form are called plural-dominant. Several lexical decision and picture naming studies have found that grammatical number and noun dominance interact, resulting in a surface frequency effect for singular-dominant nouns only. Singular-dominant nouns are recognized/named significantly faster in their singular form than in their plural form, while plural-dominant nouns are recognized/named equally fast in both forms (e.g., Baayen, Burani, & Schreuder, 1997; Biedermann, Beyersmann, Mason, & Nickels, 2013; Domínguez, Cuetos, & Segui, 1999; New, Brysbaert, Segui, Ferrand, & Rastle, 2004; Reifegerste, Meyer, & Zwitserlood, 2017). The objective of this thesis is to extend our understanding of the singular-dominant noun surface frequency effect in English and French by adopting three procedures. First, advanced linear mixed modelling techniques were used to improve statistical power and accuracy. Second, the noun dominance ratio technique (Reifegerste et al., 2017) was applied to investigate whether the surface frequency effect remains significant when noun dominance was treated as a continuous variable. Third, a determiner-noun number agreement task was created to determine whether the surface frequency effect could be reproduced in a novel task. Three studies were conducted. In Study 1, two lexical decision tasks (LDTs) were conducted. Results revealed that in both English and French, singular nouns were recognized faster than plural nouns while the noun dominance effect was non-significant. 
The interaction between grammatical number and noun dominance was significant in French and marginally so in English. The interaction pattern was identical in both languages: singular-dominant nouns demonstrated a surface frequency effect while plural-dominant nouns did not. In Study 2, three determiner-noun number agreement tasks (NATs) were conducted. Results revealed that in both English and French, plural nouns were recognized faster than singular nouns. No other effects were significant. Incorporating irregular singular nouns (e.g., bonus) and plural nouns (e.g., mice) as foils produced the same results. In Study 3, two LDTs and one NAT were conducted. Lexical decision results revealed that in both English and French, singular nouns were recognized faster than plural nouns. However, the effects of noun collectivity and animacy were significant in English only; non-collective nouns were recognized faster than collective nouns, and inanimate nouns were recognized faster than animate nouns. Number agreement results revealed that in English, plural nouns were recognized faster than singular nouns; no other effects reached significance. Taken together, my studies confirm that a strong surface frequency effect exists during visual word recognition for singular-dominant nouns. However, the surface frequency effect does not extend to the formation of determiner-noun number agreement decisions, which were influenced nearly exclusively by grammatical number.
APA, Harvard, Vancouver, ISO, and other styles
17

Cheng, Man-fai, and 鄭文輝. "Utilization of information technology in housing management." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2007. http://hub.hku.hk/bib/B4500920X.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Aydinci, Nedim. "Processing Technology for Si Based Tandem Solar Cells." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-177442.

Full text
Abstract:
This project focuses on the investigation of silicon-based tandem solar cells fabricated using Hydride Vapor Phase Epitaxy (HVPE). In state-of-the-art multi-junction solar cell manufacturing, epitaxial technologies such as MOVPE (Metal Organic Vapour Phase Epitaxy) [1] or MBE (Molecular Beam Epitaxy) [2] are used for sub-cell formation. Tandem solar cell structures consist of sub-cells made of III-V semiconductors serially connected or grown on a suitable semiconductor substrate [3]. The semiconductor materials used have to be lattice-matched to each other and have optimum band-gap combinations [4]. Multi-junction solar cells with Si and III-V semiconductor sub-cells are promising for achieving extremely high efficiency. The objective of this project is to investigate a cost-effective fabrication technology to realize III-V semiconductor and silicon-based sub-cells in tandem solar cells. Si p-n junction formation by PH3 diffusion for the silicon sub-cell is studied in HVPE. A prototype InP solar cell was fabricated by HVPE and its I-V performance was studied. In this thesis, the impact of HVPE process parameters on silicon p-n junction formation was examined by varying the process temperature. Silicon samples were processed in the HVPE at three temperatures: 605 °C, 657 °C, and 720 °C. It is observed that the temperature affects the quality of the formed Si p-n junction. The Si samples treated at 720 °C show diode performance with a deviated I-V curve due to parasitic resistances. The InP solar cell fabrication consisted of the epitaxial growth of sulfur-doped n-InP and zinc-doped p-InP materials on top of each other to form an n+/n+/n-/p+ [5] structure. Ohmic conduction through the InP solar cell structure was observed after contact formation, which could be due to the metal alloy spiking through the p-InP emitter layer during annealing. 
Process mitigations for fabricating InP solar cells by HVPE are proposed at the end of the project.
APA, Harvard, Vancouver, ISO, and other styles
19

Weber, Mary Margaret. "The effect of information technology on retail logistics." The Ohio State University, 1990. http://rave.ohiolink.edu/etdc/view?acc_num=osu1299530060.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Palokangas, M. (Mari). "CeHRes roadmap utilization in development of eHealth technology solutions:a Scoping review." Master's thesis, University of Oulu, 2017. http://urn.fi/URN:NBN:fi:oulu-201712023270.

Full text
Abstract:
eHealth can be defined as the field of health care that uses Information and Communication Technologies. A variety of technologies can be used in eHealth, and the field is evolving through new inventions. eHealth users come from several groups, from health care professionals to patients and external users. The development of eHealth technology solutions can meet the increasing demands placed on health care by longer life expectancy. Despite all the benefits that eHealth technology can bring to the health care sector, there are also barriers delaying the adoption of eHealth technology solutions. To overcome these barriers, the CeHRes Roadmap was created to support and guide eHealth technology development; it is meant for developers, researchers, policy makers, and educational purposes. The CeHRes Roadmap visualizes a holistic framework and is based on a participatory development approach, persuasive design techniques, and business modelling. The objective of this master's thesis is to identify, collect, and characterize all relevant research published from 2011 onwards that uses the CeHRes Roadmap in developing eHealth technologies. The research articles are analysed geographically, in terms of technology and medical domain, and by characterizing and categorizing CeHRes Roadmap elements and attributes. The research method in this thesis was a scoping review, a literature review method that aims to map relevant literature rapidly and is suitable for broad topics. The literature search was conducted in Scopus, Web of Science, IEEE, PubMed, and Cinahl. Due to the multidisciplinary nature of the topic, the search covered databases containing material from information processing science and/or medical science. 26 studies were identified as relevant for this research. 
The results of this master's thesis indicate that use of the CeHRes Roadmap has been most common in the Netherlands, but it has been recognized and referenced in hundreds of studies. As the roadmap does not restrict usage to a particular technology area, the variety of technologies used was wide, and several different medical domains using the CeHRes Roadmap were found. When analysing the characteristics of the CeHRes Roadmap, participatory development was found to be the key characteristic, visible in almost every selected study. This thesis provides an inventory of studies that have used the CeHRes Roadmap in development work and gives insight into how it has been used.
APA, Harvard, Vancouver, ISO, and other styles
21

Hall, Shirley L. "Information processing instruction in Virginia Community Colleges." Thesis, Virginia Polytechnic Institute and State University, 1986. http://hdl.handle.net/10919/94455.

Full text
Abstract:
This was a study of the information processing instruction in Virginia Community Colleges. The purposes of the study were (a) to obtain baseline information about the information processing instruction programs, and (b) to evaluate the information processing programs using the information processing "content" portion of the Standards for Excellence in Business Education. The respondents in the study were secretarial science program directors in the Virginia community colleges. Responses to the survey instrument were received from 22 of the 32 mailed (69%). For the responding institutions, 2314 students were enrolled in courses that involve word processing. Sixty-six percent of the students were younger than 25. There were 57 instructors of information processing in the community colleges; two male and 55 female. Fifteen were full time, 42 part-time. Six of the faculty had doctor's degrees, 42 had master's degrees, and six had bachelor's degrees. Of the 419 computers used for word processing, over one-half were IBM compatible. In addition, 106 dedicated word processors were in use. WordStar computer software was used by one-half of the community colleges. Other software most used included WordPerfect and DisplayWrite. Word processing was the most-offered secretarial science course. The secretarial science program directors rated their information processing instruction programs to "meet" or "exceed the standard" on 112 of the 113 items in the "content" portion of the Standards published by the U.S. Department of Education. The one item rated "below standard" dealt with data communications instruction. The Standards items were divided into 12 subtopics. The subtopic receiving the highest rating was "Employability Traits and Attitudes"; the subtopic receiving the lowest rating was "Computer Programming."
M.S.
APA, Harvard, Vancouver, ISO, and other styles
22

Shi, H. (Henglin). "A GQM-based open research data technology evalution method in open research context." Master's thesis, University of Oulu, 2016. http://urn.fi/URN:NBN:fi:oulu-201605221853.

Full text
Abstract:
Open Research Data is gaining popularity, and various research units and individuals are interested in joining this trend. However, due to the variety of Open Research Data technologies, they find it difficult to select the proper ones for their specific requirements. This study therefore develops a method for evaluating Open Research Data technologies so that researchers can select the proper ones. First, theoretical knowledge of the barriers to sharing and reusing research data is derived from a structured literature review. From the 19 primary studies, 96 instances of existing barriers are identified and classified into seven categories, four of which concern sharing research data and the rest reusing it. This knowledge is an important resource for understanding researchers' requirements on Open Research Data technologies and is used to develop the technology evaluation method. The Open Research Data Technology Evaluation Method (ORDTEM) is then developed based on the Goal/Question/Metric (GQM) approach and the identified sharing and reuse barriers. To develop this method, the GQM approach is adopted as the main skeleton for transforming these barriers into measurable criteria. The resulting ORDTEM, consisting of six GQM evaluation questions and 14 metrics, allows researchers to evaluate Open Research Data technologies. Furthermore, to validate the GQM-based ORDTEM, a focus group study is conducted in a workshop. Nine researchers who need to participate in Open Research Data activities are recruited to form a focus group discussing the resulting ORDTEM. By analysing the content of the discussion, 16 critical opinions are addressed, which results in eight improvements to ORDTEM: one refinement of an existing metric and seven new metrics. 
Lastly, a test applying ORDTEM to evaluate four selected Open Research Data technologies is carried out to validate whether it can be used to solve real-world evaluation tasks. Beyond validation, this experiment also produces material on the usage of ORDTEM, which is useful for future adopters. In addition to developing a solution to the difficulty of selecting technologies for participating in Open Research Data movements, this study makes two further contributions. First, the identified barriers to sharing and reusing research data direct future efforts to promote Open Research Data and Open Science. Second, the experience of using the GQM approach to transform existing requirements into evaluation criteria can be studied for developing other requirement-specific evaluations.
APA, Harvard, Vancouver, ISO, and other styles
23

Smolenska, Greta. "Complex Word Identification for Swedish." Thesis, Uppsala universitet, Institutionen för lingvistik och filologi, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-352349.

Full text
Abstract:
Complex Word Identification (CWI) is the task of identifying complex words in text data, and it is often viewed as a subtask of Automatic Text Simplification (ATS), where the main task is making a complex text simpler. The ways in which a text should be simplified depend on the target readers, such as second language learners or people with reading disabilities. In this thesis, we focus on Complex Word Identification for Swedish. First, in addition to exploring existing resources, we collect a new dataset for Swedish CWI. We continue by building several classifiers of Swedish simple and complex words. We then use the findings to analyze the characteristics of lexical complexity in Swedish and English. Our method for collecting training data based on second language learning material has shown positive evaluation scores and resulted in a new dataset for Swedish CWI. Additionally, the complex word classifiers we built are at least as accurate as similar systems for English. Finally, the analysis of the selected features confirms the findings of previous studies and reveals some interesting characteristics of lexical complexity.
APA, Harvard, Vancouver, ISO, and other styles
24

Thomas, Charalambos Bob. "Information communications technologies in education : a Faustian bargain?" Thesis, McGill University, 2001. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=33934.

Full text
Abstract:
There is currently a glaring absence of critical discourse surrounding the integration of information communications technologies in schools. Despite a growing body of literature showing that technology has both advantages and disadvantages, schools are incorporating ICT as if it were a panacea for educational and societal ills. This deification of ICT is based on utilitarian concerns and spiritual yearnings. The first chapter situates the author and outlines how the larger technology metanarrative has a mythic status. Chapter two situates the appeal to utilitarian and spiritual values in the wider culture. The final chapter examines how these issues are played out in the school system. The author concludes with a summary and raises areas for further investigation, along with suggestions for change.
APA, Harvard, Vancouver, ISO, and other styles
25

Ankener, Christine Susanne [Verfasser], and Maria [Akademischer Betreuer] Staudte. "The influence of visual information on word predictability and processing effort / Christine Susanne Ankener ; Betreuer: Maria Staudte." Saarbrücken : Saarländische Universitäts- und Landesbibliothek, 2019. http://d-nb.info/1196090025/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Ankener, Christine Susanne [Verfasser], and Maria [Akademischer Betreuer] Staudte. "The influence of visual information on word predictability and processing effort / Christine Susanne Ankener ; Betreuer: Maria Staudte." Saarbrücken : Saarländische Universitäts- und Landesbibliothek, 2019. http://nbn-resolving.de/urn:nbn:de:bsz:291--ds-284518.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Chan, Ki-hung, and 陳己雄. "Implementation of information technology in small and medium retail and wholesale enterprises." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1997. http://hub.hku.hk/bib/B31267853.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Harms, Herbert Andrew. "Considerations on the optimal and efficient processing of information-bearing signals." Thesis, Princeton University, 2013. http://pqdtopen.proquest.com/#viewpdf?dispub=3597492.

Full text
Abstract:

Noise is a fundamental hurdle that impedes the processing of information-bearing signals, specifically the extraction of salient information. Processing that is both optimal and efficient is desired; optimality ensures the extracted information has the highest fidelity allowed by the noise, while efficiency ensures limited resource usage. Optimal detectors and estimators have long been known, e.g., for maximum likelihood or minimum mean-squared error criteria, but might not admit an efficient implementation. A tradeoff often exists between the two goals. This thesis explores the tradeoff between optimality and efficiency in a passive radar system and an analog-to-digital converter. A passive radar system opportunistically uses illuminating signals from the environment to detect and track targets of interest, e.g., airplanes or vehicles. As an opportunistic user of signals, the system does not have control over the transmitted waveform. The available waveforms are not designed for radar and often have undesirable properties for radar systems, so the burden is on the receiver processing to overcome these obstacles. A novel technique is proposed for the processing of digital television signals as passive radar illuminators that eases the need for complex detection and tracking schemes while incurring only a small penalty in detection performance. An analog-to-digital converter samples analog signals for digital processing. The Shannon-Nyquist theorem describes a sufficient sampling and recovery scheme for bandlimited signals from uniformly spaced samples taken at a rate twice the bandwidth of the signal. Frequency-sparse signals are composed of relatively few frequency components and have fewer degrees of freedom than a frequency-dense bandlimited signal. 
Recent results in compressed sensing describe sufficient sampling and recovery schemes for frequency-sparse signals that require a sampling rate proportional to the spectral density and the logarithm of the bandwidth, while providing high fidelity and requiring many fewer samples, which saves resources. A proposed sampling and simple recovery scheme is shown to efficiently recover the locations of tones in a large bandwidth nearly-optimally using relatively few samples. The proposed sampling scheme is further optimized for full recovery of the input signal by matching the statistics of the scheme to the statistics of the input signal.
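The sparse-recovery idea this abstract relies on — locating a few tones from far fewer samples than the Nyquist rate would require — can be illustrated with a standard greedy recovery method (orthogonal matching pursuit) over a partial Fourier matrix. The grid size, sample count, and tone locations below are illustrative assumptions, not the sampling scheme proposed in the thesis:

```python
import numpy as np

rng = np.random.default_rng(0)

N, K, m = 256, 3, 64             # frequency grid size, number of tones, samples taken
true_freqs = [17, 60, 201]       # planted tone locations (assumed for illustration)
amps = [1.0, 0.8, 1.3]

# Sample the signal at m randomly chosen time instants (far below Nyquist for grid N).
t = rng.choice(N, size=m, replace=False)
A = np.exp(2j * np.pi * np.outer(t, np.arange(N)) / N) / np.sqrt(m)  # partial Fourier matrix
x = sum(a * np.exp(2j * np.pi * f * t / N) for a, f in zip(amps, true_freqs))

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedily pick the frequency column most
    correlated with the residual, then re-fit the coefficients by least squares."""
    support, residual = [], y.copy()
    for _ in range(k):
        j = int(np.argmax(np.abs(A.conj().T @ residual)))
        support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    return sorted(support)

recovered = omp(A, x, K)  # expected to recover the planted tone locations
```

With only a quarter of the grid's samples, the greedy search locates the three tones because the random partial Fourier columns are nearly uncorrelated, which is the intuition behind the sampling-rate savings described above.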

APA, Harvard, Vancouver, ISO, and other styles
29

Berdugo, Oviedo Gloria. "Comprehension and representation of algebra word problems in a second language." Thesis, McGill University, 2004. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=84473.

Full text
Abstract:
The present study was designed to examine students' comprehension and problem representation when reading and solving algebra word problems in their first language (L1: Spanish) and their second language (L2: English). The main interest was to investigate the role of the L2 in performing the task. It was hypothesized that difficulty in understanding and solving word problems is a function of carrying out the task in the L2 and discourse factors inherent to the specific type of text.
The sample consisted of 31 grade 9 students who attended a private bilingual school in a Colombian city. Students were asked to think aloud as they solved four problems that dealt with the topics of ratio and percentage. Problem presentation was counterbalanced with respect to topic, but the language of presentation was kept fixed; i.e., the first set of ratio and percentage problems were in English and the other set in Spanish. Students were allowed to use either language when solving the problems. There was a short post-task semi-structured interview. Students' think aloud protocols and answers to problems were analyzed.
The results support the role that language plays in performing this type of task, although not in the way that it was initially predicted. The results suggest that the language of instruction plays a more important role than whether the task is performed in L1 or L2. Thus, the hypothesis that completing the task would be more difficult in the L2 was not supported. The results suggest that the difficulty with word problems resides in the mismatch between text comprehension, the situation presented in the text and the mathematical representation. The evidence is discussed within the framework of current explanations for performance in word problems in the L1 as they apply to the bilingual case, and theories of text comprehension. It is argued that current explanations for performance in word problems in the L1 apply to the bilingual case, but with the caution that the interaction between the content of the subject domain, and the knowledge students have of the language of instruction and their first language must be taken into account.
APA, Harvard, Vancouver, ISO, and other styles
30

Holzer, Corey T. "The application of natural language processing to open source intelligence for ontology development in the advanced persistent threat domain." Thesis, Purdue University, 2017. http://pqdtopen.proquest.com/#viewpdf?dispub=10249704.

Full text
Abstract:

Over the past decade, the Advanced Persistent Threat (APT) has risen to the forefront of cybersecurity threats. APTs are a major contributor to the billions of dollars lost by corporations around the world annually. The threat is significant enough that the Navy Cyber Power 2020 plan identified them as a “must mitigate” threat in order to ensure the security of its warfighting network.

Reports, white papers, and various other open source materials offer a plethora of information to cybersecurity professionals regarding these APT attacks and the organizations behind them, but mining and correlating information from these various sources requires the support of a standardized language and a common understanding of terms that comes from an accepted APT ontology.

This paper and its related research apply Natural Language Processing to Open Source Intelligence in order to build an open source ontology in the APT domain, with the goal of building a dictionary and taxonomy for this complex domain.

APA, Harvard, Vancouver, ISO, and other styles
31

Hoffman, A. R. "Information technology decision making in South Africa : a framework for company-wide strategic IT management." Master's thesis, University of Cape Town, 1989. http://hdl.handle.net/11427/15854.

Full text
Abstract:
Includes bibliography.
The area of interest in which this Study is set is the linking of a company's business strategies with its strategic planning for IT (information technology). The objectives of the Study are: to investigate how the IT planning environment is changing for business enterprises in South Africa; to establish how successfully South African companies are managing IT strategically; to propose a new approach to strategic IT decision making that will help South African management deal with the major issues; to propose a way of implementing the approach. In Chapter 2, conclusions are drawn from an examination of the key strategic IT planning literature. It appears that fundamental changes are indeed taking place, and are producing significant shifts in the way researchers, consultants and managers think about IT. The survey of South African management opinion is described in Chapter 3. The opinions analyzed range over environmental trends, strategic decision making practices, and what an acceptable strategic IT decision making framework would look like. The need for a new, comprehensive approach to strategic IT decision making in South Africa is clearly established. In Chapter 4, a theoretical Framework is proposed as a new, comprehensive approach to strategic IT decision making. The Framework covers five strategic tasks: analysing the key environmental issues; determining the purposes and uses of IT in competitive strategy and organizational designs; developing the IT infrastructure, human systems, information systems, and human resources to achieve these purposes and uses; implementing the strategic IT decisions; and learning to make better strategic IT decisions. In Chapter 5, ways of implementing the Framework in practice are identified. A means of evaluating its acceptability in a specific company is also proposed. The general conclusions of the Study are presented in Chapter 6.
The Framework developed in this Study is intended for use, not directly by the IT decision makers themselves, but by the persons responsible for designing the IT decision making processes of the company. It is not, however, offered as a theory or a methodology. The aim is simply to provide a conceptual "filing system", to help designers uncover and classify the IT strategy problems of their own company, to identify the tools their decision makers need, and to put appropriate problem solving processes in place.
APA, Harvard, Vancouver, ISO, and other styles
32

Gränsbo, Gustav. "Word Clustering in an Interactive Text Analysis Tool." Thesis, Linköpings universitet, Interaktiva och kognitiva system, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-157497.

Full text
Abstract:
A central operation of users of the text analysis tool Gavagai Explorer is to look through a list of words and arrange them in groups. This thesis explores the use of word clustering to automatically arrange the words in groups intended to help users. A new word clustering algorithm is introduced, which attempts to produce word clusters tailored to be small enough for a user to quickly grasp the common theme of the words. The proposed algorithm computes similarities among words using word embeddings, and clusters them using hierarchical graph clustering. Multiple variants of the algorithm are evaluated in an unsupervised manner by analysing the clusters they produce when applied to 110 data sets previously analysed by users of Gavagai Explorer. A supervised evaluation is performed to compare clusters to the groups of words previously created by users of Gavagai Explorer. Results show that it was possible to choose a set of hyperparameters deemed to perform well across most data sets in the unsupervised evaluation. These hyperparameters also performed among the best on the supervised evaluation. It was concluded that the choice of word embedding and graph clustering algorithm had little impact on the behaviour of the algorithm. Rather, limiting the maximum size of clusters and filtering out similarities between words had a much larger impact on behaviour.
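The pipeline this abstract describes — similarities computed from word embeddings, hierarchical merging, a cap on cluster size, and a similarity cutoff — can be sketched as follows. This is a toy illustration under assumed vectors and hyperparameters, not the algorithm introduced in the thesis (which uses pretrained embeddings and hierarchical graph clustering):

```python
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def cluster_words(words, vecs, threshold=0.4, max_size=4):
    """Greedy agglomerative clustering: repeatedly merge the two most similar
    clusters, skipping merges that would exceed max_size and stopping once the
    best inter-cluster similarity falls below threshold. Capping max_size keeps
    each group small enough to grasp its common theme at a glance."""
    clusters = [[i] for i in range(len(words))]
    sim = lambda a, b: max(cosine(vecs[i], vecs[j]) for i in a for j in b)
    while True:
        best, pair = threshold, None
        for x in range(len(clusters)):
            for y in range(x + 1, len(clusters)):
                if len(clusters[x]) + len(clusters[y]) > max_size:
                    continue
                s = sim(clusters[x], clusters[y])
                if s > best:
                    best, pair = s, (x, y)
        if pair is None:
            break
        x, y = pair
        clusters[x] += clusters[y]
        del clusters[y]
    return [[words[i] for i in c] for c in clusters]
```

With toy 2-d "embeddings" in which sentiment words point in opposite directions, the sketch groups "good"/"great" and "bad"/"poor" while leaving an unrelated word in its own cluster, mirroring how the threshold (rather than the choice of embedding) drives the grouping behaviour noted in the abstract.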
APA, Harvard, Vancouver, ISO, and other styles
33

Lomerson, William L. (William Lough). "Information Technology Needs of Professional Education Faculty with the Implications for the Design of Information Technology Systems." Thesis, University of North Texas, 1992. https://digital.library.unt.edu/ark:/67531/metadc332783/.

Full text
Abstract:
This study identified determinants existing within the academic environment of professional education faculty that influence the development and implementation of an information technology system to support their professional endeavors. Such a system would encourage these faculty to integrate technology into their curriculum and, consequently, produce teacher graduates who could employ information technology effectively in teaching. This study used a field survey research design with a mailed questionnaire. Both institutional and personal demographic factors were examined. The population exhibited a high level of homogeneity. This study indicated that ease of use, reliable operation, and capability to integrate technology into their personal work style were the most important concerns of faculty members. These factors reduce the emotional and intellectual costs of using technology.
APA, Harvard, Vancouver, ISO, and other styles
34

Tam, Chiu-han Sandy, and 譚肖嫻. "Strategic aspects of information technology in the banking industry inHong Kong." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1997. http://hub.hku.hk/bib/B31268298.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Sarni, T. (Tomi). "Use of workflow technology to assist remote caretakers in a smart kitchen environment designed for elderly people suffering from dementia." Master's thesis, University of Oulu, 2013. http://urn.fi/URN:NBN:fi:oulu-201305241339.

Full text
Abstract:
The purpose of this study was to determine the feasibility of an information system that enables remote assistance between caretakers and elderly people suffering from dementia in a smart kitchen environment. Such a system could alleviate stress experienced by caretakers by enabling the provisioning of care giving between any combination of informal and formal caretakers, and by increasing the mobility of caretakers. The second research problem was to evaluate the benefits and drawbacks of using workflow technology to model cooking activities as workflow process models in a smart kitchen environment. The research was conducted following the design science research framework. A scenario-based method and a literature review were used to determine functional and technical requirements for the designed system. The quality function deployment method was used to choose the technical solutions to be implemented. The designed system was evaluated by conducting a pilot test comparable to an alpha test in the software industry. Two external test participants evaluated the design following formative usability study guidelines. The pilot test confirmed that the designed information system is feasible to build from a technical perspective. The most significant benefit of using workflow technology is the ability to model and customize important daily activities, such as preparing one's own meals, to match the individual needs of users and the changing nature of dementia. The most significant drawback of using workflow technology is the demandingness of creating the models. The results of this study provide technical confirmation of the ability to construct a supporting system for remote guidance of cooking activities. The study also confirms that workflow technology is a viable candidate for powering such a system.
APA, Harvard, Vancouver, ISO, and other styles
36

Boshoff, Jacques. "An evaluation of the information-processing approach of knowledge management." Thesis, Stellenbosch : Stellenbosch University, 2003. http://hdl.handle.net/10019.1/53260.

Full text
Abstract:
Thesis (MBA)--Stellenbosch University, 2003.
ENGLISH ABSTRACT: The study involves research of secondary data in the field of knowledge management. It investigates the different perspectives of how knowledge can be managed. The complexity of knowledge requires different approaches for different organisations in a variety of industries. The complexity of this field has resulted in a number of acceptable as well as unacceptable knowledge management approaches. In the study the two predominant conflicting perspectives of knowledge management are discussed. The information-processing perspective suggests that knowledge can be effectively managed through the use of information technology. The emphasis is on the capabilities of information technology with little reference to the human and cultural issues. The opposing knowledge-centric perspective emphasises that a knowledge initiative requires fundamental change that affects the strategy, structure, culture and processes within an organisation. The goal is to create an environment that is conducive to knowledge creation and sharing. In the study these contrasting perspectives are investigated. Analysis of the main points of contention provides insight into faulty assumptions that affect the credibility of an approach. The first primary area of dispute relates to the understanding of knowledge and information. According to the information processing approach knowledge and information are similar. Therefore knowledge can be managed with information processing tools. Advocates of the knowledge-centric approach disagree with this assumption. They believe that knowledge sharing and creation are subject to human interaction and creativity, and not technological systems. A clear definition and description of knowledge and information is provided. It is apparent from this discussion that there is a vast difference between information and knowledge. It is therefore false to suggest that knowledge and information can be managed with information management tools.
The analysis of the factors that limit the capabilities of information management tools to provide knowledge management solutions indicates that technology merely plays an enabling role in knowledge management. Humans play the fundamental role in creating new knowledge and sharing existing knowledge. The information-processing perspective is therefore flawed, for knowledge and information cannot be equated and therefore information-processing tools cannot be used to manage knowledge.
APA, Harvard, Vancouver, ISO, and other styles
37

Wong, Hon-ho, and 黃漢浩. "Analytical models for the assessment of information technology on sustainable electrification." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2000. http://hub.hku.hk/bib/B3122362X.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Acevedo, Rafael A. "Valued information at the right time (VIRT) and the Navy's cooperative engagement capability (CEC) - a win/win proposition." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2006. http://library.nps.navy.mil/uhtbin/hyperion/06Mar%5FAcevedo.pdf.

Full text
Abstract:
Thesis (M.S. in Information Technology Management)--Naval Postgraduate School, March 2006.
Thesis Advisor(s): Rick Hayes-Roth, Curtis Blais. Includes bibliographical references (p. 65-68). Also available online.
APA, Harvard, Vancouver, ISO, and other styles
39

Edizel, Necati Bora. "Word embeddings with applications to web search and advertising." Doctoral thesis, Universitat Pompeu Fabra, 2019. http://hdl.handle.net/10803/669622.

Full text
Abstract:
Word embeddings are a building block of many practical applications across NLP and related disciplines. In this thesis, we present theoretical analysis and algorithms to learn word embeddings. Moreover, we present applications of word embeddings to Web Search and Advertising. We start by presenting theoretical insights into one of the most popular algorithms for learning word embeddings, word2vec. We also model word2vec in a Reinforcement Learning framework and show that it is an off-policy learner with a fixed behavior policy. We then present an off-policy learning algorithm, word2vec_π, that uses word2vec as a behavior policy. Next, we present a method to learn word embeddings that are resilient to misspellings. Existing word embeddings have limited applicability to malformed texts, which contain a non-negligible amount of out-of-vocabulary words. We propose a method combining FastText with subwords and a supervised task of learning misspelling patterns. In our method, misspellings of each word are embedded close to their correct variants. Lastly, we propose two novel approaches (one working at the character level and the other at the word level) that use deep convolutional neural networks for a central task in NLP, semantic matching. We experimentally show the effectiveness of our approach using the click-through rate prediction task for Sponsored Search.
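The subword idea behind misspelling-resilient embeddings can be sketched in a few lines: represent a word as the average of its character n-gram vectors, so a misspelling that shares most n-grams with the correct form lands nearby. This is a toy illustration under assumed parameters (hashed 3-gram vectors of dimension 64), not the FastText-based method of the thesis:

```python
import zlib
import numpy as np

DIM = 64

def ngram_vec(ng):
    # Deterministic pseudo-random vector per n-gram, seeded by a stable hash.
    rng = np.random.default_rng(zlib.crc32(ng.encode()))
    return rng.standard_normal(DIM)

def ngrams(word, n=3):
    # Angle brackets mark word boundaries, as in FastText.
    padded = f"<{word}>"
    return {padded[i:i + n] for i in range(len(padded) - n + 1)}

def embed(word):
    # A word is the mean of its character n-gram vectors, so words sharing
    # most n-grams (e.g. a word and its misspelling) get similar vectors.
    return np.mean([ngram_vec(ng) for ng in ngrams(word)], axis=0)

def cos(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))
```

For example, `cos(embed("embedding"), embed("embeddding"))` is high because the two spellings share almost all of their 3-grams, while an unrelated word such as "quartz" shares none and scores near zero.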
APA, Harvard, Vancouver, ISO, and other styles
40

梁南柱 and Nam-chu Alexander Leung. "The strategic importance of information system/technology to the Hong Kong Polytechnic University." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1995. http://hub.hku.hk/bib/B31266708.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Leung, Shiu-keung, and 梁兆強. "Information technology and end user computing in the Hong Kong Government." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1992. http://hub.hku.hk/bib/B3196431X.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Xiang, Qiao. "In-network processing for mission-critical wireless networked sensing and control| A real-time, efficiency, and resiliency perspective." Thesis, Wayne State University, 2014. http://pqdtopen.proquest.com/#viewpdf?dispub=3619123.

Full text
Abstract:

As wireless cyber-physical systems (WCPS) are increasingly being deployed in mission-critical applications, it becomes imperative that we consider application QoS requirements in in-network processing (INP). In this dissertation, we explore the potential of two INP methods, packet packing and network coding, to improve network performance while satisfying application QoS requirements. We find that not only can these two techniques increase the energy efficiency, reliability, and throughput of WCPS while satisfying application QoS requirements in a relatively static environment, but they can also provide low-cost proactive protection against transient node failures in a more dynamic wireless environment.

We first study the problem of jointly optimizing packet packing and the timeliness of data delivery. We identify the conditions under which the problem is strongly NP-hard, and we find that the problem complexity heavily depends on aggregation constraints instead of network and traffic properties. For cases when the problem is NP-hard, we show that there is no polynomial-time approximation scheme (PTAS); for cases when the problem can be solved in polynomial time, we design polynomial-time offline algorithms for finding the optimal packet packing schemes. We design a distributed, online protocol tPack that schedules packet transmissions to maximize the local utility of packet packing at each node. We evaluate the properties of tPack in the NetEye testbed. We find that jointly optimizing data delivery timeliness and packet packing while considering real-world aggregation constraints significantly improves network performance.

We then work on the problem of minimizing the transmission cost of network coding (NC) based routing in sensor networks. We propose what is, to the best of our knowledge, the first mathematical framework for computing the expected transmission cost of NC-based routing in terms of the expected number of transmissions. Based on this framework, we design a polynomial-time greedy algorithm for forwarder set selection and prove its optimality for transmission cost minimization. We design EENCR, an energy-efficient NC-based routing protocol that implements our forwarder set selection algorithm to minimize the overall transmission cost. Through a comparative study of EENCR and other state-of-the-art routing protocols, we show that EENCR significantly outperforms CTP, MORE and CodeOR in delivery reliability, delivery cost and network goodput.

Furthermore, we study the 1+1 proactive protection problem using network coding. We show that even under a simplified setting, finding two node-disjoint routing braids with minimal total cost is NP-hard. We then design a heuristic algorithm to construct two node-disjoint braids with a transmission cost upper bounded by that of the two shortest node-disjoint paths. We also design ProNCP, a proactive NC-based protection protocol using a similar design philosophy to EENCR. We evaluate the performance of ProNCP under various transient network failure scenarios. Experiment results show that ProNCP is resilient to various network failure scenarios and provides stable performance in terms of reliability, delivery cost and goodput.

Our findings in this dissertation explore the challenges, benefits and solutions in designing real-time, efficient, resilient and QoS-guaranteed wireless cyber-physical systems, and our solutions shed lights for future research on related topics.

APA, Harvard, Vancouver, ISO, and other styles
43

Richards, Jessica Smith. "Terminological Mediation in Information Technology and Related Fields." BYU ScholarsArchive, 2015. https://scholarsarchive.byu.edu/etd/5572.

Full text
Abstract:
Terminological dissonance is created by the inherent ambiguity of natural language and compounded by ontological specialization efforts within fields. Terminological dissonance creates high-risk miscommunications in two key areas: within Information Technology as a singular domain, and also between IT and other fields in interdisciplinary projects. A comprehensive literature review revealed a lack of previous effort to acknowledge or solve problems of terminological dissonance within Information Technology. This research provides a comprehensive overview and definition of the terminology mediation space as it relates to Information Technology and adjacent fields. An analysis and verification of the contents and implementation of the terminology mediation tool Termediator have also been created as part of this research. The Termediator tool's conceptual model is further validated through the analysis of its synonymous and polysemous clustering methods and results.
APA, Harvard, Vancouver, ISO, and other styles
44

Foster, Geoff. "On-line help : an aid to effective use of software /." [St. Lucia, Qld.], 2006. http://adt.library.uq.edu.au/public/adt-QU20060329.152932/index.html.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Chen, Simin. "Topic discovery and document similarity via pre-trained word embeddings." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-235537.

Full text
Abstract:
Throughout history, humans have continued to generate an ever-growing volume of documents about a wide range of topics. We now rely on computer programs to automatically process these vast collections of documents in various applications. Many applications require a quantitative measure of document similarity. Traditional methods first learn a vector representation for each document using a large corpus, and then compute the distance between two document vectors as the document similarity. In contrast to this corpus-based approach, we propose a straightforward model that directly discovers the topics of a document by clustering its words, without the need of a corpus. We define a vector representation called normalized bag-of-topic-embeddings (nBTE) to encapsulate these discovered topics and compute the soft cosine similarity between two nBTE vectors as the document similarity. In addition, we propose a logistic word importance function that assigns words different importance weights based on their relative discriminating power. Our model is efficient in terms of the average time complexity. The nBTE representation is also interpretable as it allows for topic discovery of the document. On three labeled public data sets, our model achieved k-nearest neighbor classification accuracy comparable to that of five state-of-the-art baseline models. Furthermore, from these three data sets, we derived four multi-topic data sets where each label refers to a set of topics. Our model consistently outperforms the state-of-the-art baseline models by a large margin on these four challenging multi-topic data sets. These works together provide answers to the research question of this thesis: Can we construct an interpretable document representation by clustering the words in a document, and effectively and efficiently estimate the document similarity?
APA, Harvard, Vancouver, ISO, and other styles
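The soft cosine similarity that the Chen abstract computes between nBTE vectors can be illustrated with a minimal sketch. This is not the thesis implementation: the function names, the toy topic embeddings, and the choice to clip negative cross-topic similarities at zero are all our assumptions for illustration.

```python
import math

def cosine(u, v):
    """Plain cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(u, v))
    norm_u = math.sqrt(sum(x * x for x in u))
    norm_v = math.sqrt(sum(x * x for x in v))
    return dot / (norm_u * norm_v)

def soft_cosine(a, topics_a, b, topics_b):
    """Soft cosine similarity between two weighted topic representations.

    a, b         -- topic weights per document (e.g. normalized cluster sizes)
    topics_a/b   -- topic embedding vectors (e.g. word-cluster centroids)
    Cross-topic similarity is the cosine of the topic embeddings,
    clipped at zero (an illustrative choice, not from the thesis).
    """
    def inner(x, tx, y, ty):
        # Weighted sum over all topic pairs, so related (not just
        # identical) topics contribute to the similarity.
        return sum(x[i] * y[j] * max(0.0, cosine(tx[i], ty[j]))
                   for i in range(len(x)) for j in range(len(y)))

    denom = (math.sqrt(inner(a, topics_a, a, topics_a)) *
             math.sqrt(inner(b, topics_b, b, topics_b)))
    return inner(a, topics_a, b, topics_b) / denom
```

Two documents whose weights sit on identical topic embeddings score 1.0, documents with orthogonal topic embeddings score 0.0, and partially related topics contribute in proportion to how close their embeddings are.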
46

Blom, Nicolaas Willem. "Extended Information processing of Technology Education learners during the early phases of the design process." Diss., University of Pretoria, 2015. http://hdl.handle.net/2263/52075.

Full text
Abstract:
The purpose of my study was to describe the manner in which Grade 9 technology learners typically accessed and used information sources during the early phases of their design processes. I did this by using an Extended Cognition framework to study the internal and external information sources that learners typically accessed and used in a technology learning environment. Theoretically, my study aimed to develop the application of the Extended Cognition Theory in an educational context. In this manner, my study adds to the scarce literature on design cognition in technology education. The methodological purpose of this study was to adapt conventional Think Aloud Protocol methods (TAPS) to investigate groups of learners in their natural technology learning environment. This methodology enabled me to understand the link between theoretical and empirical approaches of design cognition. As such, I was able to conceptualise practical guidelines that could be used by technology lecturers and teachers for the effective facilitation of the early phases of design processes. The conceptual framework of my study was adapted from empirical studies of expert designers, and is underpinned by the Information Processing and Embodiment theories. I followed a concurrent mixed methods approach and employed a case study design applying pragmatic assumptions. The target population for this study comprised Grade 9 learners based in a low socio-economic region. Eight female participants were purposefully selected and conveniently clustered into three groups: two groups of three participants, and one group of two participants. Data collection therefore consisted of three separately video recorded protocol studies. I was able to elicit the information access and usage activities of the participants by providing them with a design task that I adapted from a prescribed textbook, as suggested for technology by the Department of Basic Education. 
During the video recordings of the participants’ design processes, I was able to collect concurrent verbal, visual and temporal data types. I analysed the data according to a five-level framework, also adapted from the empirical investigations of expert designers. During my quantitative data analysis, I identified the occurrences of each group of participants’ cognitive phases, as well as the occurrences of their information access and use activities during each cognitive phase. On the one hand, problem structuring did not occur regularly. However, during their problem structuring activities, the participants mainly accessed and used instructions contained in the design task and pictures. On the other hand, the participants predominantly exhibited problem solving cognitive phases in which they mainly accessed and used external information sources including pictures and sketches. During my qualitative data analysis, I traced how the participants transformed their understanding of the design problem and possible design solutions. During problem structuring, the participants accessed information about the users’ needs, the design context and design objectives by perceiving and recognising useful information in their design task instructions and pictures. Information use during problem structuring was evidenced when the participants transformed information that they accessed to propose design objectives, constraints and requirements. Accessed information was typically transformed when the participants: (1) Read/evaluated information from the design task; (2) Evaluated the problem/context; (3) Evaluated/Elaborated information about the design objective; (4) Justified a design requirement; (5) Proposed/justified a design constraint; (6) Evaluated/Elaborated available resources in the environment; (7) Elaborated on the design context. 
During problem solving, the participants accessed information about the function, behaviour and structure of possible design solutions by perceiving and recognising useful information, primarily in their sketches, 3D models and pictures. Information use during problem solving was evidenced when the participants transformed accessed information to propose design specifications and limitations. Accessed information was typically transformed when the participants: (1) Evaluated existing solutions; (2) Proposed design limitations; (3) Modified existing solutions; (4) Proposed/evaluated a design idea; (5) Elaborated on a design idea; (6) Justified ideas; (7) Qualified ideas; (8) Modified previous design ideas. From the findings of my study, I could develop practical guidelines for current and future technology teachers. These guidelines should help technology teachers to effectively facilitate information rich design thinking during the early phases of learners’ design processes. I conclude this study by reiterating that the participants’ design cognition was enhanced by the availability of various information sources. This implies that technology teachers play a central role as information providers and mediators. Failure to provide adequate information sources during design tasks might inhibit learners’ development of the proficient design skills intended by the technology Curriculum and Assessment Policy Statement (CAPS) document.
Dissertation (MEd)--University of Pretoria, 2015.
National Research Foundation (NRF)
Science, Mathematics and Technology Education
MEd
Unrestricted
APA, Harvard, Vancouver, ISO, and other styles
47

劉淑賢 and Shuk-yin Josephine Lau. "Management of public housing estates: competitiveness and information technology." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2002. http://hub.hku.hk/bib/B31968909.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Smith, Michael Alan. "An empirical investigation of the determinants of information systems outsourcing." Diss., Georgia Institute of Technology, 1998. http://hdl.handle.net/1853/29455.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Zhou, Wenqi. "Three Essays on the Economic Impact of Online Word-of-Mouth in Online Software Market." Thesis, The George Washington University, 2013. http://pqdtopen.proquest.com/#viewpdf?dispub=3557958.

Full text
Abstract:

The advances in information technologies and the Internet significantly promote the prosperous growth of electronic commerce in recent years. Simply surfing the Internet allows consumers to conveniently explore endless product choices and a flood of related product information. As one of the most important sources of product information, Word-of-Mouth (WOM) helps consumers assess product quality, reduces decision risk without physical trials, and thus facilitates locating their best matches. WOM volume on the Internet has been rising fast while the Internet also unprecedentedly enhances the reach of WOM. As a result, online WOM could significantly influence consumer decision-making. More and more firms are embracing and applying online WOM marketing as a complementary strategy for advertising to increase sales. Even so, not much has been known regarding the mechanism underlying the WOM effect on online consumer behavior. A deeper understanding of the economic impact of online WOM is needed to provide practitioners insightful guidance on information systems design and the allocation of firm resources to more effectively develop online WOM marketing strategies.

This dissertation seeks to shed light on online WOM effect from three angles using a three-essay structure. The first essay of this dissertation investigates how a demand side factor (online user-generated WOM) interplays with a supply side factor (product variety) to affect a product's popularity in the online market where product choices are abundant and consumers can easily access product information. Extant research primarily looks into either demand side or supply side justifications for the heterogeneity of consumption pattern. Alternatively, this study highlights that consumers' reliance on online user reviews to choose products is significantly influenced by the quantity of products available.

In addition, this dissertation also explores the differential impact of online WOM created by different types of reviewers on online user choices. While consumers are widely exposed to both online user reviews and professional reviews, those two sources of WOM information are generally believed to influence user choices independently. However, an in-depth mediation analysis conducted in the Bayesian framework shows that professional reviews influence online user choices not only directly but also indirectly through the volume of online user reviews. This study also proposes a more robust hierarchical structure to model the interaction effect between online user reviews and product variety, refining the first essay.

Following this line of inquiry, this dissertation further studies the impact of the distribution of online WOM across retailing and third-party websites on consumers' purchasing decisions. With the flood of WOM now available on the Internet, consumers are able to reach almost every piece of online WOM information relevant to the products they are interested in. The distribution of WOM information across the Internet may accordingly influence consumers' search costs for product information and affect their final decisions. This research has found empirical evidence that both the dispersion of WOM volume and the variation of WOM valence across the Internet significantly influence online retail sales.

APA, Harvard, Vancouver, ISO, and other styles
50

Foote, Alan Richard. "Exploring Knowledge Management Models on Information Technology Projects." ScholarWorks, 2016. https://scholarworks.waldenu.edu/dissertations/2028.

Full text
Abstract:
One way an organization manages the knowledge of its people is in information technology (IT) projects. Organizations develop IT projects for many socially responsible reasons, including improved health care services and better community services. IT projects do not always achieve the goals of the organization when the knowledge of the stakeholders is not managed for these objectives. The purpose of this study was to address the use of knowledge management (KM) in project management (PM) to improve the success of IT projects in achieving organizational goals. The research questions were based on KM, including its tools and techniques, to improve the success rate of IT projects. The conceptual framework included the project knowledge management (PKM) model, which helped identify the knowledge sharing in IT software projects for a local insurance company in Baltimore, Maryland. Interview data were collected from 26 IT project stakeholders about KM in PM. Analysis revealed 4 themes for managing knowledge toward IT project success: in the requirements process, the code development process, the testing process, and the helpdesk process. Each of the 4 processes used different KM repositories and face-to-face tools. Improving the rate of successful IT projects benefits organizations and society with better products and services at lower costs. This study may affect social change by providing information for managers of other organizations about achieving success of their IT projects.
APA, Harvard, Vancouver, ISO, and other styles

To the bibliography