To see the other types of publications on this topic, follow the link: Contributions in science and technology.

Dissertations / Theses on the topic 'Contributions in science and technology'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Contributions in science and technology.'

Next to every source in the list of references is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Afonso, Ana Sofia. "Interactive science and technology museum and citizenship : contributions to the development of an understanding of acoustics, sound, and hearing." Thesis, University of Reading, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.423828.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Nkhata, Bentry. "Career and Technical Education (CTE) Directors' Experiences with CTE's Contributions to Science, Technology, Engineering, and Math (STEM) Education Implementation." Diss., Virginia Tech, 2013. http://hdl.handle.net/10919/24203.

Abstract:
In spite of the large overlap in the goals of CTE and STEM education, there is little evidence of the role(s) CTE delivery systems, programs, curricula, or pedagogical strategies can play in advancing STEM education. Because of their responsibilities, especially for organizational and instructional leadership, school district CTE directors could illuminate our understanding of linkages between CTE and STEM education. The purpose of this study was to analyze the experiences of school district CTE directors to better understand these linkages. The researcher used a qualitative research design to gain understanding of the local CTE directors' experiences. Data were collected using face-to-face semi-structured interviews with 13 participants. The data were analyzed using a continuous process of coding, recoding, memo-writing and making comparisons across the transcripts. Among the results of the study were that definitions of STEM education varied, but all had aspects of an integrated approach and of using real-world applications. The data revealed a number of contributions made by CTE to assist in STEM education implementation, including context for learning, multiple pathways, a platform for program delivery, and administrative leadership and framework. It was also found that strategies for increasing the visibility of CTE's contributions to the advancement of STEM education could include marketing CTE, demonstrating the value of CTE, enhancing curriculum and instruction, and rebranding CTE. Conclusions made in the study include, but are not limited to, the fact that there are tremendous reciprocal benefits that CTE and STEM education can provide for one another, given strong, mutual, and intentional linkages between the two, and that establishing a state-level STEM education coordinator position would provide much-needed leadership at the local and state levels.
Recommendations for practice that were made in the study include, but are not limited to, continuing to establish Virginia Governor's Academies throughout the Commonwealth of Virginia by aligning STEM education with CTE and continuing to support, at the highest level, intentional and mutual collaborative initiatives between STEM education and CTE. A recommendation for future research includes conducting a longitudinal study on the impact that Virginia Governor's Academies are having on student morale, growth, learning, and future endeavor.
Ph. D.
3

Donolo, Rosa Marina. "Contributions to geovisualization for territorial intelligence." Thesis, Lyon, INSA, 2014. http://www.theses.fr/2014ISAL0075/document.

Abstract:
This PhD research work is placed in the domain of Geovisualization used to implement Territorial Intelligence and decision support systems. This research work was born through the establishment of an agreement between Tor Vergata University, Rome, and INSA (Institut National des Sciences Appliquées), Lyon. The co-supervision of this thesis was born from the necessity of a multidisciplinary approach to the research topic, taking advantage of the skills in urban planning, environment and territory modeling at the Geoinformation doctoral school of Tor Vergata University, and taking advantage of the skills in Spatial Information Systems and Geovisualization at the LIRIS Laboratory of INSA. The motivation that led us to deal with this research topic was the perception of a lack of systematic methods and universally approved empirical experiments in the data visualization domain. The experiments should consider different typologies of data, different environmental contexts, different indicators and methods of representation, etc., in order to support expert users in decision making, in urban and territorial planning and in the implementation of environmental policies. In modern societies, we have to deal with a great amount of data every day, and Geovisualization permits the management, exploration and display of big and heterogeneous data in an interactive way that facilitates decision making processes. Geovisualization gives the user the opportunity to change the visual appearance of the maps, to explore different layers of data, and to let citizens highlight problems in some areas. Despite these advantages, one of the most common problems in Information Visualization is to represent data in a clear and comprehensible way.
Spatial data have a complex structure that includes a spatial component, thematic attributes, and often a temporal component. Currently there are limited scientific foundations to guide researchers in the visual design of spatial data, and there are limited systematic and standard methods to evaluate the effectiveness of the proposed solutions. In this PhD research work, some contributions are provided to the creation of a systematic assessment method to evaluate and develop effective geovisualization displays. An empirical evaluation test is proposed to assess the effectiveness of some map displays, analyzing the use of three elements of visual design: 1. the spatial indicators to be represented and their context of visualization, 2. the physical dimensions of map displays, 3. the visual variables to represent different layers of information.
4

Johnson, Dawn Rene. "Sense of belonging among women of color in science, technology, engineering, and math majors: investigating the contributions of campus racial climate perceptions and other college environments." College Park, Md.: University of Maryland, 2007. http://hdl.handle.net/1903/7723.

Abstract:
Thesis (Ph. D.) -- University of Maryland, College Park, 2007.
Thesis research directed by: Dept. of Counseling and Personnel Services. Title from t.p. of PDF. Includes bibliographical references. Published by UMI Dissertation Services, Ann Arbor, Mich. Also available in paper.
5

Eshelman-Haynes, Candace Lee. "Visual contributions to spatial perception during a remote navigation task." Wright State University / OhioLINK, 2009. http://rave.ohiolink.edu/etdc/view?acc_num=wright1247510065.

6

Cregan, Anne (Computer Science & Engineering, Faculty of Engineering, UNSW). "Weaving the semantic web: Contributions and insights." University of New South Wales, Computer Science & Engineering, 2008. http://handle.unsw.edu.au/1959.4/42605.

Abstract:
The semantic web aims to make the meaning of data on the web explicit and machine processable. Harking back to Leibniz in its vision, it imagines a world of interlinked information that computers `understand' and `know' how to process based on its meaning. Spearheaded by the World Wide Web Consortium, the ontology languages OWL and RDF form the core of the current technical offerings. RDF has successfully enabled the construction of virtually unlimited webs of data, whilst OWL gives the ability to express complex relationships between RDF data triples. However, the formal semantics of these languages limit themselves to that aspect of meaning that can be captured by mechanical inference rules, leaving many open questions as to other aspects of meaning and how they might be made machine processable. The Semantic Web has faced a number of problems that are addressed by the included publications. Its germination within academia and logical semantics has seen it struggle to become familiar, accessible and implementable for the general IT population, so an overview of semantic technologies is provided. Faced with competing `semantic' languages, such as the ISO's Topic Map standards, a method for building ISO-compliant Topic Maps in the OWL DL language has been provided, enabling them to take advantage of the more mature OWL language and tools. Supplementation with rules is needed to deal with many real-world scenarios, and this is explored as a practical exercise. The available syntaxes for OWL have hindered domain experts in ontology building, so a natural language syntax for OWL designed for use by non-logicians is offered and compared with similar offerings.
In recent years, proliferation of ontologies has resulted in far more than are needed in any given domain space, so a mechanism is proposed to facilitate the reuse of existing ontologies by giving contextual information and leveraging social factors to encourage wider adoption of common ontologies and achieve interoperability. Lastly, the question of meaning is addressed in relation to the need to define one's terms and to ground one's symbols by anchoring them effectively, ultimately providing the foundation for evolving a `Pragmatic Web' of action.
7

Lategan, Laetus O. K. "A conceptual analysis of a university of technology and its contribution to research and development." Interim : Interdisciplinary Journal, Vol 7, Issue 2: Central University of Technology Free State Bloemfontein, 2008. http://hdl.handle.net/11462/389.

Abstract:
Published Article
This paper provides a conceptual overview of one of the three types of university in the South African higher education band, namely the university of technology. The contention of the paper is that universities of technology should have the same core activities as the general or classical university, that is teaching, research and service. The differences between the types of university exist on a conceptual level and therefore also in their approach to science in general. The conceptual analysis illustrates how the university of technology contributes towards research and development. It is for this reason that this university type should be welcomed by the university sector. Its overall contribution to what a university is should be acknowledged.
8

Clay, Paul F. "Factors contributing to user choice between codification and personalization-based knowledge management systems: a task-technology fit perspective." [Bloomington, Ind.]: Indiana University, 2006. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:3219901.

Abstract:
Thesis (Ph.D.)--Indiana University, Kelley School of Business, 2006.
Source: Dissertation Abstracts International, Volume: 67-06, Section: A, page: 2227. Adviser: Alan R. Dennis. "Title from dissertation home page (viewed June 21, 2007)."
9

Saad, Clément. "Quelques contributions dans les réseaux de capteurs sans fil : Localisation et Routage." Phd thesis, Université d'Avignon, 2008. http://tel.archives-ouvertes.fr/tel-00364914.

Abstract:
Just as the Internet and wireless communications did a few decades ago, the advent of sensor networks is about to revolutionize our way of life. But before these networks reach a degree of adoption comparable to that of mobile phones, a number of problems must be solved. On top of the traditional constraints of ad hoc networks come the very strict limits imposed by the hardware characteristics of sensors, such as computing power, memory, and above all energy supply, which make existing algorithms unsuitable. This thesis addresses two problems: localization and routing. For localization, a family of three methods is proposed to estimate sensor positions with associated error bounds, starting from the exact locations known for some sensors and depending on their measurement capabilities. This family is then extended to mobile sensor networks. Then comes the routing problem: delivering a message from a sensor to a base station when an event is detected. So-called geographic routing strategies rely on sensor positions that are assumed to be exact; in practice, these positions are rarely accurate. This thesis proposes two routing algorithms, for static and mobile sensor networks respectively, that work with estimated positions, making these methods compatible with the localization algorithms.
10

Edwards, Madhuri M. "Identifying Factors Contributing Towards Information Security Maturity in an Organization." NSUWorks, 2018. http://nsuworks.nova.edu/gscis_etd/1027.

Abstract:
Information security capability maturity (ISCM) is a journey towards accurate alignment of business and security objectives, security systems, processes, and tasks integrated with business-enabled IT systems, security enabled organizational culture and decision making, and measurements and continuous improvements of controls and governance comprising security policies, processes, operating procedures, tasks, monitoring, and reporting. Information security capability maturity may be achieved in five levels: performing but ad-hoc, managed, defined, quantitatively governed, and optimized. These five levels need to be achieved in the capability areas of information integrity, information systems assurance, business enablement, security processes, security program management, competency of security team, security consciousness in employees, and security leadership. These areas of capabilities lead to achievement of technology trustworthiness of security controls, integrated security, and security guardianship throughout the enterprise, which are primary capability domains for achieving maturity of information security capability in an organization. There are many factors influencing the areas of capabilities and the capability domains for achieving information security capability maturity. However, there is little existing study done on identifying the factors that contribute to achievement of the highest level of information security capability maturity (optimized) in an organization. This research was designed to contribute to this area of research gap by identifying the factors contributing to the areas of capabilities for achieving the highest level of information security capability maturity. The factors were grouped under the eight capability areas and the three capability domains in the form of an initial structural construct. 
This research was designed to collect data on all the factors using an online structured questionnaire and analyzing the reliability and validity of the initial structural construct following the methods of principal components analysis (PCA), Cronbach Alpha reliability analysis, confirmatory factor analysis (CFA), and structural equation modeling. A number of multivariate statistical tests were conducted on the data collected regarding the factors to achieve an optimal model reflecting statistical significance, reliability, and validity. The research was conducted in four phases: expert panel and pilot study (first phase), principal component analysis (PCA) and reliability analysis (RA) of the factor scales (second phase), confirmatory factor analysis (CFA) using LISREL (third phase), and structural equation modeling (SEM) using LISREL (fourth phase). The final model subsequent to completing the four phases reflected acceptance or rejection of the eleven hypotheses defined in the initial structural construct of this study. The final optimized model was obtained with the most significant factors loading on the capability areas of information integrity, information security assurance, business enablement, security process maturity, security program management, competency of security team, security conscious employees, and security leadership, including the most significant factors loading the three capability domains of security technology trustworthiness, security integration, and security guardianship. All the eleven hypotheses were accepted as part of the optimal structural construct of the final model. The model provides a complex integrated framework of information security maturity requiring multi-functional advancements and maturity in processes, people, and technology, and organized security program management and communications fully integrated with the business programs and communications. 
Information security maturity is concluded as a complex function of multiple maturity programs in an organization leading to organized governance structures, multiple maturity programs, leadership, security consciousness, and risk-aware culture of employees.
11

Marzinsky, Maria. "Teaching and curricular practices contributing to success in gateway courses for freshman and sophomore students in math, science, engineering, and technology (MSTE) majors at a large public research university: A longitudinal study." Diss., The University of Arizona, 2002. http://hdl.handle.net/10150/280026.

Abstract:
This dissertation examined teaching and curricular practices that have had an impact on the academic achievement of freshman and sophomore students taking introductory courses in math, science, technology, and engineering (MSTE). A large proportion of undergraduate students intending to pursue MSTE majors switch to other majors after taking introductory courses in math, biology, physics and other courses that constitute a requirement for science and engineering degrees (Astin, 1993). This investigation utilized quantitative and qualitative methods to assess the academic performance of students at a large public research university. In the quantitative phase, two efficiency indices were computed for eleven course sequences for MSTE majors using student cohorts from 1993-1999. The ICE index is the average number of attempts it takes a cohort of students to pass an introductory (gateway) course. An index closer to 1 indicates an efficient course, since an index of 1 means that all students passed the course on the first attempt. The ICE2 index measures the rate of success of students taking the second course in a gateway sequence. The ICE2 index ranges from 0 to 1. An ICE2 index of 1 for the targeted gateway course is ideal, indicating that every student who passed the first gateway course took and passed the next course in the curricular sequence with a grade of A, B, or C on the first attempt. The qualitative phase of this study consisted of twelve interviews with the faculty and instructors who teach the same courses analyzed in the quantitative phase. In addition, four faculty members who held administrative positions in the MSTE disciplines were interviewed. The purpose of the interviews was to unveil teaching and curricular practices that have had an impact on students' academic achievement.
The resulting trends of the efficiency indices failed to demonstrate an improvement in students' academic achievement as measured by the indices, except for three gateway course sequences: Computer Science, Biology, and Management Information Systems. The qualitative phase helped to unravel a myriad of not only faculty's innovations and achievements but also their concerns surrounding practices regarding the introductory courses.
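As a hedged illustration (not taken from the dissertation; the cohort numbers below are hypothetical), the two efficiency indices described in the abstract might be computed as:

```python
def ice_index(attempts_to_pass):
    """Average number of attempts a cohort needed to pass a gateway course.
    A value of 1.0 is ideal: every student passed on the first attempt."""
    return sum(attempts_to_pass) / len(attempts_to_pass)

def ice2_index(passed_first, passed_second_first_try_abc):
    """Fraction of students who, having passed the first gateway course,
    passed the next course in the sequence with an A, B, or C on the
    first attempt. Ranges from 0 to 1; 1.0 is ideal."""
    return passed_second_first_try_abc / passed_first

# Hypothetical cohort: attempts each of five students needed to pass
print(ice_index([1, 1, 1, 2, 3]))   # 1.6
# Of 100 students who passed the first course, 75 passed the follow-on
# course with an A/B/C on the first attempt
print(ice2_index(100, 75))          # 0.75
```

The function names and cohort figures are illustrative only; the dissertation itself defines the indices but does not publish an implementation.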
12

Maguire, John F. "Contributions to materials science and engineering." Thesis, Ulster University, 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.515891.

13

Chada, Daniel de Magalhães. "From cognitive science to management science: two computational contributions." reponame:Repositório Institucional do FGV, 2011. http://hdl.handle.net/10438/17053.

Abstract:
This work is composed of two contributions. One borrows from the work of Charles Kemp and Joshua Tenenbaum, concerning the discovery of structural form: their model is used to study the Business Week Rankings of U.S. Business Schools, and to investigate how other structural forms (structured visualizations) of the same information used to generate the rankings can bring insights into the space of business schools in the U.S., and into rankings in general. The other essay is purely theoretical in nature. It is a study to develop a model of human memory that does not exceed our (human) psychological short-term memory limitations. This study is based on Pentti Kanerva’s Sparse Distributed Memory, in which human memories are registered into a vast (but virtual) memory space, and this registration occurs in massively parallel and distributed fashion, in ideal neurons.
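As a minimal, hedged sketch (all parameters are assumed, and this is not Chada's actual model), a Kanerva-style Sparse Distributed Memory writes each word in parallel to every hard location whose address lies within a Hamming radius of the write address, and reads by a per-bit majority vote over the same locations:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, R = 256, 1000, 128     # word length, hard locations, activation radius (assumed)
hard = rng.integers(0, 2, (M, N))       # fixed random hard-location addresses
counters = np.zeros((M, N), dtype=int)  # one signed counter per location per bit

def active(addr):
    # All hard locations within Hamming distance R of the address
    return np.count_nonzero(hard != addr, axis=1) <= R

def write(addr, word):
    a = active(addr)
    # Massively parallel, distributed update: +1 where the bit is 1, -1 where 0
    counters[a] += np.where(word == 1, 1, -1)

def read(addr):
    a = active(addr)
    # Per-bit majority vote over the counters of all active locations
    return (counters[a].sum(axis=0) > 0).astype(int)

w = rng.integers(0, 2, N)
write(w, w)                        # autoassociative: address with the word itself
print(np.array_equal(read(w), w))  # True: the stored word is recovered
```

With a single stored word, every active location holds the same signed pattern, so the majority vote recovers it exactly; the interesting behavior Kanerva analyzes arises when many words overlap in the same counters.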
14

Conversy, Stéphane. "Contributions to the science of controlled transformation." Habilitation à diriger des recherches, Université Paul Sabatier - Toulouse III, 2013. http://tel.archives-ouvertes.fr/tel-00853192.

Abstract:
My research activities pertain to "Informatics" and in particular "Interactive Graphics" i.e. dynamic graphics on a 2D screen that a user can interact with by means of input devices such as a mouse or a multitouch surface. I have conducted research on Interactive Graphics along three themes: interactive graphics development (how should developers design the architecture of the code corresponding to graphical interactions?), interactive graphic design (what graphical interactions should User Experience (UX) specialists use in their system?) and interactive graphics design process (how should UX specialists design? Which method should they apply?) I invented the MDPC architecture that relies on Picking views and Inverse transforms. This improves the modularity of programs and improves the usability of the specification and the implementation of interactive graphics thanks to the simplification of description. In order to improve the performance of rich-graphic software using this architecture, I explored the concepts of graphical compilers and led a PhD thesis on the topic. The thesis explored the approach and contributed both in terms of description simplification and of software engineering facilitation. Finally, I have applied the simplification of description principles to the problem of shape covering avoidance by relying on new efficient hardware support for parallelized and memory-based algorithms. Together with my colleagues, we have explored the design and assessment of expanding targets, animation and sound, interaction with numerous tangled trajectories, multi-user interaction and tangible interaction. I have identified and defined Structural Interaction, a new interaction paradigm that follows the steps of the direct and instrumental interaction paradigms. I directed a PhD thesis on this topic and together with my student we designed and assessed interaction techniques for structural interaction. 
I was involved in the design of the "Technology Probes" concept i.e. runnable prototypes to feed the design process. Together with colleagues, I designed VideoProbe, one such Technology Probe. I became interested in more conceptual tools targeted at graphical representation. I led two PhD theses on the topic and explored the characterization of visualization, how to design representations with visual variables or ecological perception and how to design visual interfaces to improve visual scanning. I discovered that those conceptual tools could be applied to programming languages and showed how the representation of code, be it textual or "visual" undergoes visual perception phenomena. This has led me to consider our discipline as the "Science of Controlled Transformations". The fifth chapter is an attempt at providing this new account of "Informatics" based on what users, programmers and researchers actually do with interactive systems. I also describe how my work can be considered as contributing to the science of controlled transformations.
15

Eghbali, Amir. "Contributions to Reconfigurable Filter Banks and Transmultiplexers." Doctoral thesis, Linköpings universitet, Elektroniksystem, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-60929.

Abstract:
A current focus among communication engineers is to design flexible radio systems to handle services among different telecommunication standards. Thus, low-cost multimode terminals will be crucial building blocks for future generations of multimode communications. Here, different bandwidths, from different telecommunication standards, must be supported. This can be done using multimode transmultiplexers (TMUXs), which allow different users to share a common channel in a time-varying manner. These TMUXs allow bandwidth-on-demand. Each user occupies a specific portion of the channel whose location and width may vary with time. Another focus among communication engineers is to provide various wideband services accessible to everybody everywhere. Here, satellites with high-gain spot beam antennas, on-board signal processing, and switching will be a major complementary part of future digital communication systems. Satellites provide global coverage, and customers only need to install a satellite terminal and subscribe to the service. Efficient utilization of the available limited frequency spectrum calls for on-board signal processing to perform flexible frequency-band reallocation (FFBR). This thesis outlines the design and realization of reconfigurable TMUX and FFBR structures which allow dynamic communication scenarios with simple software reconfigurations. In both structures, the system parameters are determined in advance. For these parameters, the required filter design problems are solved only once. Dynamic communications, with users having different time-varying bandwidths, are then supported by adjusting some multipliers, commutators, or a channel switch. These adjustments do not require hardware changes and can be performed online. However, the filter design problem is solved offline.
The thesis provides various illustrative examples and it also discusses possible applications of the proposed structures in the context of other communication scenarios, e.g., cognitive radios.
16

Recordon, Anne. "The contribution of medio-lateral balance during activities in sitting and standing in hemiplegic subjects: a dissertation [thesis] submitted to Auckland University of Technology in partial fulfilment of the requirements for the degree of Master of Health Science, 2002." Full thesis. Abstract, 2002. http://puka2.aut.ac.nz/ait/theses/RecordonA.pdf.

17

Bazile, Emmanuel Patrick. "Electronic Medical Records (EMR): An Empirical Testing of Factors Contributing to Healthcare Professionals’ Resistance to Use EMR Systems." NSUWorks, 2016. http://nsuworks.nova.edu/gscis_etd/964.

Abstract:
The benefits of using electronic medical records (EMRs) have been well documented; however, despite numerous financial benefits and cost reductions being offered by the federal government, some healthcare professionals have been reluctant to implement EMR systems. In fact, prior research provides evidence of failed EMR implementations due to resistance on the part of physicians, nurses, and clinical administrators. In 2010, only 25% of office-based physicians had basic EMR systems and only 10% had fully functional systems. One of the hindrances believed to be responsible for the slow implementation rates of EMR systems is resistance from healthcare professionals not truly convinced that the system could be of substantive use to them. This study used quantitative methods to measure the relationships between six constructs, namely computer self-efficacy (CSE), perceived complexity (PC), attitude toward EMR (ATE), peer pressure (PP), anxiety (AXY), and resistance to use of technology (RES), which are predominantly found in the literature with mixed results. Moreover, they may play a significant role in exposing the source of resistance that exists amongst American healthcare professionals when using Electronic Medical Records (EMR) Systems. This study also measured five covariates: age, role in healthcare, years in healthcare, gender, and years of computer use. This study used Structural Equation Modeling (SEM) and an analysis of covariance (ANCOVA) to address the research hypotheses proposed. The survey instrument was based on existing construct measures that had been previously validated in the literature, though not in a single model. Thus, construct validity and reliability were assessed with the help of subject matter experts (SMEs) using the Delphi method. Moreover, a pilot study of 20 participants was conducted before the full data collection was done, where some minor adjustments to the instrument were made.
The analysis consisted of SEM using the R software and programming language. A Web-based survey instrument consisting of 45 items was used to assess the six constructs and demographic data. The data were collected from healthcare professionals across the United States. After data cleaning, 258 responses were found to be viable for further analysis. Resistance to EMR systems amongst healthcare professionals was examined through a quantitative methodology and a cross-sectional design measuring the self-report survey responses of medical professionals. After the SEM was performed, the model had an overall R² of 0.78, which indicated that 78% of the variability in RES could be accounted for by CSE, PC, ATE, PP, and AXY. The SEM analysis of AXY and RES illustrated a path that was highly significant (β = 0.87, p < .001), while the other constructs' impacts on RES were not significant. No covariates, besides years of computer use, were found to show any significant differences. This research study has numerous implications for practice and research. The identification of significant predictors of resistance can assist healthcare administrators and EMR system vendors in developing ways to improve the design of the system. The results also help identify other aspects of EMR system implementation and use that will reduce resistance by healthcare professionals. From a research perspective, the identification of specific attitudinal, demographic, professional, or knowledge-related predictors of resistance through the SEM and ANCOVA could provide future researchers with an indication of where to focus additional research attention in order to obtain more precise knowledge about the roots of physician resistance to using EMR systems.
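The variance-explained figure reported in this abstract can be illustrated with a small regression sketch. This is not the study's SEM or its data: the sample below is simulated, and the coefficient values (AXY dominating at roughly 0.87) are borrowed from the reported results only to seed the simulation.

```python
import numpy as np

# Illustrative only: synthetic data standing in for the study's constructs
# (CSE, PC, ATE, PP, AXY) predicting resistance (RES). Not the study's data.
rng = np.random.default_rng(0)
n = 258                                          # same sample size, values simulated
X = rng.normal(size=(n, 5))                      # five construct scores
beta = np.array([0.1, 0.05, -0.1, 0.05, 0.87])   # AXY dominates, as reported
res = X @ beta + rng.normal(scale=0.4, size=n)   # RES with noise

# Fit ordinary least squares and compute R^2 (proportion of variance explained)
Xd = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(Xd, res, rcond=None)
pred = Xd @ coef
r2 = 1 - np.sum((res - pred) ** 2) / np.sum((res - res.mean()) ** 2)
print(round(r2, 2))
```

An R² near 0.8 here simply reflects how the simulation was seeded; in the study the 0.78 was estimated from real responses.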
APA, Harvard, Vancouver, ISO, and other styles
18

Wagner, Samuel Joseph. "Contributions to solving planar traveling salesman problems /." The Ohio State University, 1989. http://rave.ohiolink.edu/etdc/view?acc_num=osu1487672631600309.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Elliott, Michael A. (Michael Alfred). "Contributions to risk-informed decision making." Thesis, Massachusetts Institute of Technology, 2010. http://hdl.handle.net/1721.1/62690.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Nuclear Science and Engineering, 2010.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 104-107).
Risk-informed decision-making (RIDM) is a formal process that helps stakeholders make decisions in the face of uncertainty. At MIT, a tool known as the Analytic Deliberative Decision Making Process (ADP) has been under development for a number of years to provide an efficient framework for implementing RIDM. ADP was initially developed as a tool to be used by a small group of stakeholders, but it has now become desirable to extend ADP to an engineering scale so that it can be used by many individuals across large organizations. This dissertation identifies and addresses four challenges in extending the ADP to an engineering scale. Rigorous preference elicitation using pairwise comparisons is addressed. A new method for judging numerical scales used in these comparisons is presented along with a new type of scale. This theory is tested in an experiment involving 64 individuals, and it is found that the optimal scale is a matter of individual choice. The elicitation of expert opinion is studied and a process that adapts to the complexity of the decision at hand is proposed. This method is tested with a case study involving the choice of a heat removal technology for a new type of fission reactor. Issues related to the unique informational needs of large organizations are investigated and new tools to handle these needs are developed. Finally, difficulties with computationally intensive modeling and simulation are identified and a new method of uncertainty propagation using orthogonal polynomials is explored. Using a code designed to investigate the LOCA behavior of a fission reactor, it is demonstrated that this new propagation method offers superior convergence over existing techniques.
by Michael A. Elliott.
Ph.D.
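Pairwise-comparison preference elicitation of the kind this dissertation studies is commonly implemented AHP-style: a reciprocal comparison matrix is reduced to a priority vector. A minimal sketch follows, with a made-up three-alternative matrix on Saaty's 1-9 scale; the choice of numerical scale is exactly what the experiment above investigates, so the scale here is an assumption.

```python
import numpy as np

# Hypothetical pairwise comparison matrix: entry A[i][j] says how strongly
# alternative i is preferred over alternative j (reciprocal by construction).
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 3.0],
    [1/5, 1/3, 1.0],
])

# Geometric-mean method: row-wise geometric means, normalised to sum to 1,
# approximate the principal eigenvector of A (the priority weights).
gm = A.prod(axis=1) ** (1 / A.shape[1])
weights = gm / gm.sum()
print(weights)  # alternative 1 receives the highest weight
```

The geometric-mean reduction is one standard choice; eigenvector methods give very similar weights for consistent matrices.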
APA, Harvard, Vancouver, ISO, and other styles
20

Guo, Danni. "Contributions to spatial uncertainty modelling in GIS : small sample data." Doctoral thesis, University of Cape Town, 2007. http://hdl.handle.net/11427/19031.

Full text
Abstract:
Includes bibliographical references.
Environmental data are very costly and difficult to collect and are often vague (subjective) or imprecise in nature (e.g. the hazard level of pollutants is classified as "harmful for human beings"). These realities in practice (fuzziness and small datasets) lead to uncertainty, which is addressed by my research objective: "To model spatial environmental data with fuzzy uncertainty, and to explore the use of small sample data in spatial modelling predictions, within Geographic Information Systems (GIS)." The methodologies underlying the theoretical foundations for spatial modelling are examined, such as geostatistics, fuzzy mathematics, Grey System Theory, and (V,·) Credibility Measure Theory. Fifteen papers, including three journal papers, were written in contribution to the developments of spatial fuzzy and grey uncertainty modelling, to which I contributed a portion of 50 to 65%. The methods and theories have been merged together in these papers, and they are applied to two datasets: PM10 air pollution data and soil dioxin data. The papers can be classified into two broad categories: fuzzy spatial GIS modelling and grey spatial GIS modelling. In fuzzy spatial GIS modelling, the fuzzy uncertainty (Zadeh, 1965) in environmental data is addressed. The thesis developed a fuzzy membership grades kriging approach by converting fuzzy subsets spatial modelling into membership grade spatial modelling. As this method develops, fuzzy membership grades kriging is placed on the foundation of credibility measure theory, and approaches a fully data-assimilated membership function in terms of the maximum fuzzy entropy principle. The variable modelling method for dealing with fuzzy data is a unique contribution to the fuzzy spatial GIS modelling literature. In grey spatial GIS modelling, spatial prediction using small sample data is addressed. 
The thesis developed a Grey GIS modelling approach, in which order-less two-dimensional spatial observations are converted into two ordered one-dimensional data sequences. The thesis papers also explored foundational problems within the grey differential equation models (Deng, 1985). It is discovered that the coupling feature of grey differential equations, together with the help of an e-similarity measure, generalises the classical GM(1,1) model into broader classes of extended GM(1,1) models, in order to fully assimilate the sample data information. The development of grey spatial GIS modelling is a creative contribution to handling small sample data.
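The classical GM(1,1) model referenced above (Deng, 1985) is compact enough to sketch. This is the standard textbook formulation, not the thesis's extended or coupled variants, and the five-point data sequence is invented to stand in for a small environmental sample.

```python
import numpy as np

def gm11(x0, steps=1):
    """Classical GM(1,1) grey forecasting for a small positive data
    sequence; a minimal sketch, not the thesis's extended models."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                           # accumulated generating operation
    z1 = 0.5 * (x1[1:] + x1[:-1])                # background values
    B = np.column_stack([-z1, np.ones(len(z1))])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]  # develop/grey coefficients
    k = np.arange(len(x0) + steps)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a  # time response
    return np.diff(x1_hat, prepend=0.0)          # restore original scale

# Hypothetical small sample: only five observations, as is typical in
# grey modelling applications.
data = [2.87, 3.28, 3.34, 3.62, 3.85]
print(gm11(data, steps=2))  # fitted values plus two forecast steps
```

With only a handful of observations, least squares on the background sequence is the entire fitting step, which is why grey models suit small samples.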
APA, Harvard, Vancouver, ISO, and other styles
21

Suchier, Henri-Maxime. "Nouvelles contributions du boosting en apprentissage automatique." Phd thesis, Université Jean Monnet - Saint-Etienne, 2006. http://tel.archives-ouvertes.fr/tel-00379539.

Full text
Abstract:
Machine learning aims to produce a hypothesis modelling a concept from examples, notably in order to predict whether new observations belong to that concept. Among learning algorithms, ensemble methods combine base (so-called "weak") hypotheses into a more accurate global hypothesis.

Boosting, and its algorithm AdaBoost, is an ensemble method that has been studied intensively for several years: its remarkable experimental performance rests on rigorous theoretical foundations. It adaptively and iteratively builds base hypotheses by focusing the learning, at each new iteration, on the examples that were difficult to learn during the preceding iterations. However, AdaBoost is relatively ill-suited to real-world data. In this thesis, we concentrate in particular on noisy data and on heterogeneous data.

With noisy data, not only can the method become very slow, but above all AdaBoost learns the data by heart, and the predictive power of the generated global hypotheses is severely degraded. We therefore turned to an adaptation of boosting for handling noisy data. Our solution exploits information from a confidence oracle that cancels out the dramatic effects of noise. We show that our new algorithm retains the theoretical properties of standard boosting. We put this new method into practice, on the one hand on numerical data, and on the other, more originally, on textual data.

For heterogeneous data, no adaptation of boosting had been proposed until now. Yet such data, characterised by multiple attributes of different natures (such as images, sound, text, etc.), are extremely common, on the web for example. We therefore developed a new boosting algorithm able to use them. Rather than combining independently boosted hypotheses, we build a new boosting scheme that lets algorithms specialised in each attribute type collaborate during learning. We prove that the exponential decrease of the errors is still guaranteed by this new model, both theoretically and experimentally.
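The reweighting scheme at the heart of AdaBoost, which this thesis adapts, can be sketched with one-dimensional threshold "stumps" as the weak hypotheses and labels in {-1, +1}. This shows only the standard algorithm, not the noise-tolerant or heterogeneous variants the thesis contributes.

```python
import numpy as np

def adaboost(x, y, rounds=10):
    """Minimal AdaBoost on 1-D data with threshold stumps as weak learners."""
    n = len(x)
    w = np.full(n, 1 / n)                    # example weights, initially uniform
    stumps = []                               # (threshold, polarity, alpha)
    for _ in range(rounds):
        best = None
        for t in x:                           # candidate thresholds
            for pol in (1, -1):
                pred = pol * np.sign(x - t + 1e-12)
                err = w[pred != y].sum()      # weighted training error
                if best is None or err < best[0]:
                    best = (err, t, pol, pred)
        err, t, pol, pred = best
        err = min(max(err, 1e-10), 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err) # weight of this weak hypothesis
        w *= np.exp(-alpha * y * pred)        # focus on hard examples
        w /= w.sum()
        stumps.append((t, pol, alpha))
    def predict(xs):
        s = sum(a * p * np.sign(np.asarray(xs) - t + 1e-12)
                for t, p, a in stumps)
        return np.sign(s)
    return predict

x = np.array([0.1, 0.2, 0.3, 0.6, 0.7, 0.9])
y = np.array([-1, -1, -1, 1, 1, 1])
clf = adaboost(x, y)
print(clf(x))  # recovers the training labels
```

The multiplicative update `w *= exp(-alpha * y * pred)` is precisely the mechanism that, on noisy data, drives the weights of mislabelled examples up and causes the memorisation the thesis addresses.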
APA, Harvard, Vancouver, ISO, and other styles
22

Wilander, John. "Contributions to Specification, Implementation, and Execution of Secure Software." Doctoral thesis, Linköpings universitet, Programvara och system, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-88330.

Full text
Abstract:
This thesis contributes to three research areas in software security, namely security requirements and intrusion prevention via static analysis and runtime detection. We have investigated current practice in security requirements by doing a field study of eleven requirement specifications for IT systems. The conclusion is that security requirements are poorly specified due to three things: inconsistency in the selection of requirements, inconsistency in level of detail, and almost no requirements on standard security solutions. A follow-up interview study addressed the reasons for the inconsistencies and the impact of poor security requirements. It shows that the projects had relied heavily on in-house security competence and that mature producers of software compensate for poor requirements in general, but not in the case of security and privacy requirements specific to the customer domain. Further, we have investigated the effectiveness of five publicly available static analysis tools for security. The test results show high rates of false positives for the tools building on lexical analysis and low rates of true positives for the tools building on syntactic and semantic analysis. As a first step toward a more effective and generic solution we propose decorated dependence graphs as a way of modeling and pattern matching security properties of code. The models can be used to characterize both good and bad programming practice as well as visually explain code properties to programmers. We have implemented a prototype tool that demonstrates how such models can be used to detect integer input validation flaws. Finally, we investigated the effectiveness of publicly available tools for runtime prevention of buffer overflow attacks. Our initial comparison showed that the best tool as of 2003 was effective against only 50% of the attacks, and there were six attack forms which none of the tools could handle. 
A follow-up study includes the release of a buffer overflow testbed which covers 850 attack forms. Our evaluation results show that the most popular, publicly available countermeasures cannot prevent all of these buffer overflow attack forms.
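The gap between lexical and semantic analysis reported above can be illustrated with a toy scanner: a purely lexical tool flags dangerous function names wherever the token appears, including inside comments, which is one source of the high false-positive rates observed. The function list and C snippet below are invented for illustration.

```python
import re

# Toy lexical security scanner: flags calls to "dangerous" C functions by
# token matching alone, with no understanding of comments or context.
DANGEROUS = re.compile(r"\b(strcpy|gets|sprintf)\s*\(")

code = """
/* strcpy(dst, src) was replaced with strlcpy below */
strlcpy(dst, src, sizeof dst);
gets(buffer);
"""

findings = [(i + 1, m.group(1))
            for i, line in enumerate(code.splitlines())
            for m in DANGEROUS.finditer(line)]
print(findings)  # line 2 is a false positive (a comment); line 4 is a real hit
```

A syntax- or semantics-aware tool would parse the code first and never see the commented-out call, at the cost of the lower true-positive rates the study measured.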
APA, Harvard, Vancouver, ISO, and other styles
23

Krohn, Jonathan Jacob Pastushchyn. "Genes contributing to variation in fear-related behaviour." Thesis, University of Oxford, 2013. http://ora.ox.ac.uk/objects/uuid:1e8e40bd-9a98-405f-9463-e9423f0a60ca.

Full text
Abstract:
Anxiety and depression are highly prevalent diseases with common heritable elements, but the particular genetic mechanisms and biological pathways underlying them are poorly understood. Part of the challenge in understanding the genetic basis of these disorders is that they are polygenic and often context-dependent. In my thesis, I apply a series of modern statistical tools to ascertain some of the myriad genetic and environmental factors that underlie fear-related behaviours in nearly two thousand heterogeneous stock mice, which serve as animal models of anxiety and depression. Using a Bayesian method called Sparse Partitioning and a frequentist method called Bagphenotype, I identify gene-by-sex interactions that contribute to variation in fear-related behaviours, such as those displayed in the elevated plus maze and the open field test, although I demonstrate that the contributions are generally small. Also using Bagphenotype, I identify hundreds of gene-by-environment interactions related to these traits. The interacting environmental covariates are diverse, ranging from experimenter to season of the year. With gene expression data from a brain structure associated with anxiety called the hippocampus, I generate modules of co-expressed genes and map them to the genome. Two of these modules were enriched for key nervous system components — one for dendritic spines, another for oligodendrocyte markers — but I was unable to find significant correlations between them and fear-related behaviours. Finally, I employed another Bayesian technique, Sparse Instrumental Variables, which takes advantage of conditional probabilities to identify hippocampus genes whose expression appears not just to be associated with variation in fear-related behaviours, but to cause variation in those phenotypes.
APA, Harvard, Vancouver, ISO, and other styles
24

Uehara, Takeyuki. "Contributions to image encryption and authentication." Access electronically, 2003. http://www.library.uow.edu.au/adt-NWU/public/adt-NWU20040920.124409/index.html.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Willig, Matthias. "Contributions to the commutation analysis of uncompensated single phase universal motors." Thesis, University of Glasgow, 2013. http://theses.gla.ac.uk/4262/.

Full text
Abstract:
In this thesis various aspects of the electromagnetic effects influencing the commutation of uncompensated single phase universal motors are investigated. An introduction to the subject as well as a review of significant literature on the subject are given. The literature review includes classical text books about commutator motors as well as more recent publications about the mathematical analysis of the commutation of universal motors. Subsequently, the analysis of the most important inductances of the machine is outlined, comprising the analytical and numerical calculation as well as the measurement of the machine inductances using different measurement techniques. Moreover, a brush model for commutation analysis of small commutator motors is presented. Equivalent circuits of the brush are established for the cases of one coil shorted and two coils shorted by one brush, and a strategy to obtain their elements is proposed. This uses a dedicated finite difference program to calculate the effective brush resistance between all pairs of terminals. The treatment of the boundary conditions is critical in this process. The resulting terminal resistances are regarded as combinations of a set of internal resistances, and this nonlinear relationship is inverted to obtain the internal resistors using a modified Newton-Raphson method. Results are presented showing the influence of anisotropy and geometry, and a simplified example of commutation analysis using the model established is given. In the next step the arcing phenomenon is analysed mathematically. Equations are given for the pre-arcing phase, the arcing phase and the post-arc oscillation. Equivalent circuits for the different phases are proposed and the equations are derived strictly from a circuit point of view. In the analysis a constant arc voltage (confirmed by experimental data and literature on the subject) and a minimum uncommutated residual current necessary for arc ignition are assumed. 
Those quantities are adopted from the reviewed literature and used in the calculations. The design of a motor test bench is described that allows the motor performance to be measured according to the principle of the reaction dynamometer. The load machine is mounted on air bearings to minimize possible torque errors in the measurements. A measured torque-speed characteristic of a universal motor is shown, as well as the torque as a function of the motor current. These measurements were carried out at reduced motor voltage to keep the shaft speed within reasonable limits. Furthermore, theoretical and experimental investigations are carried out in order to estimate how strongly certain rotor coils undergoing rapid current changes affect each other due to mutual coupling, and how the mutual coupling changes in the presence of a damping field winding. Several FEA simulations are performed in order to gain insight into the flux pattern when rotor coils are acting on each other and the field winding is allowed to impose its damping effect on the rotor coils. Simple AC measurements are performed as well as di/dt tests using a more complex oscillating circuit for measurements of the change of the di/dt of a rotor coil with and without the presence of an active field winding. Additionally, investigations are carried out in order to analyse the influence of power cord and source impedances on the ability of the field winding of an uncompensated universal motor to damp flux variations caused by the commutation process. The motor is regarded as a harmonic generator with the power cord and the source impedance acting as a load. Rotational tests are carried out with different loads connected to the field winding and the Fourier spectra of the field voltage are evaluated. In the final stage a simulation model is described that uses deductions from the previous chapters and simulates the electromagnetic behaviour of the machine including the complex problem of brush commutation. 
Measured and calculated signals suitable for validation of the model were compared in order to evaluate the accuracy of the model with regard to motor performance and commutation analysis.
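The inversion of terminal resistances into internal brush resistances is described above as a modified Newton-Raphson problem. A plain (unmodified) Newton iteration on a made-up two-parameter map sketches the idea; the map `f` and its values are hypothetical and are not the thesis's brush model.

```python
import numpy as np

def f(r):
    # Hypothetical nonlinear map from two "internal" parameters to two
    # "measured" quantities; stands in for the brush resistance network.
    return np.array([r[0] + 2 * r[1], r[0] * r[1]])

def newton_invert(f, target, r0, tol=1e-10, max_iter=50):
    """Solve f(r) = target by Newton iteration with a finite-difference Jacobian."""
    r = np.asarray(r0, dtype=float)
    for _ in range(max_iter):
        res = f(r) - target
        if np.linalg.norm(res) < tol:
            break
        J = np.empty((len(res), len(r)))     # finite-difference Jacobian
        h = 1e-7
        for j in range(len(r)):
            dr = r.copy()
            dr[j] += h
            J[:, j] = (f(dr) - f(r)) / h
        r -= np.linalg.solve(J, res)         # Newton step
    return r

target = f(np.array([2.0, 3.0]))             # pretend these were measured
print(newton_invert(f, target, [1.0, 1.0])) # converges to a solution of f(r) = target
```

The "modified" aspect in the thesis concerns stabilising such an iteration for the actual resistance network; a damped step or constrained update would be typical modifications.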
APA, Harvard, Vancouver, ISO, and other styles
26

Asbury, Donald James. "Integrating science and technology." Montana State University, 2012. http://etd.lib.montana.edu/etd/2012/asbury/AsburyD0812.pdf.

Full text
Abstract:
Science plays an important role in students' education, even when time is limited by restrictions from other subject areas such as reading and mathematics. In this study, students' computer classes were integrated with a current and relevant science topic (alternative energy resources) to gauge 1) whether students were able to better understand the content presented and 2) how their attitudes towards science were affected by the science instruction. Students completed nine lessons that focused on the use, benefits, and drawbacks of two types of alternative energy: wind energy and algae biofuel. Each lesson was integrated with technology-based activities to enhance student understanding. Student interviews, unit pretests and posttests, journal entries, and attitude surveys were used to monitor student learning and progress throughout the project. The data collection indicated that students came into the project with little science background knowledge and an average interest in science. As the study progressed, students developed a deeper understanding of alternative energy resources. Student attitudes towards the science learning process improved a small amount as well. At the conclusion of the study, all of the students had increased scores on the content tests and most students had small increases on the attitude measures.
APA, Harvard, Vancouver, ISO, and other styles
27

Siemers, Alexander. "Contributions to Modelling and Visualisation of Multibody Systems Simulations with Detailed Contact Analysis." Doctoral thesis, Linköpings universitet, PELAB - Laboratoriet för programmeringsomgivningar, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-60303.

Full text
Abstract:
The steadily increasing performance of modern computer systems is having a large influence on simulation technologies. It enables increasingly detailed simulations of larger and more comprehensive simulation models. Increasingly large amounts of numerical data are produced by these simulations. This thesis presents several contributions in the field of mechanical system simulation and visualisation. The work described in the thesis is of practical relevance and results have been tested and implemented in tools that are used daily in the industry, i.e., the BEAST (BEAring Simulation Tool) tool box. BEAST is a multibody system (MBS) simulation software with special focus on detailed contact calculations. Our work primarily focuses on these types of systems. Research in the field of simulation modelling typically focuses on one or several specific topics around the modelling and simulation work process. The work presented here is novel in the sense that it provides a complete analysis and tool chain for the whole work process for simulation modelling and analysis of multibody systems with detailed contact models. The focus is on detecting and dealing with possible problems and bottlenecks in the work process, with respect to multibody systems with detailed contact models. The following primary research questions have been formulated: How to utilise object-oriented techniques for modelling of multibody systems with special reference to contact modelling? How to integrate visualisation with the modelling and simulation process of multibody systems with detailed contacts? How to reuse and combine existing simulation models to simulate large mechanical systems consisting of several sub-systems by means of co-simulation modelling? Unique in this work is the focus on detailed contact models. 
Most modelling approaches for multibody systems focus on modelling of bodies and boundary conditions of such bodies, e.g., springs, dampers, and possibly simple contacts. Here an object oriented modelling approach for multibody simulation and modelling is presented that, in comparison to common approaches, puts emphasis on integrated contact modelling and visualisation. The visualisation techniques are commonly used to verify the system model visually and to analyse simulation results. Data visualisation covers a broad spectrum within research and development. The focus is often on detailed solutions covering a fraction of the whole visualisation process. The novel visualisation aspect of the work presented here is that it presents techniques covering the entire visualisation process integrated with modeling and simulation. This includes a novel data structure for efficient storage and visualisation of multidimensional transient surface related data from detailed contact calculations. Different mechanical system simulation models typically focus on different parts (sub-systems) of a system. To fully understand a complete mechanical system it is often necessary to investigate several or all parts simultaneously. One solution for a more complete system analysis is to couple different simulation models into one coherent simulation. Part of this work is concerned with such co-simulation modelling. Co-simulation modelling typically focuses on data handling, connection modelling, and numerical stability. This work puts all emphasis on ease of use, i.e., making mechanical system co-simulation modelling applicable for a larger group of people. A novel meta-model based approach for mechanical system co-simulation modelling is presented. The meta-modelling process has been defined and tools and techniques been created to fully support the complete process. 
A component integrator and modelling environment are presented that support automated interface detection, interface alignment with automated three-dimensional coordinate translations, and three dimensional visual co-simulation modelling. The integrated simulator is based on a general framework for mechanical system co-simulations that guarantees numerical stability.
APA, Harvard, Vancouver, ISO, and other styles
28

Travers, Karen Ann. "Elementary Pre-service Science Teacher Preparation: Contributions During the Methods Semester." Diss., The University of Arizona, 2008. http://hdl.handle.net/10150/194976.

Full text
Abstract:
The purpose of this study was to better understand the nature of the contribution of the mentor teacher and the methods instructor in the development of professional knowledge of pre-service teachers (PSTs) to teach elementary science. The PSTs' conceptions of teaching science were also explored to see if there were changes in their ideas about teaching science and what influenced these changes during the methods semester of a field-based elementary teacher preparation program. Specifically, this study examined the perceptions of the PSTs regarding the nature of mentorship that they received for the teaching of elementary science. Participants were 144 PSTs from five field-based elementary methods sites, their mentor teachers, and their methods instructor from a university program in a large urban area. Of interest in this study was examining the extent to which PSTs actually observed science teaching in their mentor teachers' classrooms during the methods semester. Analysis of an end-of-semester survey revealed that more than one-third of the PSTs never observed their mentor elementary teachers teach science. On an encouraging note, 62% of PSTs who observed at least some science teaching reported that they perceived their teachers as modeling inquiry science teaching strategies. Regarding the perceived quality of mentor support for learning to teach science, more than 90% of PSTs reported that they felt supported by mentor teachers in their growth of science teaching even if the mentor teachers did not incorporate science lessons into their school day. In addition, half of the PSTs' conceptions of teaching science changed over the methods semester, with the methods course and the elementary classroom as the two most influential factors.
APA, Harvard, Vancouver, ISO, and other styles
29

Jaume, Mathieu. "Contributions à la sémantique de la programmation logique." Phd thesis, Ecole des Ponts ParisTech, 1999. http://tel.archives-ouvertes.fr/tel-00005594.

Full text
Abstract:
The notion of proofs in logic programming is examined at two different levels. From an external point of view, the "classical theory" of logic programming is completely formalised in the calculus of inductive constructions. After considering the problem of defining partial functions in a system in which only total functions are representable, unification is obtained by reusing an existing formal proof concerning a superset of the terms. The fundamental properties of SLD-resolution are then formalised. The level of detail imposed by the mechanisation of the proofs under consideration highlighted the hidden complexity of certain proofs: the renaming mechanism is handled explicitly, thereby turning certain theoretical certainties into realities. From an internal point of view, SLD proofs, finite or infinite, are compared with those that can be obtained, by induction or by co-induction, from the clauses of a logic program viewed as inference rules. In the finite case the correspondence is complete ("what a program computes is provable"), whereas in the infinite case certain non-computable objects are nevertheless provable. The classical properties of co-inductive definitions, and the comparison of certain infinite derivations with proof terms of a co-inductive type, prove useful both for explaining the incompleteness results of existing approaches and for defining a sound and complete semantics for a class of infinite derivations (precisely those that do not construct infinite terms). (author's abstract)
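The unification step formalised in this thesis can be sketched as a minimal first-order unification procedure. The term representation here (variables as strings, compound terms as (functor, args) tuples) is chosen purely for illustration and is not the representation used in the formal development.

```python
# Minimal first-order unification: variables are strings, compound terms
# and constants are (functor, args) tuples. Returns a substitution dict
# or None on failure.
def walk(t, s):
    """Follow variable bindings in substitution s."""
    while isinstance(t, str) and t in s:
        t = s[t]
    return t

def occurs(v, t, s):
    """Occurs-check: does variable v appear in term t under s?"""
    t = walk(t, s)
    if t == v:
        return True
    return isinstance(t, tuple) and any(occurs(v, a, s) for a in t[1])

def unify(t1, t2, s=None):
    s = dict(s or {})
    t1, t2 = walk(t1, s), walk(t2, s)
    if t1 == t2:
        return s
    if isinstance(t1, str):                      # t1 is an unbound variable
        return None if occurs(t1, t2, s) else {**s, t1: t2}
    if isinstance(t2, str):
        return unify(t2, t1, s)
    f1, a1 = t1
    f2, a2 = t2
    if f1 != f2 or len(a1) != len(a2):           # functor clash
        return None
    for x, y in zip(a1, a2):
        s = unify(x, y, s)
        if s is None:
            return None
    return s

# p(X, f(Y)) unified with p(a, f(b)) yields {X: a, Y: b}
print(unify(("p", ("X", ("f", ("Y",)))),
            ("p", (("a", ()), ("f", (("b", ()),))))))
```

The occurs-check is what rules out the infinite terms mentioned at the end of the abstract; omitting it, as many Prolog systems do, is exactly what lets such terms arise.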
APA, Harvard, Vancouver, ISO, and other styles
30

Neice, David C. "Conspicuous contributions : signs of social esteem on the Internet." Thesis, University of Sussex, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.366210.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Guarnera, Drew T. "Merge Commit Contributions in Git Repositories." University of Akron / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=akron1436528894.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Tully, D. A. "Contributions to big geospatial data rendering and visualisations." Thesis, Liverpool John Moores University, 2017. http://researchonline.ljmu.ac.uk/6685/.

Full text
Abstract:
Current geographical information systems lack features and components which are commonly found within rendering and game engines. When combined with computer game technologies, a modern geographical information system capable of advanced rendering and data visualisations is achievable. We have investigated the combination of big geospatial data and computer game engines for the creation of a modern geographical information system framework capable of visualising densely populated real-world scenes using advanced rendering algorithms. The pipeline imports raw geospatial data in the form of Ordnance Survey data, which is provided by the UK government, LiDAR data provided by a private company, and the global open mapping project OpenStreetMap. The data is combined to produce additional terrain data where data is missing from the high resolution data sources of LiDAR by utilising interpolated Ordnance Survey data. Where data is missing from LiDAR, the same interpolation techniques are also utilised. Once a high resolution terrain data set which is complete in regard to coverage is generated, sub-datasets can be extracted from the LiDAR using OSM boundary data as a perimeter. The boundaries of OSM represent buildings or assets. Data, such as the heights of buildings, can then be extracted. This data can then be used to update the OSM database. Using a novel adjacency matrix extraction technique, 3D model mesh objects can be generated using both LiDAR and OSM information. The generation of model mesh objects created from OSM data utilises procedural content generation techniques, enabling the generation of GIS based 3D real-world scenes. Although LiDAR and Ordnance Survey data are available only for the UK, restricting that part of the generation to the UK borders, the system is able, using OSM alone, to procedurally generate any place within the world covered by OSM. 
In this research, to manage the large amounts of data, a novel scenegraph structure has been generated to spatially separate OSM data according to OS coordinates, splitting the UK into 1 kilometre squared tiles, and categorising OSM assets such as buildings, highways, and amenities. Once spatially organised and categorised as assets of importance, the novel scenegraph allows for data dispersal through an entire scene in real-time. The 3D real-world scenes visualised within the runtime simulator can be manipulated in four main aspects:
• Viewing at any angle or location through the use of a 3D and 2D camera system.
• Modifying the effects or effect parameters applied to the 3D model mesh objects to visualise user-defined data by use of our novel algorithms and unique lighting data-structure effect file with accompanying material interface.
• Procedurally generating animations which can be applied to the spatial parameters of objects, or the visual properties of objects.
• Applying Indexed Array Shader Functions and taking advantage of the novel big geospatial scenegraph structure to exploit better rendering techniques in the context of a modern Geographical Information System, which has not been done, to the best of our knowledge.
Combined with a novel scenegraph structure layout, the user can view and manipulate real-world procedurally generated worlds with additional user generated content in a number of unique and unseen ways within the current geographical information system implementations. We evaluate multiple functionalities and aspects of the framework. We evaluate the performance of the system, measuring frame rates with multi-sized maps by means of stress testing, as well as evaluating the benefits of the novel scenegraph structure for categorising, separating, manoeuvring, and data dispersal. Uniform scaling by n² of scenegraph nodes which contain no model mesh data, procedurally generated model data, and user generated model data. 
The experiments compared runtime parameters and memory consumption. We have compared the technical features of the framework against those of related real-world commercial projects: Google Maps, OSM2World, OSM-3D, OSM-Buildings, OpenStreetMap, ArcGIS, Sustainability Assessment Visualisation and Enhancement (SAVE), and Autonomous Learning Agents for Decentralised Data and Information (ALADDIN). We conclude that, compared to related research, the framework produces data sets relevant for visualising geospatial assets from combined real-world data sets, capable of being used by a multitude of external game engines, applications, and geographical information systems. The ability to manipulate the production of these data sets at pre-compile time, provided by the pre-processor, aids processing speeds for runtime simulation. An added benefit is that users can manipulate the spatial and visual parameters in many ways with minimal domain knowledge. The ability to attach procedural animations to each spatial parameter and visual shading parameter allows users to view and encode their own representations of scenes, which is unavailable in all of the products listed: each of the alternative projects has similar features, but none allows full animation of all of an asset's parameters, spatially, visually, or both. We also evaluated the framework on its implemented features, developing the necessary algorithms and novelties as problems arose during development. An example is the algorithm for combining our multiple terrain data sets (Ordnance Survey terrain data, and Light Detection and Ranging Digital Surface Model and Digital Terrain Model data) in a justifiable way to produce maps with no missing data values for further analysis and visualisation.
The majority of visualisations are rendered using an Indexed Array Shader Function effect file, structured as a novel design that encapsulates common rendering effects found in commercial computer games and applies them to the rendering of real-world assets in a modern geographical information system. Maps of various sizes, in both dimensions, polygonal density, asset count, and memory consumption prove successful against real-time rendering criteria; that is, the visualisation of maps does not create a processing bottleneck. The visualised scenes allow users to view large, dense environments that include terrain models together with procedural and user-generated buildings, highways, amenities, and boundaries. The novel scenegraph structure allows fast iteration and search from user-defined dynamic queries. Interaction with the framework is provided through a novel Interactive Visualisation Interface, with which a user can apply procedurally generated animations to both the spatial and visual properties of any node or model mesh in the scene. We conclude that the framework has been a success. We have completed what we set out to develop: we have combined multiple data sets to create improved terrain data sets for further research and development, and we have created a framework that combines the real-world data of Ordnance Survey, LiDAR, and OpenStreetMap, implementing algorithms to create procedural buildings, highways, terrain, amenities, model meshes, and boundaries for visualisation, with features that allow users to search and manipulate a city's worth of data on a per-object basis or in user-defined combinations. The successful framework was built through the cross-domain specialisms such a project requires.
We have combined the areas of computer games technology, engine and framework development, procedural generation techniques and algorithms, use of real-world data sets, geographical information system development, data parsing, big-data algorithmic reduction techniques, and visualisation using shader techniques.
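The terrain-combination step mentioned above, filling LiDAR no-data cells from an interpolated Ordnance Survey grid, could be sketched as below. This is a hedged NumPy illustration, not the thesis's exact algorithm: the function names, the no-data sentinel value, and the choice of bilinear resampling are all assumptions:

```python
import numpy as np

def upsample_bilinear(coarse, factor):
    """Resample a coarse elevation grid to a finer resolution bilinearly."""
    h, w = coarse.shape
    ys = np.linspace(0, h - 1, h * factor)
    xs = np.linspace(0, w - 1, w * factor)
    y0 = np.clip(np.floor(ys).astype(int), 0, h - 2)
    x0 = np.clip(np.floor(xs).astype(int), 0, w - 2)
    fy = (ys - y0)[:, None]  # fractional offsets within a coarse cell
    fx = (xs - x0)[None, :]
    return (coarse[np.ix_(y0, x0)] * (1 - fy) * (1 - fx)
            + coarse[np.ix_(y0, x0 + 1)] * (1 - fy) * fx
            + coarse[np.ix_(y0 + 1, x0)] * fy * (1 - fx)
            + coarse[np.ix_(y0 + 1, x0 + 1)] * fy * fx)

def merge_terrain(lidar, os_interpolated, nodata=-9999.0):
    """Fill LiDAR no-data cells from an Ordnance Survey grid that has
    already been resampled to the LiDAR resolution."""
    merged = lidar.copy()
    missing = lidar == nodata
    merged[missing] = os_interpolated[missing]
    return merged
```

The result is a terrain grid with no missing values, suitable for the kind of downstream extraction (building heights, mesh generation) the abstract describes.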
APA, Harvard, Vancouver, ISO, and other styles
33

Kurnio, Hartono. "Contributions to group key distribution schemes." Access electronically, 2005. http://www.library.uow.edu.au/adt-NWU/public/adt-NWU20060509.103409/index.html.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Pepakayala, Sagar. "Contributions of honeypots to network security." Thesis, Linköping University, Department of Computer and Information Science, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-9177.

Full text
Abstract:

A honeypot is an attractive computer target placed inside a network to lure attackers into it. There are many advantages to this technology: information about attackers' tools and techniques can be fingerprinted, malicious traffic can be diverted away from the real target, and so on. With activity from the blackhat community increasing day by day, honeypots could be an effective weapon in the network security administrator's armor. They have been studied rigorously during the past few years as part of the security industry's drive to combat malicious traffic. While the whitehats are trying to make honeypots stealthier, blackhats are coming up with techniques to identify them (thereby nullifying any further use) or, worse, to use them in their favor. The game is on. The goal of this thesis is to study different architectural issues regarding honeypot deployment and the various stages in utilizing honeypots, such as forensic analysis. Other concepts used in conjunction with honeypots, such as IDSs and firewalls, are also discussed, because security is about cooperation among different security components. In the security industry it is customary for whitehats to watch what blackhats are doing and vice versa, so the thesis discusses recent techniques to defeat honeypots and the risks involved in deploying them. The commercial viability of honeypots and business cases for outsourcing honeypot maintenance are presented. Great interest from the security community in honeypots has propelled the research and resulted in various new and innovative applications; some of these applications, which have made an impact, are discussed. Finally, future directions for research in honeypot technology are explored.

APA, Harvard, Vancouver, ISO, and other styles
35

Vialette, Stéphane. "Algorithmic Contributions to Computational Molecular Biology." Habilitation à diriger des recherches, Université Paris-Est, 2010. http://tel.archives-ouvertes.fr/tel-00862069.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Radcliffe, Peter M. "Now how much would you pay? : PAC contributions, timing, and theories of access /." The Ohio State University, 1998. http://rave.ohiolink.edu/etdc/view?acc_num=osu1487951907959148.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Mitchell, Carmen L. (Carmen Lois). "The Contributions of Grace Murray Hopper to Computer Science and Computer Education." Thesis, University of North Texas, 1994. https://digital.library.unt.edu/ark:/67531/metadc278692/.

Full text
Abstract:
This study explored the life and work of the late Grace Murray Hopper, Rear Admiral United States Naval Reserve. The study emphasized Hopper's contributions to computer science and computer science education, including her philosophy of teaching and learning, and her pedagogical legacy for today's teachers and scholars of computer science and computer science education.
APA, Harvard, Vancouver, ISO, and other styles
38

Correia, Ana Paula. "Understanding conflict in teamwork : contributions of a technology-rich environment to conflict management /." [Bloomington, Ind.] : Indiana University, 2005. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&res_dat=xri:pqdiss&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&rft_dat=xri:pqdiss:3183915.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Lagogiannis, Konstantinos. "Contributions of synaptic filters to models of synaptically stored memory." Thesis, University of Southampton, 2013. https://eprints.soton.ac.uk/372747/.

Full text
Abstract:
The question of how neural systems encode memories in one shot without immediately disrupting previously stored information has puzzled theoretical neuroscientists for years, and it is the central topic of this thesis. Previous attempts at this problem have proposed that synapses update probabilistically in response to plasticity-inducing stimuli, effectively delaying the degradation of old memories in the face of ongoing memory storage. Indeed, experiments have shown that synapses do not immediately respond to plasticity-inducing stimuli, since these must be presented many times before synaptic plasticity is expressed. Such a delay could be due to the stochastic nature of synaptic plasticity, or perhaps because induction signals are integrated before overt strength changes occur. The latter approach has previously been applied to control fluctuations in neural development by low-pass filtering induction signals before plasticity is expressed. In this thesis we consider memory dynamics in a mathematical model with synapses that integrate plasticity induction signals to a threshold before expressing plasticity. We report novel recall dynamics and considerable improvements in memory lifetimes compared with a prominent model of synaptically stored memory. With integrating synapses, the memory trace initially rises before reaching a maximum and then falls. The memory signal dissociates into separate oblivescence and reminiscence components, with reminiscence initially dominating recall. Furthermore, we find that integrating synapses possess natural timescales that can be used to consider the transition to late-phase plasticity under the spaced repetition patterns known to lead to optimal storage conditions. We find that threshold-crossing statistics differentiate between massed and spaced memory repetition patterns. However, an isolated integrative synapse obtains an insufficient statistical sample to detect the stimulation pattern within a few memory repetitions.
We extend the model to consider the cooperation of well-known intracellular signalling pathways in detecting storage conditions by utilizing the profile of postsynaptic depolarization. We find that neuron-wide signalling and local synaptic signals can be combined to detect optimal storage conditions that lead to stable forms of plasticity in a synapse-specific manner. These models can be further extended to consider heterosynaptic and neuromodulatory interactions in late-phase plasticity.
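The core idea of an integrate-to-threshold synapse can be illustrated with a toy simulation. This is only a sketch of the mechanism the abstract describes; the decay, threshold, and step values are hypothetical parameters, not those of the thesis:

```python
class FilteredSynapse:
    """A synapse that low-pass filters plasticity induction: strength only
    changes once the accumulated induction signal crosses a threshold.
    (Illustrative parameters; not those fitted in the thesis.)"""

    def __init__(self, theta=3.0, decay=0.8, step=0.1):
        self.theta = theta      # expression threshold
        self.decay = decay      # accumulator decay per presentation interval
        self.step = step        # strength change expressed on a crossing
        self.acc = 0.0          # filtered induction signal
        self.strength = 0.0     # synaptic strength (the stored memory)

    def present(self, induction):
        """Apply one plasticity-inducing stimulus: decay, then accumulate."""
        self.acc = self.acc * self.decay + induction
        if abs(self.acc) >= self.theta:
            self.strength += self.step if self.acc > 0 else -self.step
            self.acc = 0.0      # reset after plasticity is expressed
        return self.strength

# A single presentation expresses no plasticity; repeated (massed)
# presentations drive the accumulator over threshold.
syn = FilteredSynapse()
changes = [syn.present(1.0) for _ in range(10)]
```

Because the accumulator decays between presentations, widely spaced stimuli cross the threshold later (or not at all), which is the sense in which threshold-crossing statistics can distinguish massed from spaced repetition patterns.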
APA, Harvard, Vancouver, ISO, and other styles
40

Lamb, Zachary W. "Contributions to Infrastructure Deployment and Management in Vehicular Networks." University of Cincinnati / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1561394450556027.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Collard, Martine. "Fouille de données, Contributions Méthodologiques et Applicatives." Habilitation à diriger des recherches, Université Nice Sophia Antipolis, 2003. http://tel.archives-ouvertes.fr/tel-01059407.

Full text
Abstract:
The work presented in this dissertation was developed on the theme of discovering interesting patterns through data mining, and was applied in the context of information systems design. It is essentially devoted to the problems raised by the mining step in the discovery of models and frequent patterns. Its contributions are both methodological and applicative.
APA, Harvard, Vancouver, ISO, and other styles
42

Derrien, Steven. "Contributions à la conception d'architectures matérielles dédiées." Habilitation à diriger des recherches, Université Rennes 1, 2011. http://tel.archives-ouvertes.fr/tel-00749092.

Full text
Abstract:
The research theme addressed in this HDR concerns the design of specialised hardware architectures and their use in the fields of embedded systems (for example, sensor networks) and high-performance computing (bioinformatics, search in multimedia streams). This theme is developed along three interdependent axes, briefly described below. First, the design of "smart storage" hardware platforms based on programmable logic. The goal is to propose systems capable of processing data on the fly, directly at the output of the storage medium (magnetic disk, Flash memory), using filters implemented as specialised coprocessors. This work led to the construction of two machine prototypes (RDISK and REMIX) and to their validation on several case studies. Second, the design of specialised, massively parallel hardware accelerators for compute-intensive applications from bioinformatics and multimedia data processing. We focused in particular on algorithms for sequence similarity search based on Markov profiles (the HMMER software), as well as on content-based image retrieval algorithms. Finally, tools to assist the design of specialised architectures from high-level specifications. In this context, we proposed a complete design flow for ultra-low-power sensor network nodes. We also addressed the problem of automatic accelerator generation for loop nests, notably through a source-to-source loop transformation flow targeting "C to hardware" high-level synthesis tools.
APA, Harvard, Vancouver, ISO, and other styles
43

Stanway, Michael Jordan. "Contributions to automated realtime underwater navigation." Thesis, Massachusetts Institute of Technology, 2012. http://hdl.handle.net/1721.1/70429.

Full text
Abstract:
Thesis (Ph. D.)--Joint Program in Applied Ocean Science and Engineering (Massachusetts Institute of Technology, Dept. of Mechanical Engineering; and the Woods Hole Oceanographic Institution), 2012.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 177-187).
This dissertation presents three separate but related contributions to the art of underwater navigation. These methods may be used in postprocessing with a human in the loop, but the overarching goal is to enhance vehicle autonomy, so the emphasis is on automated approaches that can be used in realtime. The three research threads are: i) in situ navigation sensor alignment, ii) dead reckoning through the water column, and iii) model-driven delayed measurement fusion. Contributions to each of these areas have been demonstrated in simulation, with laboratory data, or in the field; some have been demonstrated in all three arenas. The solution to the in situ navigation sensor alignment problem is an asymptotically stable adaptive identifier formulated using rotors in Geometric Algebra. This identifier is applied to precisely estimate the unknown alignment between a gyrocompass and a Doppler velocity log, with the goal of improving realtime dead reckoning navigation. Laboratory and field results show the identifier performs comparably to previously reported methods using rotation matrices, providing an alignment estimate that reduces the position residuals between dead reckoning and an external acoustic positioning system. The Geometric Algebra formulation also encourages a straightforward interpretation of the identifier as a proportional feedback regulator on the observable output error. Future applications of the identifier may include alignment between inertial, visual, and acoustic sensors. The ability to link the Global Positioning System at the surface to precision dead reckoning near the seafloor might enable new kinds of missions for autonomous underwater vehicles. This research introduces a method for dead reckoning through the water column using water current profile data collected by an onboard acoustic Doppler current profiler.
Overlapping relative current profiles provide information to simultaneously estimate the vehicle velocity and local ocean current-the vehicle velocity is then integrated to estimate position. The method is applied to field data using online bin average, weighted least squares, and recursive least squares implementations. This demonstrates an autonomous navigation link between the surface and the seafloor without any dependence on a ship or external acoustic tracking systems. Finally, in many state estimation applications, delayed measurements present an interesting challenge. Underwater navigation is a particularly compelling case because of the relatively long delays inherent in all available position measurements. This research develops a flexible, model-driven approach to delayed measurement fusion in realtime Kalman filters. Using a priori estimates of delayed measurements as augmented states minimizes the computational cost of the delay treatment. Managing the augmented states with time-varying conditional process and measurement models ensures the approach works within the proven Kalman filter framework-without altering the filter structure or requiring any ad-hoc adjustments. The end result is a mathematically principled treatment of the delay that leads to more consistent estimates with lower error and uncertainty. Field results from dead reckoning aided by acoustic positioning systems demonstrate the applicability of this approach to real-world problems in underwater navigation.
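The state-augmentation idea behind delayed measurement fusion can be illustrated on a toy scalar problem. This is only a minimal sketch, not the model-driven treatment with time-varying conditional models developed in the dissertation; the noise variances and the shift-register augmentation are assumptions chosen for illustration:

```python
import numpy as np

def delayed_kf(zs, d, q=0.01, r=0.1):
    """Kalman filter for a scalar random-walk state where every measurement
    z[k] observes the state d steps in the past. The state is augmented with
    d lagged copies (a shift register), turning the delayed measurement into
    an ordinary current-time one."""
    n = d + 1
    F = np.zeros((n, n))
    F[0, 0] = 1.0                 # slot 0: current state, x_k = x_{k-1} + w_k
    for i in range(1, n):
        F[i, i - 1] = 1.0         # slot i holds the state from i steps ago
    Q = np.zeros((n, n))
    Q[0, 0] = q                   # process noise enters only the current state
    H = np.zeros((1, n))
    H[0, -1] = 1.0                # the measurement observes the d-step-old slot
    x, P = np.zeros(n), np.eye(n)
    out = []
    for z in zs:
        x, P = F @ x, F @ P @ F.T + Q        # predict (shifts the lag slots)
        S = (H @ P @ H.T).item() + r         # innovation variance
        K = (P @ H.T).ravel() / S            # gain couples back to slot 0
        x = x + K * (z - (H @ x).item())     # update corrects all slots
        P = P - np.outer(K, (H @ P).ravel())
        out.append(x[0])                     # current-state estimate
    return out

# With a constant signal, the current-state estimate converges even though
# every measurement is d steps stale.
estimates = delayed_kf([1.0] * 50, d=2)
```

The cross-covariances built up by the transition matrix are what let a stale measurement correct the current state; the dissertation's contribution is doing this efficiently with a priori estimates as augmented states rather than a full lag register.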
by Michael Jordan Stanway.
Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
44

Krupa, Alexandre. "Contributions à l'asservissement visuel échographique." Habilitation à diriger des recherches, Université Rennes 1, 2012. http://tel.archives-ouvertes.fr/tel-00830025.

Full text
Abstract:
This work concerns the study and development of visual servoing methods that use ultrasound images. Visual servoing consists in controlling the motion of a dynamic system, typically a robot, from visual information extracted from the image provided by a sensor embedded on, or observing, the system. In this context, very little work had addressed the use of images from an ultrasound sensor, and the existing methods could only control motion within the sensor's image plane. Indeed, a 2D ultrasound probe has the particularity of providing complete information within its observation plane but no information outside it. By contrast, a camera provides a projection of the 3D scene onto a 2D image. Consequently, visual servoing methods based on modelling the interaction between a camera and its environment cannot be applied directly to the ultrasound modality. A further important challenge is the real-time extraction, from ultrasound images that are by nature heavily corrupted by noise, of the visual information needed to control a robotic system. We have therefore provided generic solutions for the complete control of the displacements of a probe mounted on a 6-degree-of-freedom robot, using the ultrasound images directly. Two main approaches were pursued to reach this goal. The first concerned the choice and exact modelling of the geometric visual features that can be extracted in real time from 2D ultrasound images and that are relevant for positioning a robotised probe.
In particular, we considered geometric features such as points, contours, and moments of the cross-section of an organ of interest. Simplified or online-estimated models of the shape of the observed objects were used to determine the variation of the geometric features induced by motions performed out of the probe's image plane. This modelling made it possible to derive the control law to apply to the probe-carrying robot in order to automatically reach the ultrasound slice presenting the desired visual information. The second approach exploited the dense information of the ultrasound image in order to avoid the segmentation step. In a first method, the correlation of the speckle texture present in the ultrasound image was used to track moving tissue with the ultrasound probe. An alternative method was also proposed in which the visual features fed to the system controller correspond directly to the intensity values of a set of image pixels. The variation of these features as a function of probe motion was modelled in order to implement visual servoing schemes capable of positioning on, or tracking, anatomical slices. This method was furthermore extended to different types of ultrasound sensors (2D, 3D, bi-plane). The applications arising from this work lie mainly in the field of assistance to ultrasound examination. They concern, on the one hand, the automatic positioning of the probe on a desired anatomical section and, on the other hand, the active stabilisation of the ultrasound image.
To this end, various approaches were implemented to compensate for the motion of soft tissue by synchronising the probe's displacements through ultrasound visual servoing.
APA, Harvard, Vancouver, ISO, and other styles
45

Kornprobst, Pierre. "Contributions en modélisation de la vision algorithmique et bio-inspirée." Habilitation à diriger des recherches, Université de Nice Sophia-Antipolis, 2007. http://tel.archives-ouvertes.fr/tel-00457491.

Full text
Abstract:
Image processing has a long history, starting with approaches derived from 1-D signal processing techniques. These approaches rely on filtering theory (linear or not), spectral analysis, or basic concepts of probability and statistics.
APA, Harvard, Vancouver, ISO, and other styles
46

Cheung, Chi-wai. "Museum of Chinese Science and Technology." Hong Kong : University of Hong Kong, 1996. http://sunzi.lib.hku.hk/hkuto/record.jsp?B25951609.

Full text
Abstract:
Thesis (M. Arch.)--University of Hong Kong, 1996.
Includes special report study entitled: Relationship between man and nature in Chinese traditional architecture. Includes bibliographical references.
APA, Harvard, Vancouver, ISO, and other styles
47

Manker, Concetta. "Factors Contributing to the Limited Use of Information Technology in State Courtrooms." ScholarWorks, 2015. https://scholarworks.waldenu.edu/dissertations/1416.

Full text
Abstract:
Few state courtrooms in the United States have integrated information technology (IT) in court trials. Despite jurors' beliefs that using courtroom technology improves their ability to serve as jurors, the attitudes and experiences of attorneys and judges toward the utility of IT continue to pose barriers. The purpose of this phenomenological study was to explore and describe the experiences of attorneys and judges in the State of Virginia with regard to the limited use of IT in state courtrooms. The conceptual framework included Davis, Bagozzi, and Warshaw's (1989) technology acceptance model; Rogers's (2003) diffusion of innovation theory; and Venkatesh, Morris, Davis, and Davis's (2003) unified theory of acceptance. A snowball sample of 22 attorneys and judges was interviewed using in-depth, semistructured questions. Data were analyzed using open coding techniques to identify themes and patterns, with findings supporting the need for improved and expanded courtroom technology. Findings showed that attorneys and judges believed courtroom technology could be useful; however, the lack of training and the cost of implementing technology limited their use of technology in courtrooms. Implications for positive social change include increasing the adoption rate of courtroom technology to support courtroom processes and empowering courts to improve the quality of justice through technology in an efficient and effective manner, thereby benefiting everyone in the judicial system and the public.
APA, Harvard, Vancouver, ISO, and other styles
48

Ball, Diane M. "An Empirical Investigation of the Contribution of Computer Self-Efficacy, Computer Anxiety, and Instructors' Experience with the Use of Technology to Their Intention to Use Emerging Educational Technology in Traditional Classrooms." NSUWorks, 2008. http://nsuworks.nova.edu/gscis_etd/401.

Full text
Abstract:
Over the past decade there has been a shift in the emphasis of emerging educational technology from use in online settings to supporting face-to-face and mixed delivery classes. Although emerging educational technology integration in the classroom has increased in recent years, technology acceptance and usage continue to be problematic for educational institutions. In this predictive study the researcher aimed to predict university instructors' intention to use emerging educational technology in traditional classrooms based on the contribution of computer self-efficacy (CSE), computer anxiety (CA), and experience with the use of technology (EUT), as measured by their contribution to the prediction of behavioral intention (BI). Fifty-six instructors from a small, private university were surveyed to determine their level of CSE, CA, and EUT, and their intention to use emerging educational technology in traditional classrooms. A theoretical model was proposed, and two statistical methods were used to formulate models and test predictive power: Multiple Linear Regression (MLR) and Ordinal Logistic Regression (OLR). It was predicted that CSE, CA, and EUT would have a significant impact on instructors' intention to use emerging educational technology in the classroom. Results showed overall significant models of the three aforementioned factors in predicting instructors' use of emerging educational technology in traditional classrooms. Additionally, results demonstrated that CSE was a significant predictor of the use of emerging educational technology in the classroom, while CA and EUT were not found to be significant predictors. 
Two important contributions of this study include 1) an investigation of factors that contribute to instructors' acceptance of an emerging educational technology developed specifically to respond to current demands of higher education, and 2) an investigation of key constructs contributing to instructors' intention to use emerging educational technology in the classroom.
APA, Harvard, Vancouver, ISO, and other styles
49

Kvaran, Trevor. "Dual-process theories and the rationality debate contributions from cognitive neuroscience /." unrestricted, 2007. http://etd.gsu.edu/theses/available/etd-08032007-161242/.

Full text
Abstract:
Thesis (M.A.)--Georgia State University, 2007.
Title from file title page. Andrea Scarantino, Eddy Nahmias, committee co-chairs; Erin McClure, committee member. Electronic text (68 p.) : digital, PDF file. Description based on contents viewed Jan. 7, 2008. Includes bibliographical references (p. 63-68).
APA, Harvard, Vancouver, ISO, and other styles
50

Chen, YiQun. "Contributions to privacy preserving with ring signatures." Access electronically, 2006. http://www.library.uow.edu.au/adt-NWU/public/adt-NWU20070104.134826/index.html.

Full text
APA, Harvard, Vancouver, ISO, and other styles
