
Dissertations / Theses on the topic 'Science Application software'



Consult the top 50 dissertations / theses for your research on the topic 'Science Application software.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of each academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Raley, John B. "Factors Affecting the Programming Performance of Computer Science Students." Thesis, Virginia Tech, 1996. http://hdl.handle.net/10919/36716.

Full text
Abstract:
Two studies of factors affecting the programming performance of first- and second-year Computer Science students were conducted. In one study, students used GIL, a simple application framework, for their programming assignments in a second-semester programming course. Improvements in student performance were realized. In the other study, students submitted detailed logs of how time was spent on projects, along with their programs. Software metrics were computed on the students' source code. Correlations between student performance and the log data and software metric data were sought. No significant indicators of performance were found, even for factors that are commonly expected to indicate performance. However, results consistent with previous research concerning variations in individual programmer performance and relationships between software metrics were obtained.
Master of Science
2

Gotchel, Gary. "Enhancing the software development process with wikified widgets." [Denver, Colo.] : Regis University, 2008. http://165.236.235.140/lib/GGotchel2008.pdf.

Full text
3

Kokkonen, T. (Tommi). "Application software for laboratory-scale process test equipment." Master's thesis, University of Oulu, 2015. http://urn.fi/URN:NBN:fi:oulu-201505211556.

Full text
Abstract:
The purpose of this study was to describe the construction of application software designed to support laboratory-scale pyrolysis/coking process test equipment for use by the Process Metallurgy Group (PMG) at the University of Oulu. Prior research in the fields of laboratory automation, usability in the laboratory context, data gathering, operational safety, and linking to larger laboratory information systems was presented, along with a brief summary of design science research and its methodology. The study described the context of the software development, including the Process Metallurgy Group at the University of Oulu and the pyrolysis and coking processes. The design and development processes of the PYROLYSIS software were described, as was the evaluation of the software. A model of hardware virtualization, application-device communication, and the UI design were presented. Finally, a tentative model for a remote alert system via SMS was presented.
4

Chiang, Yen-Hsi. "Advising module: Graduate application system for the Computer Science Graduate Program." CSUSB ScholarWorks, 2005. https://scholarworks.lib.csusb.edu/etd-project/2725.

Full text
Abstract:
The Advising Module: Graduate Application System is a Web-based application that provides quality advice on coursework for prospective as well as continuing graduate students. It also serves as an improved tracking system for the graduate coordinator. Authorized parties may obtain access to status evaluations, master's options, permitted course waivers, course listings, personal data, various advisement forms, application usage statistics, and automatic data-updating process reports.
5

Rajabi, Zeyad. "BIAS : bioinformatics integrated application software and discovering relationships between transcription factors." Thesis, McGill University, 2004. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=81427.

Full text
Abstract:
In the first part of this thesis, we present a new development platform tailored to Bioinformatics research and software development, called Bias (Bioinformatics Integrated Application Software), designed to provide the tools necessary for carrying out integrative Bioinformatics research. Bias follows an object-relational strategy for providing persistent objects, allows third-party tools to be easily incorporated within the system, and supports standards and data-exchange protocols common to Bioinformatics. The second part of this thesis is on the design and implementation of modules and libraries within Bias related to transcription factors. We present a module in Bias that focuses on discovering competitive relationships between mouse and yeast transcription factors. By competitive relationships we mean the competitive binding of two transcription factors for a given binding site. We also present a method that divides a transcription factor's set of binding sites into two or more different sets when constructing PSSMs.
6

Siaca, Enrique R. (Enrique Rene). "Development and implementation of a flexible reporting software application." Thesis, Massachusetts Institute of Technology, 1994. http://hdl.handle.net/1721.1/38012.

Full text
7

Saraf, Nikita Sandip. "Leveraging Commercial and Open Source Software to Process and Visualize Advanced 3D Models on a Web-Based Software Platform." University of Cincinnati / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1613748656629076.

Full text
8

Ewing, John M. "Autonomic Performance Optimization with Application to Self-Architecting Software Systems." Thesis, George Mason University, 2015. http://pqdtopen.proquest.com/#viewpdf?dispub=3706982.

Full text
Abstract:

Service Oriented Architecture (SOA) is an emerging software engineering discipline that builds software systems and applications by connecting and integrating well-defined, distributed, reusable software service instances. SOA can speed development time and reduce costs by encouraging reuse, but this new service paradigm presents significant challenges. Many SOA applications are dependent upon service instances maintained by vendors and/or separate organizations. Applications and composed services using disparate providers typically demonstrate limited autonomy with contemporary SOA approaches. Availability may also suffer with the proliferation of possible points of failure; restoration of functionality often depends upon intervention by human administrators.

Autonomic computing is a set of technologies that enables self-management of computer systems. When applied to SOA systems, autonomic computing can provide automatic detection of faults and take restorative action. Additionally, autonomic computing techniques possess optimization capabilities that can leverage the features of SOA (e.g., loose coupling) to enable peak performance in the SOA system's operation. This dissertation demonstrates that autonomic computing techniques can help SOA systems maintain high levels of usefulness and usability.

This dissertation presents a centralized autonomic controller framework to manage SOA systems in dynamic service environments. The centralized autonomic controller framework can be enhanced through a second meta-optimization framework that automates the selection of optimization algorithms used in the autonomic controller. A third framework for autonomic meta-controllers can study, learn, adjust, and improve the optimization procedures of the autonomic controller at run-time. Within this framework, two different types of meta-controllers were developed. The Overall Best meta-controller tracks overall performance of different optimization procedures. Context Best meta-controllers attempt to determine the best optimization procedure for the current optimization problem. Three separate Context Best meta-controllers were implemented using different machine learning techniques: 1) K-Nearest Neighbor (KNN MC), 2) Support Vector Machines (SVM) trained offline (Offline SVM), and 3) SVM trained online (Online SVM).

A detailed set of experiments demonstrated the effectiveness and scalability of the approaches. Autonomic controllers of SOA systems successfully maintained performance on systems with 15, 25, 40, and 65 components. The Overall Best meta-controller successfully identified the best optimization technique and provided excellent performance at all levels of scale. Among the Context Best meta-controllers, the Online SVM meta-controller was tested on the 40 component system and performed better than the Overall Best meta-controller at a 95% confidence level. Evidence indicates that the Online SVM was successfully learning which optimization procedures were best applied to encountered optimization problems. The KNN MC and Offline SVM were less successful. The KNN MC struggled because the KNN algorithm does not account for the asymmetric cost of prediction errors. The Offline SVM was unable to predict the correct optimization procedure with sufficient accuracy—this was likely due to the challenge of building a relevant offline training set. The meta-optimization framework, which was tested on the 65 component system, successfully improved the optimization techniques used by the autonomic controller.

The meta-optimization and meta-controller frameworks described in this dissertation have broad applicability in autonomic computing and related fields. This dissertation also details a technique for measuring the overlap of two populations of points, establishes an approach for using penalty weights to address one-sided overfitting by SVM on asymmetric data sets, and develops a set of high performance data structure and heuristic search templates for C++.

9

Duvall, Paul. "Using an object-oriented approach to develop a software application." [Denver, Colo.] : Regis University, 2006. http://165.236.235.140/lib/PDuvall2006.pdf.

Full text
10

Kannikanti, Rajesh. "An android application for the USDA structural design software." Kansas State University, 2013. http://hdl.handle.net/2097/15977.

Full text
Abstract:
Master of Science
Department of Computing and Information Sciences
Mitchell L. Neilsen
People are more inclined to use tablets instead of other computing devices due to their portability and ease of use. A number of desktop applications are now becoming available as tablet applications, with increasing demand in the market. Android is one of the largest and most popular open source platforms; it offers developers complete access to framework APIs in order to develop innovative tablet applications. The objective of this project is to develop an Android application for the U.S. Department of Agriculture (USDA) Structural Design Software. The GUI for this software is developed to run on tablet devices powered by the Android platform. The main features provided by the user interface include:
• Allowing the input to be saved in ASCII text format and displaying the simulation results in PDF format
• Allowing the user to select the type of project or view help contents for the projects
• Allowing the user to build the simulation for the selected type of project
• Allowing the user to send the simulation results by e-mail
The backend for this software is intended to replace the old FORTRAN source files with Java source files. FORTRAN-to-Java translation is performed using the FORTRAN to Java (F2J) translator. F2J is intended to translate old FORTRAN math libraries, and it was not completely successful in translating these FORTRAN programs. To accomplish successful translation, some features (such as common blocks and I/O operations) were removed from the FORTRAN source files before translation. After successful translation, the removed features were added back to the translated Java source files. The simulation results provided by the software are useful to design engineers developing new structural designs.
11

Sun, Pi-Hwa. "Distributed empirical modelling and its application to software system development." Thesis, University of Warwick, 1999. http://wrap.warwick.ac.uk/78784/.

Full text
Abstract:
Empirical Modelling (EM) is a new approach for software system development (SSD) that is particularly suitable for ill-defined, open systems. By regarding a software system as a computer model, EM aims to acquire and construct the knowledge associated with the intended system by situated modelling, in which the modeller interacts with the computer model through continuous observations and experiments in an open-ended manner. In this way, a software system can be constructed that takes account of its context and is adaptable to the rapidly changing environment in which the system is developed and used. This thesis develops principles and tools for distributed Empirical Modelling (DEM). It proposes a framework for DEM by drawing on two crucial theories in social science: distributed cognition and ethnomethodology. This framework integrates cognitive and social processes, allowing multiple modellers to work collaboratively to explore, expand, experience and communicate their knowledge through interaction with their networked computer models. The concept of pretend play is proposed, whereby modellers as internal observers can interact with each other by acting in the role of agents within the intended system in order to shape the agency of such agents. The author has developed a tool called dtkeden to support the proposed DEM framework. Technical issues arising from the implementation of dtkeden and case studies in its use are discussed. The popular star-type logical network configuration and the client/server communication technique are exploited to construct the network environment of this tool. A protocol has been devised and embedded into the communication mechanism to achieve synchronisation of computer models. Four interaction modes have been implemented in dtkeden to provide modellers with different forms of interpersonal interaction.
In addition, using a virtual agent concept that was initially devised to allow definitions of different contexts to co-exist in a computer model, a definitive script can be interpreted as a generic observable that can serve as a reusable definitive pattern. Like experience in everyday life, this definitive pattern can be reused by particularising and adapting it to a specific context. A comparison between generic observables and abstract data types for reuse is given. The application of the framework for DEM to requirements engineering is proposed. The requirements engineering process (REP), currently poorly understood, is reviewed. To integrate requirements engineering with SSD, this thesis suggests re-engineering the REP by taking the context into account. On the basis of DEM, a framework (called SPORE) for the REP is established to guide the process of cultivating requirements in a situated manner. Examples of the use of this framework are presented, and comparisons with other approaches to RE are made.
12

Giboney, Justin Scott. "Development And Application Of An Online Tool For Meta-Analyses Using Design Science Principles." Diss., The University of Arizona, 2014. http://hdl.handle.net/10150/333036.

Full text
Abstract:
Nations are becoming increasingly sensitive about securing their borders, leading border security organizations to investigate systems designed to detect deception through linguistic analysis. Because research on linguistic deception has so far resulted in competing hypotheses, this dissertation uses a design science, information systems approach to build a system that synthesizes research on the linguistics of deception. It also performs a systematic review and meta-analysis to provide information about the linguistics of deception to border security organizations. This dissertation outlines features that should be included in collaborative meta-analysis tools: process restriction, task organization, information sharing, task coordination, terminology definition, and simple interfaces. These features are discussed and implemented in a new system, www.OrionShoulders.com. Through a systematic review and a behavioral experiment on the linguistics of deception using the new system, this dissertation identified seven behavioral and cognitive constructs that could be measured through linguistics during deception: cognitive load, event recollection, guilt, credibility portrayal, distancing, dominant behavior, and hedging. This dissertation contributes a theoretical model that explains these seven constructs and how they are measured. The results of the systematic review and the behavioral experiment showed that hedging terms, first-person pronouns, negative emotion, generalizing terms, and the quantity of words were significantly correlated with deception.
13

Addanki, Nikhita. "Android application for USDA (U.S. Department of Agriculture) structural design software." Kansas State University, 2014. http://hdl.handle.net/2097/17628.

Full text
Abstract:
Master of Science
Department of Computing and Information Sciences
Mitchell L. Neilsen
The computer industry has seen a growth in the development of mobile applications over the last few years. Tablet and mobile applications are preferred over their desktop versions due to their increased accessibility and usability. Android is the most popular mobile OS in the world. It not only provides a world-class platform for creating apps, but also offers an open marketplace for distributing them to Android users everywhere. This openness has made it a favorite for consumers and developers alike, leading to strong growth in app consumption. The main objective of the project is to design and develop an Android software application for USDA (U.S. Department of Agriculture) structural design that can be used on Android tablets. The different components of USDA that can be designed using this application are SingleCell, TwinCell, Cchan, Cbasin and Drpws3e. The USDA structural design application was previously developed using FORTRAN, but FORTRAN is not supported by Android tablets, so the F2J translator was used to convert the FORTRAN source files to Java source files, which are supported by Android. In addition, other formatters such as CommonIn, CommonOut, and SwapStreams were used to translate some common blocks of FORTRAN code that cannot be translated by the F2J translator. The developed Android software allows users to access all the different components of USDA structural design. Users can either directly enter the data in the forms provided or upload a file that already has the data stored in it. When the application is run, the output can be accessed as a PDF file. Users can even send the output of a particular component to their personal email address. The output provided by the software application is helpful for design engineers implementing new structural designs.
14

LIU, XIAOHUI. "AN EXPLORATION OF VISUALIZING HELP SUB-SYSTEMS FOR DESIGN APPLICATION SOFTWARE." University of Cincinnati / OhioLINK, 2004. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1085115587.

Full text
15

Jones, Anthony Andrew. "Combining data driven programming with component based software development : with applications in geovisualisation and dynamic data driven application systems." Thesis, Aston University, 2008. http://publications.aston.ac.uk/10682/.

Full text
Abstract:
Software development methodologies are becoming increasingly abstract, progressing from low-level assembly and implementation languages such as C and Ada to component-based approaches that can be used to assemble applications using technologies such as JavaBeans and the .NET framework. Meanwhile, model-driven approaches emphasise the role of higher-level models and notations, and embody a process of automatically deriving lower-level representations and concrete software implementations.
16

Hannah, Jason. "Design pattern usage in designing web services for a video game inventory application /." View PDF document on the Internet, 2005. http://library.athabascau.ca/drr/download.php?filename=scis/JasonHannah.PDF.

Full text
17

Perks, Oliver F. J. "Addressing parallel application memory consumption." Thesis, University of Warwick, 2013. http://wrap.warwick.ac.uk/58493/.

Full text
Abstract:
Recent trends in computer architecture are furthering the gap between CPU capabilities and those of the memory system. The rise of multi-core processors is having a dramatic effect on memory interactions, not just with respect to performance but crucially to capacity. The slow growth of DRAM capacity, coupled with configuration limitations, is driving up the cost of memory systems as a proportion of total HPC platform cost. As a result, scientific institutions are increasingly interested in application memory consumption, and in justifying the cost associated with maintaining high memory-per-core ratios. By studying the scaling behaviour of applications, both in terms of runtime and memory consumption, we are able to demonstrate a decrease in workload efficiency in low memory environments, resulting from poor memory scalability. Current tools are lacking in performance and analytical capabilities, motivating the development of a new suite of tools for capturing and analysing memory consumption in large scale parallel applications. By observing and analysing memory allocations we are able to record not only how much but, more crucially, where and when an application uses its memory. We use this analysis to look at some of the key principles in application scaling, such as processor decomposition, parallelisation models and runtime libraries, and their associated effects on memory consumption. We demonstrate how the data storage model of OpenMPI implementations inherently prevents scaling due to memory requirements, and investigate the benefits of different solutions. Finally, we show how, by analysing information gathered during application execution, we can automatically generate models to predict application memory consumption at different scales and runtime configurations. In addition we predict, using these models, how implementation changes could affect the memory consumption of an industry-strength benchmark.
18

Soltani, Hamidreza. "Development and application of real-time and interactive software for complex system." Thesis, University of Central Lancashire, 2016. http://clok.uclan.ac.uk/20443/.

Full text
Abstract:
Soft materials have attracted considerable interest in recent years for predicting the characteristics of phase separation and self-assembly in nanoscale structures. A popular method for demonstrating and simulating the dynamic behaviour of particles (e.g. particle tracking) and for considering the effects of simulation parameters is cell dynamic simulation (CDS). This is a cellular computerisation technique that can be used to investigate different aspects of the morphological topographies of soft material systems. The acquisition of quantitative data from particles is a critical requirement in order to obtain a better understanding of, and to characterise, their dynamic behaviour. To achieve this objective, particle tracking methods that consider quantitative data and focus on different properties and components of particles are essential. Despite the availability of various types of particle tracking used in experimental work, there is no method available that considers uniform computational data. In order to achieve accurate and efficient computational results for the cell dynamic simulation method and particle tracking, two factors are essential: the computing/calculating time-scale and the simulation system size. Consequently, implementing such a complex technique with the available computing algorithms and resources, such as a sequential algorithm, while achieving precise results is difficult and rather expensive. Therefore, it is highly desirable to consider a parallel algorithm and programming model to solve time-consuming and massive computational processing issues. Hence, the gaps between the experimental and computational work need to be filled, and the time consumed by expensive computational calculations reduced, in order to arrive at a uniform computational technique for particle tracking and significant enhancements in speed and execution times.
The work presented in this thesis details a new particle tracking method for integrating diblock copolymers in the form of spheres with a shear flow, and a novel GPU-based parallel acceleration approach to cell dynamic simulation (CDS). In addition, an evaluation of parallel models and architectures (CPUs and GPUs) utilising a mixture of the OpenMP application program interface and the CUDA programming model was developed. Finally, this study presents the performance enhancements achieved with GPU-CUDA: approximately 2 times faster than the multi-threading implementation and 13-14 times quicker than optimised sequential processing for the CDS computations/workloads, respectively.
19

Waters, Matthew. "Application of software engineering tools and techniques to PLC programming : innovation report." Thesis, University of Warwick, 2009. http://wrap.warwick.ac.uk/36897/.

Full text
Abstract:
The software engineering tools and techniques available for use in traditional information systems industries are far more advanced than in the manufacturing and production industries. Consequently there is a paucity of ladder logic programming support tools. Such tools can be used to improve the way in which ladder logic programs are written, to increase the quality and robustness of the code produced and to minimise the risk of software-related downtime. To establish current practice and to ascertain the needs of industry, a literature review and a series of interviews with industrial automation professionals were conducted. Two opportunities for radical improvement were identified: a tool to measure software metrics for code written in ladder logic, and a tool to detect cloned code within a ladder program. Software metrics quantify various aspects of code and can be used to assess code quality, measure programmer productivity, identify weak code and develop accurate costing models with respect to code. They are quicker, easier and cheaper than alternative code-reviewing strategies such as peer review, and allow organisations to make evidence-based decisions with respect to code. Code clones occur because reuse of copied and pasted code increases programmer productivity in the short term, but they make programs artificially large and can spread bugs. Cloned code can be removed with no loss of functionality, dramatically reducing the size of a program. To implement these tools, a compiler front end for ladder logic was first constructed. This included a lexer with 24 lexical modes, 71 macro definitions and 663 token definitions, as well as a parser with 729 grammar rules. The software metrics tool and the clone detection tool perform analyses on an abstract syntax tree, the output from the compiler. The tools have been designed to be as user friendly as possible.
Metrics results are compiled in XML reports that can be imported into spreadsheet applications, and the clone detector generates easily navigable HTML reports for each clone, as well as an index file of all clones that contains hyperlinks to all clone reports. Both tools were demonstrated by analysing real factory code from a Jaguar Land Rover body-in-white line. The metrics tool analysed over 1.5 million lines of ladder logic code contained within 23 files and 8466 routines. The results identified routines that are abnormally complex, in addition to routines that are excessively large. These routines are a likely source of future problems, and action to improve them immediately is recommended. The clone detector analysed 59K lines from a manufacturing cell. The results of this analysis proved that the code could be reduced in volume by 43.9% and found previously undetected bugs. By removing clones from all factory code, the code would be reduced in size by so much that it could run on as many as 25% fewer PLCs, yielding a significant saving on hardware costs alone. De-cloned code is also easier to modify, so this process goes some way towards future-proofing the code.
20

Piggin, Richard Stuart Hadley. "Application and development of fieldbus : executive summary." Thesis, University of Warwick, 1999. http://wrap.warwick.ac.uk/36357/.

Full text
Abstract:
Confusion over fieldbus technology among manufacturers and customers alike is due to a number of factors. The goal of a single global fieldbus standard, the subsequent development of European standards, the recognition of a number of emerging de facto standards and the continued international standardisation of fieldbus technology are still perplexing potential fieldbus users. The initial low supply of, and demand for, suitable devices and compatible controller interfaces, the high cost of control systems and the inertia caused by resistance to change have all contributed to the slow adoption of fieldbus technology by industry. The variable quality of fieldbus documentation has not assisted the acceptance of this new technology. An overview of industrial control systems, fieldbus technology, and present and future trends is given. The quantifiable benefits of fieldbus are identified in the assessment of fieldbus applications, and guidance on the appropriate criteria for the evaluation and selection of fieldbus is presented. End users can use this, together with network planning, to establish the viability, suitability and benefits of various control system architectures and configurations prior to implementation. Enhancements to a network configuration tool are shown to aid control system programming and the provision of comprehensive diagnostics. A guide to fieldbus documentation enables manufacturers to produce clear, consistent fieldbus documentation. The safety-related features for a machine safety fieldbus are also determined for an existing network technology. Demonstrators have been produced to show the novel use of fieldbus technology in different areas. Transitory connections are utilised to reduce complexity and increase functionality. A machine safety fieldbus is evaluated in the first installation of a fully networked control application. Interoperability of devices from many different manufacturers and the benefits of fieldbus are proven.
Experience gained during the membership of the British Standards Institution AMT/7 Committee identified the impact of standards and legislation on fieldbus implementation and highlighted the flawed use of standards to promote fieldbus technology. The Committee prepared a Guide to the evaluation of fieldbus specifications, a forthcoming publication by the BSI. The Projects presented have increased and developed the appropriate use of fieldbus technology through novel application, technical enhancement, demonstration and knowledge dissemination.
APA, Harvard, Vancouver, ISO, and other styles
21

Tsao, Heng-Jui. "Personal Software Process (PSP) Scriber." CSUSB ScholarWorks, 2002. https://scholarworks.lib.csusb.edu/etd-project/2140.

Full text
Abstract:
Personal Software Process (PSP) Scriber is a Web-based software engineering tool designed to implement an automatic system for performing PSP. The basis of this strategy is a set of tools to facilitate collection and analysis of development data. By analyzing the collected data, the developer can make informed, accurate decisions about their individual development effort.
APA, Harvard, Vancouver, ISO, and other styles
22

Cox, Tyler L. "Development of ETSU Student Life Android Application." Digital Commons @ East Tennessee State University, 2014. https://dc.etsu.edu/honors/231.

Full text
Abstract:
In this thesis, the author describes his journey creating and developing a Student Life Application for East Tennessee State University. The thesis documents his development process and reflects on the struggles and victories in the creation of this application.
APA, Harvard, Vancouver, ISO, and other styles
23

Fleming, David M. "Foundations of object-based specification design." Morgantown, W. Va. : [West Virginia University Libraries], 1997. http://etd.wvu.edu/templates/showETD.cfm?recnum=1036.

Full text
Abstract:
Thesis (Ph. D.)--West Virginia University, 1997.
Title from document title page. Document formatted into pages; contains xi, 161 p. : ill. Includes abstract. Includes bibliographical references (p. 158-161).
APA, Harvard, Vancouver, ISO, and other styles
24

Srivastava, Vikesh. "Bias : bioinformatics integrated application software, design and implementation which was written as part of my masters degree requirements." Thesis, McGill University, 2004. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=81442.

Full text
Abstract:
This thesis introduces a development platform especially tailored to Bioinformatics research and software development. Bias (Bioinformatics Integrated Application Software) provides the tools necessary for carrying out integrative Bioinformatics research. It follows an object-relational strategy for providing persistent objects, allows third-party tools to be easily incorporated within the system, and supports standards and data-exchange protocols common to Bioinformatics. It is not enough to present the architecture of Bias without showing a working example which exploits all components of Bias, thus demonstrating its utility. We present the architecture of Bias and provide a full example based on the work of Segal et al. for finding transcriptional regulatory relationships. Bias is an open-source project and is freely available to all interested users.
APA, Harvard, Vancouver, ISO, and other styles
25

Gill, Mandeep Singh. "Application of software engineering methodologies to the development of mathematical biological models." Thesis, University of Oxford, 2013. http://ora.ox.ac.uk/objects/uuid:35178f3a-7951-4f1c-aeab-390cdd622b05.

Full text
Abstract:
Mathematical models have been used to capture the behaviour of biological systems, from low-level biochemical reactions to multi-scale whole-organ models. Models are typically based on experimentally-derived data, attempting to reproduce the observed behaviour through mathematical constructs, e.g. using Ordinary Differential Equations (ODEs) for spatially-homogeneous systems. These models are developed and published as mathematical equations, yet are of such complexity that they necessitate computational simulation. This computational model development is often performed in an ad hoc fashion by modellers who lack extensive software engineering experience, resulting in brittle, inefficient model code that is hard to extend and reuse. Several Domain Specific Languages (DSLs) exist to aid capturing such biological models, including CellML and SBML; however these DSLs are designed to facilitate model curation rather than simplify model development. We present research into the application of techniques from software engineering to this domain; starting with the design, development and implementation of a DSL, termed Ode, to aid the creation of ODE-based biological models. This introduces features beneficial to model development, such as model verification and reproducible results. We compare and contrast model development to large-scale software development, focussing on extensibility and reuse. This work results in a module system that enables the independent construction and combination of model components. We further investigate the use of software engineering processes and patterns to develop complex modular cardiac models. Model simulation is increasingly computationally demanding, thus models are often created in complex low-level languages such as C/C++. We introduce a highly-efficient, optimising native-code compiler for Ode that generates custom, model-specific simulation code and allows use of our structured modelling features without degrading performance. 
Finally, in certain contexts the stochastic nature of biological systems becomes relevant. We introduce stochastic constructs to the Ode DSL that enable models to use Stochastic Differential Equations (SDEs), the Stochastic Simulation Algorithm (SSA), and hybrid methods. These use our native-code implementation and demonstrate highly-efficient stochastic simulation, beneficial as stochastic simulation is highly computationally intensive. We introduce a further DSL to model ion channels declaratively, demonstrating the benefits of DSLs in the biological domain. This thesis demonstrates the application of software engineering methodologies, and in particular DSLs, to facilitate the development of both deterministic and stochastic biological models. We demonstrate their benefits with several features that enable the construction of large-scale, reusable and extensible models. This is accomplished whilst providing efficient simulation, creating new opportunities for biological model development, investigation and experimentation.
APA, Harvard, Vancouver, ISO, and other styles
26

Flitman, Andrew. "Towards the application of artificial intelligence techniques for discrete event simulation." Thesis, University of Warwick, 1986. http://wrap.warwick.ac.uk/51317/.

Full text
Abstract:
The possibility of incorporating Artificial Intelligence (A.I.) techniques into Visual Interactive Discrete Event Simulation was examined. After a study of the current state of the art, work was undertaken to investigate the usefulness of PROLOG as a simulation language. This led to the development of a working Simulation Engine, allowing simulations to be developed quickly. The way PROLOG facilitated development of the engine indicated its possible usefulness as a medium for controlling external simulations. Tests on the feasibility of this were made, resulting in the development of an assembler link which allows PROLOG to remotely communicate with and control procedural-language programs resident on a separate microcomputer. Experiments using this link were then made to test the application of A.I. techniques to current visual simulations. Studies were carried out on controlling the simulation, monitoring and learning from a simulation, the use of simulation as a window onto expert system performance, and the manipulation of the simulation. This study represents a practical attempt to understand and develop the possible uses of A.I. techniques within visual interactive simulation. The thesis concludes with a discussion of the advantages attainable through such a merger of techniques, followed by areas in which the research may be expanded.
APA, Harvard, Vancouver, ISO, and other styles
27

Lim, Hojung Goel Amrit L. "Support vector parameter selection using experimental design based generating set search (SVEG) with application to predictive software data modeling." Related electronic resource: Current Research at SU : database of SU dissertations, recent titles available full text, 2004. http://wwwlib.umi.com/cr/syr/main.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

O'Neill, Simon John. "A fundamental study into the theory and application of the partial metric spaces." Thesis, University of Warwick, 1998. http://wrap.warwick.ac.uk/73518/.

Full text
Abstract:
Our aim is to establish the partial metric spaces within the context of Theoretical Computer Science. We present a thesis in which the big "idea" is to develop a more (classically) analytic approach to problems in Computer Science. The partial metric spaces are the means by which we discuss our ideas. We build directly on the initial work of Matthews and Wadge in this area. Wadge introduced the notion of healthy programs corresponding to complete elements in a semantic domain, and of size being the extent to which a point is complete. To extend these concepts to a wider context, Matthews placed this work in a generalised metric framework. The resulting partial metric axioms are the starting point for our own research. In an original presentation, we show that T0-metrics are either quasi-metrics, if we discard symmetry, or partial metrics, if we allow non-zero self-distances. These self-distances are how we capture Wadge's notion of size (or weight) in an abstract setting, and Edalat's computational models of metric spaces are examples of partial metric spaces. Our contributions to the theory of partial metric spaces include abstracting their essential topological characteristics to develop the hierarchical spaces, investigating their T0-topological properties, and developing metric notions such as completions. We identify a quantitative domain to be a continuous domain with a T0-metric inducing the Scott topology, and introduce the weighted spaces as a special class of partial metric spaces derived from an auxiliary weight function. Developing a new area of application, we model deterministic Petri nets as dynamical systems, which we analyse to prove liveness properties of the nets. Generalising to the framework of weighted spaces, we can develop model-independent analytic techniques.
To develop a framework in which we can perform the more difficult analysis required for non-deterministic Petri nets, we identify the measure-theoretic aspects of partial metric spaces as fundamental, and use valuations as the link between weight functions and information measures. We are led to develop a notion of local sobriety, which itself appears to be of interest.
APA, Harvard, Vancouver, ISO, and other styles
29

Nakhimovski, Iakov. "Modeling and Simulation of Contacting Flexible Bodies in Multibody Systems." Licentiate thesis, Linköping University, Linköping University, PELAB - Programming Environment Laboratory, 2002. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-5731.

Full text
Abstract:

This thesis summarizes the equations, algorithms and design decisions necessary for dynamic simulation of flexible bodies with moving contacts. The assumed general shape function approach is also presented. The approach is expected to be computationally less expensive than FEM approaches and easier to use than other reduction techniques. Additionally, the described technique enables studies of the residual stress release during grinding of flexible bodies.

The overall software system design for the flexible multi-body simulation system BEAST is presented, and the specifics of the flexible modeling are specifically addressed. An industrial application example is also described in the thesis. The application presents some results from a case where the developed system is used for simulation of flexible ring grinding with material removal.


Report code: LiU-TEK-LIC-2002:62.
APA, Harvard, Vancouver, ISO, and other styles
30

Herndon, Nic. "ATTITUDE a tidy touchscreen interface to a UML development environment /." abstract and full text PDF (free order & download UNR users only), 2008. http://0-gateway.proquest.com.innopac.library.unr.edu/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:1456410.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Warnock, David. "The application of multiple modalities to improve home care and reminder systems." Thesis, University of Glasgow, 2014. http://theses.gla.ac.uk/5164/.

Full text
Abstract:
Existing home care technology tends to consist of pre-programmed systems limited to one or two interaction modalities. This can make such systems inaccessible to people with sensory impairments and unable to cope with a dynamic and heterogeneous environment such as the home. This thesis presents research that considers how home care technology can be improved through employing multiple visual, aural, tactile and even olfactory interaction methods. A wide range of modalities were tested to gather a better insight into their properties and merits. That information was used to design and construct Dyna-Cue, a prototype multimodal reminder system. Dyna-Cue was designed to use multiple modalities and to switch between them in real time to maintain higher levels of effectiveness and acceptability. The Dyna-Cue prototype was evaluated against other models of reminder delivery and was shown to be an effective and appropriate tool that can help people to manage their time and activities.
APA, Harvard, Vancouver, ISO, and other styles
32

Wu, Jichuan. "Web-based e-mail client for computer science." CSUSB ScholarWorks, 2003. https://scholarworks.lib.csusb.edu/etd-project/2462.

Full text
Abstract:
The project is a web e-mail application that provides a web page interface for all CSCI faculty, staff and students to handle their e-mail. The application is written in JSP, Java Servlets, JavaScript and custom JSP tag libraries. Regular e-mail capabilities have been enhanced by a feature allowing users to store and manage messages by day (store to daily folders, view in daily folders, append notes for that day).
APA, Harvard, Vancouver, ISO, and other styles
33

Borges, Rafael. "A neural-symbolic system for temporal reasoning with application to model verification and learning." Thesis, City University London, 2012. http://openaccess.city.ac.uk/1303/.

Full text
Abstract:
The effective integration of knowledge representation, reasoning and learning into a robust computational model is one of the key challenges in Computer Science and Artificial Intelligence. In particular, temporal models have been fundamental in describing the behaviour of Computational and Neural-Symbolic Systems. Furthermore, knowledge acquisition of correct descriptions of the desired system’s behaviour is a complex task in several domains. Several efforts have been directed towards the development of tools that are capable of learning, describing and evolving software models. This thesis contributes to two major areas of Computer Science, namely Artificial Intelligence (AI) and Software Engineering. From an AI perspective, we present a novel neural-symbolic computational model capable of representing and learning temporal knowledge in recurrent networks. The model works in an integrated fashion. It enables the effective representation of temporal knowledge, the adaptation of temporal models to a set of desirable system properties and effective learning from examples, which in turn can lead to symbolic temporal knowledge extraction from the corresponding trained neural networks. The model is sound from a theoretical standpoint, but is also tested in a number of case studies. An extension to the framework is shown to tackle aspects of verification and adaptation under the Software Engineering perspective. As regards verification, we make use of established techniques for model checking, which allow the verification of properties described as temporal models and return counter-examples whenever the properties are not satisfied. Our neural-symbolic framework is then extended to deal with different sources of information. This includes the translation of model descriptions into the neural structure, the evolution of such descriptions through learning from counter-examples, and also the learning of new models from simple observation of their behaviour.
In summary, we believe the thesis describes a principled methodology for temporal knowledge representation, learning and extraction, shedding new light on predictive temporal models, not only from a theoretical standpoint, but also with respect to a potentially large number of applications in AI, Neural Computation and Software Engineering, where temporal knowledge plays a fundamental role.
APA, Harvard, Vancouver, ISO, and other styles
34

Xiong, Xiaoyu. "Adaptive multiple importance sampling for Gaussian processes and its application in social signal processing." Thesis, University of Glasgow, 2017. http://theses.gla.ac.uk/8542/.

Full text
Abstract:
Social signal processing aims to automatically understand and interpret social signals (e.g. facial expressions and prosody) generated during human-human and human-machine interactions. Automatic interpretation of social signals involves two fundamentally important aspects: feature extraction and machine learning. So far, machine learning approaches applied to social signal processing have mainly focused on parametric approaches (e.g. linear regression) or non-parametric models such as support vector machine (SVM). However, these approaches fall short of taking into account any uncertainty as a result of model misspecification or lack interpretability for analyses of scenarios in social signal processing. Consequently, they are less able to understand and interpret human behaviours effectively. Gaussian processes (GPs), that have gained popularity in data analysis, offer a solution to these limitations through their attractive properties: being non-parametric enables them to flexibly model data and being probabilistic makes them capable of quantifying uncertainty. In addition, a proper parametrisation in the covariance function makes it possible to gain insights into the application under study. However, these appealing properties of GP models hinge on an accurate characterisation of the posterior distribution with respect to the covariance parameters. This is normally done by means of standard MCMC algorithms, which require repeated expensive calculations involving the marginal likelihood. Motivated by the desire to avoid the inefficiencies of MCMC algorithms rejecting a considerable number of expensive proposals, this thesis has developed an alternative inference framework based on adaptive multiple importance sampling (AMIS). 
In particular, this thesis studies the application of AMIS for Gaussian processes in the case of a Gaussian likelihood, and proposes a novel pseudo-marginal-based AMIS (PM-AMIS) algorithm for non-Gaussian likelihoods, where the marginal likelihood is unbiasedly estimated. Experiments on benchmark data sets show that the proposed framework outperforms the MCMC-based inference of GP covariance parameters in a wide range of scenarios. The PM-AMIS classifier - based on Gaussian processes with a newly designed group-automatic relevance determination (G-ARD) kernel - has been applied to predict whether a Flickr user is perceived to be above the median or not with respect to each of the Big-Five personality traits. The results show that, apart from the high prediction accuracies achieved (up to 79% depending on the trait), the parameters of the G-ARD kernel allow the identification of the groups of features that better account for the classification outcome and provide indications about cultural effects through their weight differences. This demonstrates the value of the proposed non-parametric probabilistic framework for social signal processing. Feature extraction in signal processing is dominated by various methods based on the short-time Fourier transform (STFT). Recently, Hilbert spectral analysis (HSA), a new representation of signals which is fundamentally different from STFT, has been proposed. This thesis is also the first attempt to investigate the extraction of features from this newly proposed HSA and its application in social signal processing. The experimental results reveal that, using features extracted from the Hilbert spectrum of voice data of female speakers, prediction accuracies of up to 81% can be achieved when predicting the speakers' Big-Five personality traits, and hence show that HSA can work as an effective alternative to STFT for feature extraction in social signal processing.
APA, Harvard, Vancouver, ISO, and other styles
35

Lin, Burch. "Neural networks and their application to metrics research." Virtual Press, 1996. http://liblink.bsu.edu/uhtbin/catkey/1014859.

Full text
Abstract:
In the development of software, time and resources are limited. As a result, developers collect metrics in order to more effectively allocate resources to meet time constraints. For example, if one could collect metrics to determine, with accuracy, which modules were error-prone and which were error-free, one could allocate personnel to work only on those error-prone modules. There are three items of concern when using metrics. First, with the many different metrics that have been defined, one may not know which metrics to collect. Secondly, the amount of metrics data collected can be staggering. Thirdly, interpretation of multiple metrics may provide a better indication of error-proneness than any single metric. This thesis researched the accuracy of a neural network, an unconventional model, in building a model that can determine whether a module is error-prone from an input of a suite of metrics. The accuracy of the neural network model was compared with the accuracy of a logistic regression model, a standard statistical model, that has the same input and output. In other words, we attempted to find whether metrics correlated with error-proneness. The metrics were gathered from three different software projects. The suite of metrics that was used to build the models was a subset of a larger collection of metrics that was reduced using factor analysis. The conclusion of this thesis is that, from the projects analyzed, neither the neural network model nor the logistic regression model provides acceptable accuracy for real use. We cannot conclude whether one model provides better accuracy than the other.
Department of Computer Science
APA, Harvard, Vancouver, ISO, and other styles
36

Medynskiy, Yevgeniy. "Design and evaluation of a health-focused personal informatics application with support for generalized goal management." Diss., Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/43710.

Full text
Abstract:
The practice of health self-management offers behavioral and problem-solving strategies that can effectively promote responsibility for one's own wellbeing, improve one's health outcomes, and decrease the cost of health services. Personal informatics applications support health self-management by allowing their users to easily track personal health information, and to review the changes and patterns in this information. Over the course of the past several years, I have pursued a research agenda centered on understanding how personal health informatics applications can further support the strategies of health self-management--specifically those relating to goal management and behavior change. I began by developing a flexible personal informatics tool, called Salud!, that I could use to observe real-world goal management and behavior change strategies, as well as use to evaluate new interfaces designed to assist in goal management. Unlike existing personal informatics tools, Salud! allows users to self-define the information that they will track, which allows tracking of highly personal and meaningful data that may not be possible to track with other tools. It also enables users to share their account data with facilitators (e.g. fitness trainers, nutritionists, etc.) who can provide input and feedback. Salud! was built on top of an infrastructure consisting of a stack of modular services that make it easier for others to develop and/or evaluate a variety of personal informatics applications. Several research teams used this infrastructure to develop and deploy a variety of custom projects. Informal analysis of their efforts showed an unmet need for data storage and visualization services for home- and health-based sensor data. To design a goal management support tool for Salud!, I first conducted a meta-analysis of relevant research literature to cull a set of proven goal management strategies.
The key outcome of this work was an operationalization of Action Plans--goal management strategies that are effective at supporting behavior change. I then deployed Salud! in two fitness-related contexts to observe and understand the breadth of health-related behavior change and goal management practices. Findings from these deployments showed that personal informatics tools are most helpful to individuals who are able to articulate short-term, actionable goals, and who are able to integrate self-tracking into their daily activities. The literature meta-analysis and the two Salud! deployments provided formative requirements for a goal management interaction that would both incorporate effective goal management strategies and support the breadth of real-world goals. I developed a model of the goal management process as the framework for such an interaction. This model enables goals to be represented, evaluated, and visualized, based on a wide range of user objectives and data collection strategies. Using this model, I was able to develop a set of interactions that allow users of Salud! to manage their personal goals within the application. The generalized goal management model shows the inherent difficulty in supporting open-ended, highly personalized goal management. To function generically, Salud! requires facilitator input to correctly process goals and meaningfully classify their attributes. However, for specific goals represented by specific data collection strategies, it is possible to fully- or semi-automate the goal management process. I ran a large-scale evaluation of Salud! with the goal management interaction to evaluate the effectiveness of a fully-automated goal management interaction. The evaluation consisted of a common health self-management intervention: a simple fitness program to increase participants' daily step count. 
The results of this evaluation suggest that the goal management interaction may improve the rate of goal realization among users who are initially less active and less confident in their ability to succeed. Additionally, this evaluation showed that, while it can significantly increase participants' step count, a fully automated fitness program is not as effective as traditional, instructor-led fitness programs. However, it is much easier to administer and much less resource intensive, showing that it can be utilized to rapidly evaluate concrete goal management strategies.
APA, Harvard, Vancouver, ISO, and other styles
37

McNamara, Caolán. "Runtime automated detection of out of process resource mismanagement in the X Windowing System." [Denver, Colo.] : Regis University, 2009. http://adr.coalliance.org/codr/fez/view/codr:31.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Claesén, Daniel. "MCapture; An Application Suite for Streaming Audio over Networks." Thesis, Linköping University, Department of Computer and Information Science, 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-4387.

Full text
Abstract:

The purpose of this thesis is to develop software to stream input and output audio from a large number of computers in a network to one specific computer in the same network. This computer will save the audio to disk. The audio that is to be saved will consist mostly of spoken communication. The saved audio is to be used in a framework for modeling and visualization.

There are three major problems involved in designing a software to fill this purpose: recording both input and output audio at the same time, efficiently receiving multiple audio-streams at once and designing an interface where finding and organizing the computers to record audio from is easy.

The software developed to solve these problems consists of two parts: a server and a client. The server captures the input (microphone) and output (speaker) audio from a computer. To capture the output and input audio simultaneously, an external application named Virtual Audio Cable (VAC) is used. The client connects to multiple servers and receives the captured audio. Each of the client’s server connections is handled by its own thread. To make it easy to find available servers, an Automatic Server Discovery System has been developed. To simplify the organization of the servers, they are displayed in a tree view specifically designed for this purpose.

APA, Harvard, Vancouver, ISO, and other styles
39

Robeson, Aaron. "Airwaves: A Broadcasting Web Application Supplemented by a Neural Network Transcription Model." Ohio University Honors Tutorial College / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=ouhonors155603038153628.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Pearson, Edward R. S. "The multiresolution Fourier transform and its application to polyphonic audio analysis." Thesis, University of Warwick, 1991. http://wrap.warwick.ac.uk/35769/.

Full text
Abstract:
Many people listen to, or at least hear, some form of music almost every day of their lives. However, only some of the processes involved in creating the sensations and emotions evoked by the music are understood in any detail. The problem of unravelling these processes has been much less thoroughly investigated than the comparable topics of speech and image recognition; this has almost certainly been caused by the existence of a greater number of applications awaiting this knowledge. Nevertheless, the area of music perception has attracted some attention over the last few decades and there is an increasing interest in the subject largely arising from the availability of suitably powerful technology. It is becoming feasible to use such technology to construct artificial hearing devices which attempt to reproduce the functionality of the human auditory system. The construction of such devices is both a powerful method of verifying operational theories of the human auditory system and may ultimately provide a means of analysing music in more detail than man. In addition to the analytical benefits, techniques developed in this manner are readily applicable to the creative aspects of music, such as the composition of new music and musical sounds.
APA, Harvard, Vancouver, ISO, and other styles
41

Kai, Cheng. "The application of artificial intelligence to the development of a design support system for externally pressurised journal bearings." Thesis, Liverpool John Moores University, 1994. http://researchonline.ljmu.ac.uk/4938/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Dai, Zhen Zhong. "Workflow application and workflow engine." Thesis, University of Macau, 2005. http://umaclib3.umac.mo/record=b1447902.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Jiang, Feng. "Capturing event metadata in the sky : a Java-based application for receiving astronomical internet feeds : a thesis presented in partial fulfilment of the requirements for the degree of Master of Computer Science in Computer Science at Massey University, Auckland, New Zealand." Massey University, 2008. http://hdl.handle.net/10179/897.

Full text
Abstract:
When an astronomical observer discovers a transient event in the sky, how can the information be immediately shared and delivered to others? Not long ago, people shared information about what they discovered in the sky by books, telegraphs and telephones. The new way of transferring event data is via the Internet. Information about astronomical events can be packaged and published online as an Internet feed. To receive these packed data, Internet feed listener software is required on a terminal computer. In other applications, the listener would connect to an intelligent robotic telescope network and automatically drive a telescope to capture transient astrophysical phenomena. However, because the technologies for transferring astronomical event data are at an early stage, the only resource available is the Perl-based Internet feed listener developed by the eSTAR team. In this research, a Java-based Internet feed listener was developed. The application supports more features than the Perl-based application. By exploiting the benefits of Java, the application is able to receive, parse and manage Internet feed data efficiently through a friendly user interface. Keywords: Java, socket programming, VOEvent, real-time astronomy
APA, Harvard, Vancouver, ISO, and other styles
44

Sanjeepan, Vivekananthan. "A service-oriented, scalable, secure framework for Grid-enabling legacy scientific applications." [Gainesville, Fla.] : University of Florida, 2005. http://purl.fcla.edu/fcla/etd/UFE0013276.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Diwan, Piyush D. "A Software Product Line Engineering Approach to Building A Modeling and Simulation as a Service (M&SaaS) Application Store." The Ohio State University, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=osu1385393779.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Glöckner, Stephan. "Application of automated feedback for the improvement of data quality in web-based clinical collaborations." Thesis, University of Birmingham, 2017. http://etheses.bham.ac.uk//id/eprint/7620/.

Full text
Abstract:
Background: Clinical research registries are rarely driven by data quality assurance. However, the quality of data can have a huge impact on the performance and outcome of any trial using registry data. Therefore, data quality assurance procedures for cost reduction and data process improvements have to be implemented in research registries. Hypothesis: This research proposes that web-based data quality feedback can motivate registry users, increase their contributions, and ultimately improve the quality of registry data and its (re-)use to support clinical trials, thereby reducing the costs and need for study monitors. Method: To explore the causes of low data quality and user motivation, a survey and an assessment of quality indicators in a multicentre clinical setting were performed. Subsequently, a web-based feedback framework was developed and evaluated. This was explored in the international Niemann-Pick disease registry (INPDR) and two clinical trials associated with the European Network for the Study of Adrenal Tumours (ENSAT). Results: The survey and framework evaluation highlight the effectiveness of web-based automated data quality feedback. Case studies showed an increase in data quality over the observation period. Conclusion: Centralised data monitoring requires a general framework that can be adjusted for a variety of trials and studies. This research highlights how biomedical research registries have to be designed with a focus on data quality and feedback mechanisms.
APA, Harvard, Vancouver, ISO, and other styles
47

Gu, Yong Fei. "Web service application : multi-platform Event-Ticketing System." Thesis, University of Macau, 2002. http://umaclib3.umac.mo/record=b1636972.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Tian, Xin Mei. "UML-based functional testing for web application systems." Thesis, University of Macau, 2003. http://umaclib3.umac.mo/record=b1636994.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Govindaswamy, Kirthilakshmi. "An API for adaptive loop scheduling in shared address space architectures." Master's thesis, Mississippi State : Mississippi State University, 2003. http://sun.library.msstate.edu/ETD-db/theses/available/etd-07082003-122028/restricted/kirthi%5Fthesis.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Rouvoy, Romain. "Une démarche à granularité extrêmement fine pour la construction de canevas intergiciels hautement adaptables : application aux services de transactions." Phd thesis, Université des Sciences et Technologie de Lille - Lille I, 2006. http://tel.archives-ouvertes.fr/tel-00119794.

Full text
Abstract:
This thesis addresses the problem of building highly adaptable middleware. Such middleware is characterised by the great diversity of the functionality it provides. In the transactional domain, this diversity concerns not only transaction models, concurrency-control and failure-recovery protocols, but also integration norms and standards. Our proposal consists in defining a middleware framework that capitalises on the diversity of the transactional domain and makes it possible to build highly adaptable transaction services. This kind of service requires an extremely fine-grained construction approach in order to adapt the middleware's many characteristics.

We therefore propose to complete the approach derived from exo-middleware with four new elements. We define Fraclet, an annotation-based programming model, to support the programming of the middleware's functional abstractions. We then propose a language for describing and verifying architectural patterns, making the modelling of architectural abstractions more reliable. These first two elements are used to design a component-based middleware framework that uses design patterns as an extensible architectural structure. Finally, we describe the possible configurations using different high-level models dedicated to the middleware's characteristics. We illustrate these concepts by presenting GoTM, a component-based middleware framework for building highly adaptable transaction services.

Our approach is validated through three original experiments. First, we propose to ease the integration of transaction services into middleware platforms by defining transaction demarcation policies that are independent of both the platform and the type of service being integrated. Next, we define a transaction service that composes several personalities simultaneously in order to facilitate transactional interoperability between heterogeneous applications. Finally, we are able to select among different two-phase commit protocols to optimise transaction execution time in response to changes in the application's execution conditions.
APA, Harvard, Vancouver, ISO, and other styles
