Dissertations / Theses on the topic 'Extended Parallel Processing Model'

Consult the top 50 dissertations / theses for your research on the topic 'Extended Parallel Processing Model.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Raneri, April. "SOURCE REPRESENTATION AND FRAMING IN CHILDHOOD IMMUNIZATION COMMUNICATION." Master's thesis, University of Central Florida, 2010. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/3522.

Full text
Abstract:
Research has indicated a strong interest in knowing who is represented, and how information is presented, in communication about childhood immunization. This study uses a two-part analysis to examine source representation and framing in childhood immunization communication. In the quantitative analysis, articles from the New York Times and USA Today were examined for their source representation, their use of fear appeals (through the Extended Parallel Processing Model, EPPM), and their use of frames (through the application of Prospect Theory). A qualitative semiotic analysis was conducted on 36 images that appeared on www.yahoo.com and www.google.com to find common themes in who is represented and how information is portrayed through the images. Results found a high prevalence of representation from the Centers for Disease Control and Prevention, other governmental agencies, and health/medical professionals in both the articles and the images.
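The EPPM's core prediction, which underlies the fear-appeal coding in this study, can be sketched as a small classifier. This is an illustrative sketch of the model's logic only (the function name and the 0.5 threshold are hypothetical, not part of the thesis's coding scheme):

```python
def eppm_response(perceived_threat: float, perceived_efficacy: float,
                  threshold: float = 0.5) -> str:
    """Classify the predicted audience response under the Extended
    Parallel Processing Model: low threat -> the message is ignored;
    high threat with high efficacy -> danger control (protective action);
    high threat with low efficacy -> fear control (defensive avoidance).
    The threshold value is illustrative."""
    if perceived_threat < threshold:
        return "no response"          # threat too weak to motivate processing
    if perceived_efficacy >= threshold:
        return "danger control"       # adaptive: act on the recommendation
    return "fear control"             # maladaptive: manage the fear instead
```

The key point the sketch captures is that a fear appeal only works when the efficacy component keeps pace with the threat component.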
M.A.
Nicholson School of Communication
Sciences
Communication MA
APA, Harvard, Vancouver, ISO, and other styles
2

Moody, G. (Gregory). "A multi-theoretical perspective on IS security behaviors." Doctoral thesis, Oulun yliopisto, 2011. http://urn.fi/urn:isbn:9789514295614.

Full text
Abstract:
Organizations and individuals rely increasingly on technologies and networks. These environments are likewise rife with dangers, many of which could be avoided if computer users followed general security guidelines or procedures. Despite the ever-increasing threat, little research has addressed or explained why individuals purposefully engage in behaviors that make them more vulnerable to these threats, rather than avoiding or protecting themselves from them. Despite the advantage that could be gained by understanding the motivations behind such behaviors, research addressing them is lacking or focused on very narrow theoretical bases. This dissertation addresses this research gap by focusing on security-related behaviors that have yet to be addressed in this research stream, and by using novel theoretical perspectives that increase our insight into these types of behaviors. Four studies (n = 1,430) are tested and reported here that support the four behaviors and theoretical perspectives on which this dissertation focuses. By considering additional theories, constructs, and theoretical perspectives, this dissertation makes several important contributions to the study of security-related behaviors. The results provide new insights into the motivations behind the purposeful enactment of behaviors that increase one's vulnerability to technological threats and risks.
3

Zuriekat, Faris Nabeeh. "Parallel remote interactive management model." CSUSB ScholarWorks, 2007. https://scholarworks.lib.csusb.edu/etd-project/3222.

Full text
Abstract:
This thesis discusses PRIMM, which stands for Parallel Remote Interactive Management Model. PRIMM is a framework for object-oriented applications that relies on grid computing. It works as an interface between remote applications and the parallel computing system. The thesis shows the capabilities that can be achieved with the PRIMM architecture.
4

Patinho, Pedro José Grilo Lopes. "An abstract model for parallel execution of prolog." Doctoral thesis, Universidade de Évora, 2017. http://hdl.handle.net/10174/21002.

Full text
Abstract:
Logic programming has been used with great success in a broad range of fields, from artificial intelligence to general-purpose applications. Through its declarative semantics, making use of logical conjunctions and disjunctions, logic programming languages present two types of implicit parallelism: and-parallelism and or-parallelism. This thesis focuses mainly on Prolog as a logic programming language, presenting an abstract model for the parallel execution of Prolog programs that leverages the Extended Andorra Model (EAM) proposed by David H. D. Warren, which exploits the implicit parallelism in the language. A meta-compiler implementation for an intermediate language for the proposed model is also presented. This work also surveys the state of the art in Prolog compilers, both sequential and parallel, along with a walk-through of current parallel programming frameworks. The main model used for Prolog compiler implementation, the Warren Abstract Machine (WAM), is also analyzed, as well as the WAM's successor for supporting parallelism, the EAM.
5

Widmaier, W. Robert, and Rick Schuh. "Naval Model Testing Using Fiber Optics and Parallel Processing." International Foundation for Telemetering, 1989. http://hdl.handle.net/10150/614686.

Full text
Abstract:
International Telemetering Conference Proceedings / October 30-November 02, 1989 / Town & Country Hotel & Convention Center, San Diego, California
The David Taylor Research Center has a large complex of model basins and carriages to satisfy a wide variety of hydrodynamic requirements. Technical support is furnished to US government design agencies, the Maritime Administration, and private organizations. Support involves both experimental and analytical programs related to every type of ship and craft. The stability, control, and maneuvering characteristics of submarines can be determined by performing free-running radio-control experiments. These experiments and analytical methods are used to investigate submarine motion. A requirement arose to operate the radio-controlled model (RCM) in a towing tank housed in a building 3,200 feet long. This tank is located 1,500 feet from the building that houses the data collection and control system mainframe computer. A unique communications, command, and control system has been developed to run experiments using the RCM.
6

Poma, Jonathan Miguel Campos, Emily Yanira De La Cruz Dominguez, Jimmy Armas-Aguirre, and Leonor Gutierrez Gonzalez. "Extended Model for the Early Skin Cancer Detection Using Image Processing." IEEE Computer Society, 2020. http://hdl.handle.net/10757/656579.

Full text
Abstract:
The full text of this work is not available in the UPC Academic Repository due to restrictions imposed by the publisher.
In this research paper, we proposed an extended model for the early detection of skin cancer... The purpose is to reduce the waiting time for obtaining a diagnosis; in addition, the function of the dermatoscope has been digitized by using a smartphone and magnifying lenses as an accessory to the mobile device. The proposed model has five phases: 1) The patient is attended by a general practitioner or nurse, previously trained, in any health center that has WiFi or mobile network connectivity, who records the patient's data and captures the skin lesion to be analyzed. 2) The image is placed in cloud storage, which also feeds a website with access exclusive to dermatologists. 3) Images are analyzed in real time using an image recognition service provided by IBM, which is integrated into a cloud-hosted web platform and an Android application. 4) The result of the image processing is viewed by the dermatologist, who makes a remote diagnosis. 5) This diagnosis is received by the general practitioner or nurse, who is responsible for transmitting the diagnosis and treatment to the patient. This model was validated with a group of 60 patients in a network of clinics in Lima, Peru: 28 with early-stage skin cancer, 12 with late-stage skin cancer, and 20 healthy. The obtained result was 97.5% accuracy on the analyzed skin lesions and 95% for healthy patients.
Peer reviewed
7

Wang, Yi-ke. "Using an extended object model for object-oriented parallel simulation of VLSI microprocessors /." free to MU campus, to others for purchase, 1996. http://wwwlib.umi.com/cr/mo/fullcit?p9823320.

Full text
8

Lee, Michael D. "Evaluating a parallel distributed processing model of human conceptual structure /." Title page, table of contents and abstract only, 1992. http://web4.library.adelaide.edu.au/theses/09ARPS/09arpsl479.pdf.

Full text
9

Chown, Timothy John. "An extended facet region model for image representation and analysis." Thesis, University of Southampton, 1991. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.315375.

Full text
10

Zeileis, Achim, and Yves Croissant. "Extended Model Formulas in R. Multiple Parts and Multiple Responses." Department of Statistics and Mathematics, WU Vienna University of Economics and Business, 2009. http://epub.wu.ac.at/1056/1/document.pdf.

Full text
Abstract:
Model formulas are the standard approach for specifying the variables in statistical models in the S language. Although eminently useful in an extremely wide class of applications, they have certain limitations, including being confined to single responses and not providing convenient support for processing formulas with multiple parts. The latter is relevant for models with two or more sets of variables, e.g., regressors/instruments in instrumental-variable regressions, two-part models such as hurdle models, or alternative-specific and individual-specific variables in choice models, among many others. The R package Formula addresses these two problems by providing a new class "Formula" (inheriting from "formula") that accepts an additional formula operator | separating multiple parts, and by allowing all formula operators (including the new |) on the left-hand side to support multiple responses.
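The idea of a multi-part, multi-response formula can be illustrated with a minimal parser. The package itself is written in R; this Python sketch only mimics the surface syntax that the Formula package's | operator enables, and the function name is hypothetical:

```python
def parse_formula(formula: str):
    """Split an R-style multi-part formula such as
    'y1 + y2 ~ x1 + x2 | z1 + z2' into its response terms (left of '~')
    and its part-wise regressor terms (right-hand side split on '|').
    Purely illustrative string handling, not the Formula package's parser."""
    lhs, rhs = formula.split("~")
    responses = [t.strip() for t in lhs.split("+") if t.strip()]
    parts = [[t.strip() for t in part.split("+")]
             for part in rhs.split("|")]
    return responses, parts
```

For example, an instrumental-variable model would keep its regressors and instruments in separate parts of the same formula string.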
Series: Research Report Series / Department of Statistics and Mathematics
11

Campbell, Duncan Karl Gordon. "Clumps : a candidate model of efficient, general purpose parallel computation." Thesis, University of Exeter, 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.260709.

Full text
12

Temeltas, H. "Real-time identification of robot dynamic model parameters using parallel processing." Thesis, University of Nottingham, 1993. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.357973.

Full text
13

Mandviwala, Hasnain A. "Capsules expressing composable computations in a parallel programming model /." Diss., Atlanta, Ga. : Georgia Institute of Technology, 2008. http://hdl.handle.net/1853/26684.

Full text
Abstract:
Thesis (Ph.D)--Computing, Georgia Institute of Technology, 2009.
Committee Chair: Ramachandran, Umakishore; Committee Member: Knobe Kathleen; Committee Member: Pande, Santosh; Committee Member: Prvulovic, Milos; Committee Member: Rehg, James M.. Part of the SMARTech Electronic Thesis and Dissertation Collection.
14

Jelly, Innes E. "A parallel process model and architecture for a Pure Logic Language." Thesis, Sheffield Hallam University, 1990. http://shura.shu.ac.uk/8778/.

Full text
Abstract:
The research presented in this thesis has been concerned with the use of parallel logic systems for the implementation of large knowledge bases. The thesis describes proposals for a parallel logic system based on a new logic programming language, the Pure Logic Language. The work has involved the definition and implementation of a new logic interpreter which incorporates the parallel execution of independent OR processes, and the specification and design of an appropriate non-shared-memory multiprocessor architecture. The Pure Logic Language, under development at ICL, Bracknell, differs from Prolog in its expressive powers and implementation. The resolution-based Prolog approach is replaced by a rewrite-rule technique which successively transforms expressions according to logical axioms and user-defined rules until no further rewrites are possible. A review of related work in the field of parallel logic language systems is presented. The thesis describes the different forms of parallelism within logic languages and discusses the decision to concentrate on the efficient implementation of OR parallelism. The parallel process model for the Pure Logic Language uses the same execution technique of rule rewriting, but has been adapted to implement the creation of independent OR processes and the required message-passing operations. The parallelism in the system is implemented automatically and, unlike many other parallel logic systems, there are no explicit program annotations for the control of parallel execution. The spawning of processes involves computational overheads within the interpreter: these have been measured and results are presented. The functional requirements of a multiprocessor architecture are discussed: shared-memory machines are not scalable for large numbers of processing elements, but, with no shared memory, data needed by offspring processors must be copied from the parent or else recomputed. The thesis describes an optimised format for the copying of data between processors. Because a one-to-many communication pattern exists between parent and offspring processors, a broadcast architecture is indicated. The development of a system based on the broadcasting of data packets represents a new approach to the parallel execution of logic languages and has led to the design of a novel bus-based multiprocessor architecture. A simulation of this multiprocessor architecture has been produced and the parallel logic interpreter mapped onto it: this provides data on the predicted performance of the system. A detailed analysis of these results is presented and the implications for future developments to the proposed system are discussed.
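The notion of independent OR processes described above can be sketched in a few lines: each alternative clause for a goal is explored in its own worker, with no shared state between branches, and the solutions are merged afterwards. This is a generic illustration of OR-parallel search (the function names are hypothetical), not the Pure Logic Language interpreter or its rewrite-rule engine:

```python
from concurrent.futures import ThreadPoolExecutor

def solve_or_parallel(goal, clauses, solve_body):
    """Explore the alternative clauses for a goal as independent OR
    branches. Each branch runs in its own worker and returns its own list
    of solutions; nothing is shared between branches, and the results are
    simply concatenated, mirroring independent OR processes on a
    non-shared-memory machine."""
    with ThreadPoolExecutor() as pool:
        branches = [pool.submit(solve_body, goal, clause) for clause in clauses]
        solutions = []
        for branch in branches:
            solutions.extend(branch.result())   # merge branch solutions
    return solutions
```

Because the branches never communicate, the only overheads are spawning the workers and copying the goal state to them, which is exactly the cost the thesis measures.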
15

Fan, Gaojie Fan. "Systems Factorial Technology extended to bilateral visual fields and model predictions testing." Miami University / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=miami1578344429925516.

Full text
16

Shi, Rui. "Applying the Extended Parallel Process Model to examine posters in 2008 Chinese annual anti-drug campaign." Winston-Salem, NC : Wake Forest University, 2009. http://dspace.zsr.wfu.edu/jspui/handle/10339/42600.

Full text
Abstract:
Thesis (M.A.)--Wake Forest University. Dept. of Communication, 2009.
Title from electronic thesis title page. Thesis advisor: Michael David Hazen. Vita. Includes bibliographical references (p. 50-55).
17

Murniadi, Krishnamurti Murniadi. "Curbing Excessive Pornography Consumption Using Traditional, Relationship, and Religious Identity-Based Extended Parallel Process Model Messages." Kent State University / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=kent153295926543633.

Full text
18

Shao, L. (Lan). "An extended model of decision field theory integrated with AHP structure for complex decision making problems." Master's thesis, University of Oulu, 2015. http://urn.fi/URN:NBN:fi:oulu-201505261672.

Full text
Abstract:
Decision Field Theory (DFT) provides an approach to explaining the deliberation process of decision making in a dynamic environment. However, the performance of the original DFT is imperfect when the dynamic environment becomes complex. This research aims to build an extended DFT model with good explanatory and predictive abilities in complex dynamic environments. The dynamic structure of the Analytic Hierarchy Process (AHP) was used to improve the flexibility and adaptability of the extended model. In this study, a systematic literature review (SLR) was conducted to explore previous research in the dynamic decision making field. The review protocol, covering the review questions, purpose, and method, was developed during the planning phase. After several rounds of selection, 62 primary studies were chosen. According to the results of the analysis, only a limited number of primary studies relate to the practical application of DFT in a specific context. Therefore, it is necessary to extend the DFT model. In practice, the class-attendance behavior of students was selected, as one example of a complex dynamic decision-making problem, to evaluate the extended model. To collect relevant decision-making data, three rounds of web surveys were conducted, with students from the University of Oulu as the respondents. In conclusion, the analysis of the data showed that the proposed model is able to explain and predict the dynamic behavior of decision making well. This research opens space for future work on studying and extending the model.
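The deliberation process that DFT describes is an accumulation of preferences over time. A minimal numerical sketch of that accumulation is shown below; the parameter values and function name are illustrative assumptions, not the extended AHP-integrated model of the thesis:

```python
import numpy as np

def dft_preferences(M, steps=100, s=0.95, seed=0):
    """Simulate basic Decision Field Theory preference accumulation.
    M is an (options x attributes) subjective-value matrix. At each step,
    attention W stochastically selects one attribute, valences are
    contrast-coded against the average option (matrix C), and the previous
    preference state decays by s before the new valence is added."""
    rng = np.random.default_rng(seed)
    n_options, n_attrs = M.shape
    C = np.eye(n_options) - 1.0 / n_options   # contrast against the mean option
    P = np.zeros(n_options)
    for _ in range(steps):
        W = np.zeros(n_attrs)
        W[rng.integers(n_attrs)] = 1.0        # stochastic attention switching
        P = s * P + C @ M @ W                 # decay + new valence input
    return P
```

Because the contrast matrix makes each valence vector sum to zero, the preference state always expresses relative, not absolute, attractiveness of the options.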
19

Kumar, Rahul. "Load Balancing Parallel Explicit State Model Checking." Diss., CLICK HERE for online access, 2004. http://contentdm.lib.byu.edu/ETD/image/etd455.pdf.

Full text
20

Scheiman, Kevin S. "A Parallel Spectral Method Approach to Model Plasma Instabilities." Wright State University / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=wright1527424992108785.

Full text
21

Harris, Christopher John. "A parallel model for the heterogeneous computation of radio astronomy signal correlation." University of Western Australia. School of Physics, 2009. http://theses.library.uwa.edu.au/adt-WU2010.0019.

Full text
Abstract:
The computational requirements of scientific research are constantly growing. In the field of radio astronomy, observations have evolved from using single telescopes to interferometer arrays of many telescopes, and arrays of massive scale are currently under development. These interferometers use signal and image processing to produce data that is useful to radio astronomy, and the amount of processing required scales quadratically with the scale of the array. Traditional computational approaches will be unable to meet this demand in the near future. This thesis explores the use of heterogeneous parallel processing to meet the computational demands of radio astronomy. In heterogeneous computing, multiple hardware architectures are used for processing. In this work, the Graphics Processing Unit (GPU) is used as a co-processor along with the Central Processing Unit (CPU) for the computation of signal processing algorithms. Specifically, the suitability of the GPU to accelerate the correlator algorithms used in radio astronomy is investigated. This work first implemented an FX correlator on the GPU, with a performance increase of one to two orders of magnitude over a serial CPU approach. The FX correlator algorithm combines pairs of telescope signals in the Fourier domain. Given N telescope signals from the interferometer array, N² conjugate multiplications must be calculated in the algorithm. For extremely large arrays (N >> 30), this is a huge computational requirement. Testing showed that the GPU correlator produces results equivalent to those of a software correlator implemented on the CPU; however, the algorithm itself was adapted in order to take advantage of the processing power of the GPU. The research also examined how correlator parameters, in particular the number of telescope signals and the Fast Fourier Transform (FFT) length, affected the results.
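The structure of the FX algorithm described above can be sketched in a few lines: an "F" stage that Fourier-transforms each telescope signal, and an "X" stage of pairwise conjugate multiplications whose count grows quadratically with the number of telescopes. This is a serial sketch of the algorithm only (function name and FFT length are illustrative), not the thesis's GPU implementation:

```python
import numpy as np

def fx_correlate(signals, fft_len=256):
    """Minimal FX correlator sketch. The F stage computes the spectrum of
    each telescope signal; the X stage multiplies spectrum i by the
    conjugate of spectrum j for every pair i <= j, producing all cross-
    and auto-correlation spectra. For N signals this is N*(N+1)/2 pair
    products, i.e. O(N^2) work."""
    spectra = [np.fft.rfft(s[:fft_len]) for s in signals]   # F stage
    products = {}
    for i in range(len(spectra)):
        for j in range(i, len(spectra)):
            products[(i, j)] = spectra[i] * np.conj(spectra[j])  # X stage
    return products
```

Each pair product is independent of the others, which is what makes the X stage such a natural fit for a massively parallel co-processor like the GPU.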
22

Rullmann, Markus, Rainer Schaffer, Sebastian Siegel, and Renate Merker. "SYMPAD - A Class Library for Processing Parallel Algorithm Specifications." Universitätsbibliothek Chemnitz, 2007. http://nbn-resolving.de/urn:nbn:de:swb:ch1-200700971.

Full text
Abstract:
In this paper we introduce a new class library to model transformations of parallel algorithms. SYMPAD serves as a basis for developing automated tools and methods to generate efficient implementations of such algorithms. The paper gives an overview of the general structure, as well as the features, of the library. We further describe the fundamental design process that is controlled by our developed methods.
23

Saffrey, Peter. "Optimising communication structure for model checking." Thesis, Connect to electronic version, 2003. http://hdl.handle.net/1905/165.

Full text
Abstract:
Thesis (Ph. D.)--University of Glasgow, 2003.
Ph. D. thesis submitted to the Computing Science Department, University of Glasgow, 2003. Includes bibliographical references. Print version also available.
24

Nte, Solomon. "Parallel Distributed Processing (PDP) models as a framework for designing cognitive rehabilitation therapy." Thesis, University of Manchester, 2015. https://www.research.manchester.ac.uk/portal/en/theses/parallel-distributed-processing-pdp-models-as-a-framework-for-designing-cognitive-rehabilitation-therapy(e1073718-8a5e-458a-90d1-ed55ed7dfca3).html.

Full text
Abstract:
Parallel Distributed Processing (PDP) modelling has simulated developmental learning across a range of domains, such as reading (e.g. Seidenberg & McClelland, 1989) or semantics (e.g. Rogers et al., 2004). However, aside from two notable exceptions (Plaut, 1996; Welbourne & Lambon Ralph, 2005b), modelling research has not addressed the simulation of relearning during spontaneous recovery or rehabilitation after brain damage, and no research has considered the effect of the learning environment. This thesis used an established PDP model of semantic memory (Rogers et al., 2004) to simulate the influence of the learning environment. A novel quantitative measure (called representational economy) was developed to monitor efficiency during learning. Developmental learning is considered to be multimodal (e.g. Gogate et al., 2000), whereas rehabilitation is normally carried out through therapy sessions employing unimodal learning tasks (Best & Nickels, 2000). This thesis hoped to discover whether multimodal rehabilitation may be more efficient (as suggested by Howard et al., 1985). Three sets of simulations were conducted. The first set contrasted multimodal and unimodal learning in development and recovery, and tested internal representations for robustness to damage, finding multimodal learning to be more efficient in all cases. The second set looked at whether this multimodal advantage could be approximated by reordering unimodal tasks at the item level; findings indicated that the multimodal advantage is dependent upon simultaneous item presentation across multiple modalities. The third set of simulations contrasted multimodal and unimodal environments during rehabilitation while manipulating background spontaneous recovery, therapy set size, and damage severity, finding a multimodal advantage under all conditions of rehabilitation. The thesis findings suggest that PDP models may be well suited to predicting the effects of rehabilitation, and that clinical exploration of multimodal learning environments may yield substantial benefits in patient-related work.
25

Ruokamo, A. (Ari). "Building an image- and video processing application on Apple iOS platform using a parallel programming model." Master's thesis, University of Oulu, 2018. http://urn.fi/URN:NBN:fi:oulu-201805091684.

Full text
Abstract:
Today, powerful parallel computer architectures empower numerous application areas in personal computing and consumer electronics, and parallel computation is an established mainstay in personal mobile devices (PMDs). During the last ten years, PMDs have been equipped with increasingly powerful parallel computation architectures (CPU+GPU), enabling rich gaming, photography, and multimedia experiences, as well as general-purpose parallel computation through application programming interfaces such as OpenGL ES and Apple Metal. Using a narrative literature review, this study examined the current status of parallel computing and parallel programming, and specifically its application to digital image processing in the domain of Mobile Systems (MS) and Personal Mobile Devices (PMD). As research in this context is an emerging topic, only a limited amount is available. As acknowledged in the literature and in practice, the OpenGL ES programming model can be a challenging environment for many programmers: the paradigm shift from serial to parallel programming, in addition to the changes and challenges in the programming language used and in the tools supporting development, can be a barrier. In this thesis, a Design Science Research (DSR) approach was applied to build an artefact: an image- and video-processing application on the Apple iOS software platform using the OpenGL ES parallel programming model. An Open Source Software (OSS) parallel computing library, GPUImage, was applied in implementing the artefact's filtering and effects functionality. Using the library, the process of applying the parallel programming model was efficient and productive. The library's structures and functionality effectively suppressed the complexity of OpenGL ES setup and management programming, and provided efficient filter structures for implementing image and video filters and effects. The application's filtering performance was measured in real-time and post-processing cases and was perceived as good, as was the feedback collected from demonstration sessions and end users. However, designing new custom cinematic filters using the OpenGL ES Shading Language is a challenging task and requires a great deal of specific knowledge of the technical aspects of the OpenGL ES domain. The programming language (OpenGL ES Shading Language) and the tools supporting the work process of designing, implementing, and debugging GPU program algorithms were not optimal in terms of applicability and productivity. The findings note that a more generic and applicable language would benefit the development of parallel computation applications on PMD platforms.
26

Nielson, Curtis R. "A Descriptive Performance Model of Small, Low Cost, Diskless Beowulf Clusters." Diss., CLICK HERE for online access, 2003. http://contentdm.lib.byu.edu/ETD/image/etd280.pdf.

Full text
27

Sisman, Cagri Tahsin. "Parallel Processing Of Three-dimensional Navier-stokes Equations For Compressible Flows." Master's thesis, METU, 2005. http://etd.lib.metu.edu.tr/upload/12606544/index.pdf.

Full text
Abstract:
The aim of this study is to develop a code that is capable of solving three-dimensional compressible flows which are viscous and turbulent, and to parallelize this code. The purpose of parallelization is to obtain computational efficiency in time, enabling the solution of complex flow problems in reasonable computation times. In the first part of the study, the development of a three-dimensional Navier-Stokes solver for turbulent flows, the first step is to develop a two-dimensional Euler code using the Roe flux-difference-splitting method. This is followed by the addition of subprograms for the calculation of viscous fluxes. The third step involves the implementation of the Baldwin-Lomax turbulence model in the code. Finally, the Euler code is generalized to three dimensions. At every step, the code is validated by comparing numerical results with theoretical, experimental, or other numerical results, and adequate consistency between these results is obtained. In the second part, the parallelization of the developed code, the two-dimensional code is parallelized using the Message Passing Interface (MPI), and important improvements in computation times are obtained.
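The flavour of Roe flux-difference splitting can be shown on the simplest possible case. The sketch below applies the Roe scheme to the 1D inviscid Burgers equation, a scalar analogue of the Euler/Navier-Stokes system the thesis solves (the function name, the regularizing 1e-30, and the test values are illustrative assumptions):

```python
import numpy as np

def roe_flux_burgers(uL, uR):
    """Roe flux-difference-splitting interface flux for the 1D inviscid
    Burgers equation, f(u) = u^2 / 2. The flux is the average of the left
    and right fluxes minus an upwind dissipation term scaled by the
    Roe-averaged wave speed |a| = |(f(uR) - f(uL)) / (uR - uL)|."""
    fL, fR = 0.5 * uL**2, 0.5 * uR**2
    a = np.where(np.isclose(uL, uR),
                 uL,                              # degenerate case: a = f'(u) = u
                 (fR - fL) / (uR - uL + 1e-30))   # Roe-averaged wave speed
    return 0.5 * (fL + fR) - 0.5 * np.abs(a) * (uR - uL)
```

For a right-moving wave the dissipation term cancels the right-state contribution, so the scheme automatically upwinds on the left state, which is the essential property the 2D/3D Euler version relies on.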
28

孫昱東 and Yudong Sun. "A distributed object model for solving irregularly structured problemson distributed systems." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2001. http://hub.hku.hk/bib/B31243630.

Full text
29

Mao, Ai-sheng. "A Theoretical Network Model and the Incremental Hypercube-Based Networks." Thesis, University of North Texas, 1995. https://digital.library.unt.edu/ark:/67531/metadc277860/.

Full text
Abstract:
The study of multicomputer interconnection networks is an important area of research in parallel processing. We introduce vertex-symmetric Hamming-group graphs as a model to design a wide variety of network topologies including the hypercube network.
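For context on the hypercube topology mentioned above: a node's neighbors in a d-dimensional hypercube are obtained by flipping one bit of its binary address, and the minimal routing distance between two nodes is the Hamming distance of their labels. A minimal sketch (illustrative only, not the thesis's Hamming-group model):

```python
def hypercube_neighbors(node, dimension):
    """Neighbors of a node in a d-dimensional hypercube: flip one address bit."""
    return [node ^ (1 << bit) for bit in range(dimension)]

def hypercube_distance(a, b):
    """Minimal routing distance between two nodes is the Hamming distance."""
    return bin(a ^ b).count("1")
```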
APA, Harvard, Vancouver, ISO, and other styles
30

Sun, Yudong. "A distributed object model for solving irregularly structured problems on distributed systems /." Hong Kong : University of Hong Kong, 2001. http://sunzi.lib.hku.hk/hkuto/record.jsp?B23501662.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Glendenning, Kurtis M. "Browser Based Visualization for Parameter Spaces of Big Data Using Client-Server Model." Wright State University / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=wright1441203223.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Kerr, Andrew. "A model of dynamic compilation for heterogeneous compute platforms." Diss., Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/47719.

Full text
Abstract:
Trends in computer engineering place renewed emphasis on increasing parallelism and heterogeneity. The rise of parallelism adds an additional dimension to the challenge of portability, as different processors support different notions of parallelism, whether vector parallelism executing in a few threads on multicore CPUs or large-scale thread hierarchies on GPUs. Thus, software experiences obstacles to portability and efficient execution beyond differences in instruction sets; rather, the underlying execution models of radically different architectures may not be compatible. Dynamic compilation applied to data-parallel heterogeneous architectures presents an abstraction layer decoupling program representations from optimized binaries, thus enabling portability without encumbering performance. This dissertation proposes several techniques that extend dynamic compilation to data-parallel execution models. These contributions include:
- characterization of data-parallel workloads
- machine-independent application metrics
- a framework for performance modeling and prediction
- execution model translation for vector processors
- region-based compilation and scheduling
We evaluate these claims via the development of a novel dynamic compilation framework, GPU Ocelot, with which we execute real-world workloads from GPU computing. This enables GPU computing workloads to run efficiently on multicore CPUs, GPUs, and a functional simulator. We show that data-parallel workloads exhibit performance scaling, take advantage of vector instruction set extensions, and effectively exploit data locality via scheduling which attempts to maximize control locality.
APA, Harvard, Vancouver, ISO, and other styles
33

Adams, William Edward. "Untangling the threads reduction for a concurrent object-based programming model /." Digital version:, 2000. http://wwwlib.umi.com/cr/utexas/fullcit?p9992741.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Wahlström, Niklas. "Modeling of Magnetic Fields and Extended Objects for Localization Applications." Doctoral thesis, Linköpings universitet, Reglerteknik, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-122396.

Full text
Abstract:
The level of automation in our society is ever increasing. Technologies like self-driving cars, virtual reality, and fully autonomous robots, which all were unimaginable a few decades ago, are realizable today, and will become standard consumer products in the future. These technologies depend upon autonomous localization and situation awareness where careful processing of sensory data is required. To increase efficiency, robustness and reliability, appropriate models for these data are needed. In this thesis, such models are analyzed within three different application areas, namely (1) magnetic localization, (2) extended target tracking, and (3) autonomous learning from raw pixel information. Magnetic localization is based on one or more magnetometers measuring the induced magnetic field from magnetic objects. In this thesis we present a model for determining the position and the orientation of small magnets with an accuracy of a few millimeters. This enables three-dimensional interaction with computer programs that cannot be handled with other localization techniques. Further, an additional model is proposed for detecting wrong-way drivers on highways based on sensor data from magnetometers deployed in the vicinity of traffic lanes. Models for mapping complex magnetic environments are also analyzed. Such magnetic maps can be used for indoor localization where other systems, such as GPS, do not work. In the second application area, models for tracking objects from laser range sensor data are analyzed. The target shape is modeled with a Gaussian process and is estimated jointly with target position and orientation. The resulting algorithm is capable of tracking various objects with different shapes within the same surveillance region. In the third application area, autonomous learning based on high-dimensional sensor data is considered. 
In this thesis, we consider one instance of this challenge, the so-called pixels to torques problem, where an agent must learn a closed-loop control policy from pixel information only. To solve this problem, high-dimensional time series are described using a low-dimensional dynamical model. Techniques from machine learning together with standard tools from control theory are used to autonomously design a controller for the system without any prior knowledge. System models used in the applications above are often provided in continuous time. However, a major part of the applied theory is developed for discrete-time systems. Discretization of continuous-time models is hence fundamental. Therefore, this thesis ends with a method for performing such discretization using Lyapunov equations together with analytical solutions, enabling efficient implementation in software.
How can a computer be made to follow the puck in table hockey to compile match statistics, a brush to paint virtual watercolors, a scalpel to digitize pathology, or a multi-tool to sculpt in 3D? These are four applications built on the patent-pending algorithm developed in this thesis. The method hides a small magnet in the tool and deploys a number of three-axis magnetometers - of the same kind found in our smartphones - in a network around the work surface. The magnet's field gives rise to a unique signature in the sensors from which the magnet's position can be computed in three degrees of freedom, along with two of its angles. The thesis develops a complete framework for these computations and the associated analysis. Another application studied on the same principle is detection and classification of vehicles. In a collaboration with Luleå University of Technology and project partners, an algorithm was developed to classify the direction in which vehicles pass using only measurements from a two-axis magnetometer. Tests outside Luleå show essentially 100% correct classification. Viewing a vehicle as a structure of magnetic dipoles rather than as one large dipole is an example of a so-called extended target. In the classical theory for tracking aircraft, ships and so on, targets are described as points, but many of today's increasingly accurate sensors generate multiple measurements from the same target. By giving targets a geometric extent or other attributes (such as dipole structures), one can not only improve the tracking algorithms and use sensor data more efficiently, but also classify the targets more effectively. The thesis proposes a model that describes the geometric shape more flexibly and at a higher level of detail than earlier models in the literature. 
A quite different application studied is using machine learning to teach a computer to steer a planar pendulum to a desired position solely by analyzing the pixels in video images. The approach lets the computer study large numbers of images of the pendulum, in this case thousands, to learn how a known control signal affects its dynamics, and then act autonomously once the learning phase is complete. In the long run, the technique could be used to develop autonomous robots.
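The forward model underlying the magnetic localization described above is the textbook point-dipole field. The sketch below is that standard formula in SI units, not the thesis's estimation framework; the function name is illustrative, and the inverse problem (fitting magnet position and moment to magnetometer readings) is omitted.

```python
import math

def dipole_field(m, r, mu0=4e-7 * math.pi):
    """Magnetic flux density of a point dipole with moment m at offset r.

    B(r) = mu0 / (4*pi*|r|^3) * (3*(m . rhat)*rhat - m)   (SI units)
    """
    norm = math.sqrt(sum(c * c for c in r))
    rhat = [c / norm for c in r]
    m_dot_rhat = sum(mi * ri for mi, ri in zip(m, rhat))
    k = mu0 / (4 * math.pi * norm ** 3)
    return [k * (3 * m_dot_rhat * ri - mi) for ri, mi in zip(rhat, m)]
```

A familiar sanity check: the on-axis field is exactly twice the equatorial field magnitude at the same distance, which is one reason a network of sensors around the workspace yields a unique signature.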

In the electronic version figure 2.2a is corrected.


COOPLOC
APA, Harvard, Vancouver, ISO, and other styles
35

Ungruh, Joachim. "A neurally based vision model for line extraction and attention." Thesis, Georgia Institute of Technology, 1992. http://hdl.handle.net/1853/8303.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Argante, Erco. "CoCa : a model for parallelization of high energy physics software /." Eindhoven : Eindhoven University of Technology, 1998. http://catalog.hathitrust.org/api/volumes/oclc/41892351.html.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Knee, Simon. "Opal : modular programming using the BSP model." Thesis, University of Oxford, 1997. http://ora.ox.ac.uk/objects/uuid:97d95f01-a098-499c-8c07-303b853c2460.

Full text
Abstract:
Parallel processing can provide the huge computational resources that are required to solve today's grand challenges, at a fraction of the cost of developing sequential machines of equal power. However, even with such attractive benefits the parallel software industry is still very small compared to its sequential counterpart. This has been attributed to the lack of an accepted parallel model of computation, therefore leading to software which is architecture dependent with unpredictable performance. The Bulk Synchronous Parallel (BSP) model provides a solution to these problems and can be compared to the Von Neumann model of sequential computation. In this thesis we investigate the issues involved in providing a modular programming environment based on the BSP model. Using our results we present Opal, a BSP programming language that has been designed for parallel programming-in-the-large. While other BSP languages and libraries have been developed, none of them provide support for libraries of parallel algorithms. A library mechanism must be introduced into BSP without destroying the existing cost model. We examine such issues and show that the active library mechanism of Opal leads to algorithms which still have predictable performance. If algorithms are to retain acceptable levels of performance across a range of machines then they must be able to adapt to the architecture that they are executing on. Such adaptive algorithms require support from the programming language, an issue that has been addressed in Opal. To demonstrate the Opal language and its modular features we present a number of example algorithms. Using an Opal compiler that has been developed we show that we can accurately predict the performance of these algorithms. The thesis concludes that by using Opal it is possible to program the BSP model in a modular fashion that follows good software engineering principles. 
This enables large scale parallel software to be developed that is architecture independent, has predictable performance and is adaptive to the target architecture.
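The predictability claims above rest on the standard BSP cost model, which charges each superstep w + g·h + l for local work w, an h-relation routed at gap g, and barrier latency l. A minimal sketch with illustrative names (not Opal's actual cost machinery):

```python
def superstep_cost(w, h, g, l):
    """BSP cost of one superstep: max local work w, h-relation h,
    per-word communication gap g, and barrier synchronization latency l."""
    return w + h * g + l

def program_cost(supersteps, g, l):
    """Total cost of S supersteps: sum(w_i) + g * sum(h_i) + S * l."""
    return sum(superstep_cost(w, h, g, l) for w, h in supersteps)
```

Because g and l are machine parameters while w and h come from the algorithm, the same program text yields a performance prediction for any BSP machine, which is what makes architecture-independent libraries feasible.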
APA, Harvard, Vancouver, ISO, and other styles
38

Cohen, Elizabeth L. "Exploring Subtext Processing in Narrative Persuasion: The Role of Eudaimonic Entertainment Use Motivation and a Supplemental Conclusion Scene." Digital Archive @ GSU, 2012. http://digitalarchive.gsu.edu/communication_diss/35.

Full text
Abstract:
This study sought to expand current narrative persuasion models by examining the role of subtext processing. The extended elaboration likelihood model suggests that transportation leads to persuasion by reducing counterarguments to stories’ persuasive subtexts. The model implicitly argues that transportation should reduce total subtext processing, including counterarguments and intended elaboration. But this study reasoned that people with stronger eudaimonic motivation to have meaningful entertainment experiences would put more effort into processing stories’ subtexts while engaging with the narrative. Because less eudaimonically motivated individuals may be at risk for missing the subtext, it was also expected that adding a supplemental conclusion scene that reiterates the intended message would facilitate persuasion. Following a pre-test survey, 201 undergraduate students were randomly assigned to view an episode of the crime drama Numb3rs: one of two versions of “Harvest,” designed to promote organ donation (with or without a conclusion scene), or a control episode. After viewing, participants completed a thought-listing task and second survey. Results show that “Harvest” did not result in persuasive outcomes related to organ donation. Transportation was a marginally significant positive predictor of total subtext processing. Contrary to predictions, eudaimonic motivation negatively predicted amount of total subtext processing. Eudaimonic motivation also negatively (but marginally) predicted doctor mistrust, but this effect was moderated by conclusion condition: eudaimonic motivation was negatively associated with doctor mistrust only in the no conclusion condition. Eudaimonic motivation was also negatively (but marginally) associated with intended elaboration. 
Further examination showed that, compared to people with low eudaimonic motivation, those with high eudaimonic motivation were less likely to engage in intended elaboration, but only in the no conclusion condition. This pattern of findings provides indirect evidence that intended elaboration was responsible for decreasing doctor mistrust among people with high eudaimonic motivation who saw the conclusion. But surprisingly, intended elaboration was not directly related to any persuasive outcomes.The findings tentatively suggest that transportation and subtext processing can coexist and that eudaimonic motivation can affect the extent to which viewers engage in subtext processing during narrative engagement. The results also indicate that supplemental conclusions may be useful tools for narrative persuasion.
APA, Harvard, Vancouver, ISO, and other styles
39

Penczek, Frank. "Static guarantees for coordinated components : a statically typed composition model for stream-processing networks." Thesis, University of Hertfordshire, 2012. http://hdl.handle.net/2299/9046.

Full text
Abstract:
Does your program do what it is supposed to be doing? Without running the program, providing an answer to this question is much harder if the language does not support static type checking. Of course, even if compile-time checks are in place only certain errors will be detected: compilers can only second-guess the programmer’s intention. But type-based techniques go a long way in assisting programmers to detect errors in their computations earlier on. The question of whether a program behaves correctly is even harder to answer if the program consists of several parts that execute concurrently and need to communicate with each other. Compilers of standard programming languages are typically unable to infer information about how the parts of a concurrent program interact with each other, especially where explicit threading or message passing techniques are used. Hence, correctness guarantees are often conspicuously absent. Concurrency management in an application is a complex problem. However, it is largely orthogonal to the actual computational functionality that a program realises. Because of this orthogonality, the problem can be considered in isolation. The largest possible separation between concurrency and functionality is achieved if a dedicated language is used for concurrency management, i.e. an additional program manages the concurrent execution and interaction of the computational tasks of the original program. Such an approach not only helps programmers to focus on the core functionality and on the exploitation of concurrency independently, it also allows for a specialised analysis mechanism geared towards concurrency-related properties. This dissertation shows how an approach that completely decouples coordination from computation is a very supportive substrate for inferring static guarantees of the correctness of concurrent programs. 
Programs are described as streaming networks connecting independent components that implement the computations of the program, where the network describes the dependencies and interactions between components. A coordination program only requires an abstract notion of computation inside the components and may therefore be used as a generic and reusable design pattern for coordination. A type-based inference and checking mechanism analyses such streaming networks and provides comprehensive guarantees of the consistency and behaviour of coordination programs. Concrete implementations of components are deliberately left out of the scope of coordination programs: Components may be implemented in an external language, for example C, to provide the desired computational functionality. Based on this separation, a concise semantic framework allows for step-wise interpretation of coordination programs without requiring concrete implementations of their components. The framework also provides clear guidance for the implementation of the language. One such implementation is presented and hands-on examples demonstrate how the language is used in practice.
APA, Harvard, Vancouver, ISO, and other styles
40

Triplett, Josh. "Relativistic Causal Ordering: A Memory Model for Scalable Concurrent Data Structures." PDXScholar, 2012. https://pdxscholar.library.pdx.edu/open_access_etds/497.

Full text
Abstract:
High-performance programs and systems require concurrency to take full advantage of available hardware. However, the available concurrent programming models force a difficult choice between simple models such as mutual exclusion that produce little to no concurrency, or complex models such as Read-Copy Update that can scale to all available resources. Simple concurrent programming models enforce atomicity and causality, and this enforcement limits concurrency. Scalable concurrent programming models expose the weakly ordered hardware memory model, requiring careful and explicit enforcement of causality to preserve correctness, as demonstrated in this dissertation through the manual construction of a scalable hash-table item-move algorithm. Recent research on "relativistic programming" aims to standardize the programming model of Read-Copy Update, but thus far these efforts have lacked a generalized memory ordering model, requiring data-structure-specific reasoning to preserve causality. I propose a new memory ordering model, "relativistic causal ordering", which combines the scalability of relativistic programming and Read-Copy Update with the simplicity of reader atomicity and automatic enforcement of causality. Programs written for the relativistic model translate to scalable concurrent programs for weakly-ordered hardware via a mechanical process of inserting barrier operations according to well-defined rules. To demonstrate the relativistic causal ordering model, I walk through the straightforward construction of a novel concurrent hash-table resize algorithm, including the translation of this algorithm from the relativistic model to a hardware memory model, and show through benchmarks that the resulting algorithm scales far better than those based on mutual exclusion.
APA, Harvard, Vancouver, ISO, and other styles
41

Morrison, Adrian Franklin. "An Efficient Method for Computing Excited State Properties of Extended Molecular Aggregates Based on an Ab-Initio Exciton Model." The Ohio State University, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=osu1509730158943602.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Feng, Shuangtong. "Efficient Parallelization of 2D Ising Spin Systems." Thesis, Virginia Tech, 2001. http://hdl.handle.net/10919/36263.

Full text
Abstract:
The problem of efficient parallelization of 2D Ising spin systems requires realistic algorithmic design and implementation based on an understanding of issues from computer science and statistical physics. In this work, we not only consider fundamental parallel computing issues but also ensure that the major constraints and criteria of 2D Ising spin systems are incorporated into our study. This realism in both parallel computation and statistical physics has rarely been reflected in previous research for this problem.

In this thesis, we designed and implemented a variety of parallel algorithms for both sweep spin selection and random spin selection. We analyzed our parallel algorithms on a portable and general parallel machine model, namely the LogP model. We were able to obtain rigorous theoretical run-times on LogP for all the parallel algorithms. Moreover, a guiding equation was derived for choosing data layouts (blocked vs. stripped) for sweep spin selection. With regard to random spin selection, we were able to develop parallel algorithms with efficient communication schemes. We analyzed the randomness of our schemes using statistical methods and provided comparisons between the different schemes. Furthermore, the algorithms were implemented and performance data gathered and analyzed in order to determine further design issues and validate the theoretical analysis.
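As a point of reference for the sweep spin selection discussed above, a sequential Metropolis sweep of the 2D Ising model looks roughly like this. The thesis's contribution is the parallel decomposition and its LogP analysis, which this sketch omits; all names here are illustrative.

```python
import math
import random

def metropolis_sweep(spins, beta, rng):
    """One sequential sweep of Metropolis updates on a periodic 2D Ising lattice.

    Each site is visited in order (sweep spin selection); a flip is accepted
    with probability min(1, exp(-beta * dE)), where dE = 2 * s * sum(neighbors).
    """
    n = len(spins)
    for i in range(n):
        for j in range(n):
            s = spins[i][j]
            nb = (spins[(i + 1) % n][j] + spins[(i - 1) % n][j]
                  + spins[i][(j + 1) % n] + spins[i][(j - 1) % n])
            dE = 2.0 * s * nb
            if dE <= 0 or rng.random() < math.exp(-beta * dE):
                spins[i][j] = -s
    return spins
```

A blocked layout assigns contiguous sub-lattices to processors, a striped layout assigns row bands; in both cases only boundary rows/columns must be communicated each sweep, which is what the guiding equation trades off against computation.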


Master of Science
APA, Harvard, Vancouver, ISO, and other styles
43

Jadrný, Miroslav. "Model workflow a jeho grafické rozhraní." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2010. http://www.nusl.cz/ntk/nusl-237094.

Full text
Abstract:
Business process management is an important topic in business information systems. Workflow systems are taking the top places in company information system architecture due to the drive to make business processes ever more optimized. This project concerns parallel processing and the implementation of parallel business process execution in complex information systems. Its content is a function and object library for modeling business processes in the Vema, a. s. Workflow system. An important part of the project is a parallel processing solution and its implementation.
APA, Harvard, Vancouver, ISO, and other styles
44

Bengtsson, Jerker. "Models and Methods for Development of DSP Applications on Manycore Processors." Doctoral thesis, Högskolan i Halmstad, Centrum för forskning om inbyggda system (CERES), 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-14706.

Full text
Abstract:
Advanced digital signal processing systems require specialized high-performance embedded computer architectures. The term high-performance translates to large amounts of data and computations per time unit. The term embedded further implies requirements on physical size and power efficiency. Thus the requirements are of both functional and non-functional nature. This thesis addresses the development of high-performance digital signal processing systems relying on manycore technology. We propose building two-level hierarchical computer architectures for this domain of applications. Further, we outline a tool flow based on methods and analysis techniques for automated, multi-objective mapping of such applications on distributed memory manycore processors. In particular, the focus is put on how to provide a means for tunable strategies for mapping of task graphs on array structured distributed memory manycores, with respect to given application constraints. We argue for code mapping strategies based on predicted execution performance, which can be used in an auto-tuning feedback loop or to guide manual tuning directed by the programmer. Automated parallelization, optimisation and mapping to a manycore processor benefits from the use of a concurrent programming model as the starting point. Such a model allows the programmer to express different types and granularities of parallelism as well as computation characteristics of importance in the addressed class of applications. The programming model should also abstract away machine dependent hardware details. The analytical study of WCDMA baseband processing in radio base stations, presented in this thesis, suggests dataflow models as a good match to the characteristics of the application and as execution model abstracting computations on a manycore. Construction of portable tools further requires a manycore machine model and an intermediate representation. 
The models are needed in order to decouple algorithms, used to transform and map application software, from hardware. We propose a manycore machine model that captures common hardware resources, as well as resource dependent performance metrics for parallel computation and communication. Further, we have developed a multifunctional intermediate representation, which can be used as source for code generation and for dynamic execution analysis. Finally, we demonstrate how we can dynamically analyse execution using abstract interpretation on the intermediate representation. It is shown that the performance predictions can be used to accurately rank different mappings by best throughput or shortest end-to-end computation latency.
APA, Harvard, Vancouver, ISO, and other styles
45

Qiao, Hao. "Sparse hierarchical model order reduction for high speed interconnects." Thesis, McGill University, 2009. http://digitool.Library.McGill.CA:8881/R/?func=dbin-jump-full&object_id=32359.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Bueno, Yvette. "The Co-Construction of Self-Talk and Illness Narratives: An HIV Intervention Case Study." Scholarly Repository, 2009. http://scholarlyrepository.miami.edu/oa_dissertations/200.

Full text
Abstract:
This case study investigates the co-construction communication patterns that emerged during a Human Immunodeficiency Virus (HIV) intervention designed to reduce negative and critical self-talk. The transcripts of eight sequential acupressure and behavioral (SAB) counseling intervention sessions between a therapist and two medically nonadherent HIV-infected women were analyzed using Giorgi's (1989, 1994, 1997, 2006) phenomenological method of inquiry. The analysis revealed three major themes: "assessing the present," "reviewing the past," and "forging the future," and eight subthemes: "safe atmosphere," "disclosure," "negotiating meaning," "releasing the past," "breaking the past-to-present pattern," "reducing uncertainty," "generating options," and "projecting images." Prior to the intervention sessions, the women reported experiencing negative and critical self-talk and inconsistent medication adherence. Self-talk and illness narrative modifications were evident within and across sessions as the therapist used sequential acupressure and behavioral counseling techniques. During the one month follow-up, the participants reported no experience of negative and critical self-talk and described actions taken toward goals discussed and imagined during the intervention such as medication adherence, exercise, and reenrollment in school. The co-construction themes that emerged in the intervention were consistent with findings in the comforting message literature with specific parallels to the factor analysis findings of Bippus (2001). This work lends support to comforting message research and suggests that everyday comforting messages and chronic illness support strategies may be more similar than anticipated. Other study conclusions include clinical and practical implications for people working with HIV-infected individuals.
APA, Harvard, Vancouver, ISO, and other styles
47

Arroyo, Negrete Elkin Rafael. "Continuous reservoir model updating using an ensemble Kalman filter with a streamline-based covariance localization." Texas A&M University, 2006. http://hdl.handle.net/1969.1/4859.

Full text
Abstract:
This work presents a new approach that combines the comprehensive capabilities of the ensemble Kalman filter (EnKF) and the flow path information from streamlines to eliminate and/or reduce some of the problems and limitations of the use of the EnKF for history matching reservoir models. The recent use of the EnKF for data assimilation and assessment of uncertainties in future forecasts in reservoir engineering seems to be promising. EnKF provides ways of incorporating any type of production data or time lapse seismic information in an efficient way. However, the use of the EnKF in history matching comes with its share of challenges and concerns. The overshooting of parameters leading to loss of geologic realism, possible increase in the material balance errors of the updated phase(s), and limitations associated with non-Gaussian permeability distribution are some of the most critical problems of the EnKF. The use of a larger ensemble size may mitigate some of these problems but is prohibitively expensive in practice. We present a streamline-based conditioning technique that can be implemented with the EnKF to eliminate or reduce the magnitude of these problems, allowing for the use of a reduced ensemble size, thereby leading to significant savings in time during field scale implementation. Our approach involves no extra computational cost and is easy to implement. Additionally, the final history matched model tends to preserve most of the geological features of the initial geologic model. A quick look at the procedure is provided that enables the implementation of this approach into current EnKF implementations. Our procedure uses the streamline path information to condition the covariance matrix in the Kalman update. We demonstrate the power and utility of our approach with synthetic examples and a field case. 
Our results show that with the conditioning technique presented in this thesis, the overshooting/undershooting problems disappear and the limitations of working with non-Gaussian distributions are reduced. Finally, an analysis of the scalability of a parallel implementation of our computer code is given.
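The Kalman update with covariance conditioning that this abstract describes can be sketched for the simplest possible case: a scalar state and a scalar observation. Here rho stands in for the streamline-based localization factor (damped toward 0 away from the flow path); all names are illustrative, not the author's implementation.

```python
def enkf_update(ensemble, observations, obs_noise_var, h, rho=1.0):
    """Ensemble Kalman filter update for a scalar state and scalar observation.

    Kalman gain K = rho * cov(x, h(x)) / (var(h(x)) + R), where rho in [0, 1]
    is a localization factor; streamline-based conditioning damps spurious
    sample correlations by shrinking rho toward 0 off the flow path.
    """
    n = len(ensemble)
    hx = [h(x) for x in ensemble]
    xbar = sum(ensemble) / n
    hbar = sum(hx) / n
    cov_xh = sum((x - xbar) * (y - hbar) for x, y in zip(ensemble, hx)) / (n - 1)
    var_h = sum((y - hbar) ** 2 for y in hx) / (n - 1)
    gain = rho * cov_xh / (var_h + obs_noise_var)
    # Each member is nudged toward its (perturbed) observation by the gain
    return [x + gain * (d - y) for x, y, d in zip(ensemble, hx, observations)]
```

Setting rho = 0 leaves a member unchanged, which is exactly how localization prevents an observation from updating parameters it has no physical connection to.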
APA, Harvard, Vancouver, ISO, and other styles
48

Heiden, Erin Ose. "Injuries among individuals with pre-existing spinal cord injury: understanding injury patterns, burdens, and prevention." Diss., University of Iowa, 2013. https://ir.uiowa.edu/etd/1624.

Full text
Abstract:
As a growing body of research has focused on the individual, social, and environmental factors that facilitate life after spinal cord injury (SCI), particular emphasis has been placed on health conditions that are modifiable and preventable. Subsequent injuries are a serious health problem for individuals with SCI. They are a direct threat to further morbidity and mortality, and are both a cause and consequence of other secondary health conditions. As a first step toward understanding this public health problem, the purpose of this dissertation research was to describe the patterns, burdens, and prevention of subsequent injury among individuals with SCI. In three distinct, but related studies, this dissertation examined the characteristics of hospitalizations due to an injury among individuals with paraplegia, and compared the differences in length of stay (LOS) and hospital costs of injury hospitalizations between individuals with quadriplegia versus paraplegia. In addition, it explored the experience of subsequent injury among individuals with SCI who return to work and examined perceptions of threat and efficacy in preventing subsequent injury using the Extended Parallel Process Model. Using discharge level weighting available in the Nationwide Inpatient Sample, Study 1 calculated national estimates of injury hospitalizations for individuals with paraplegia by patient, hospital, and injury characteristics. Most injury hospitalizations occurred among males, to individuals 35-49 years, and were due to falls, poisonings, or motor vehicle traffic. With the same dataset, Study 2 used logistic regression to estimate the effect of patient characteristics on odds of hospitalized patients with quadriplegia versus paraplegia, and linear regression to estimate predicted differences in hospital costs for individuals with quadriplegia compared to paraplegia. 
Fewer injury hospitalizations but longer hospital stays and higher hospital costs per discharge were found for individuals with quadriplegia compared to individuals with paraplegia. Male sex, younger age, and lack of insurance were significant predictors of higher hospital costs. Finally, Study 3 used in-depth interviews to qualitatively explore perceptions of subsequent injury among individuals with SCI who return to work, and found that individuals with SCI who return to work recognized the importance of preventing subsequent injury and were taking actions to prevent subsequent injury in their daily lives and in the workplace. The significance of this research is that it is the first description of injury hospitalizations for all causes of injury by specific type of SCI, and of the associated medical outcomes of LOS and direct medical costs. Prevention of subsequent injury should be a priority. The perceptions of individuals with SCI about the severity of and their susceptibility to injury, and about the efficacy of individual and environmental actions to prevent subsequent injury, described in this research should be used to inform the development of interventions that prevent subsequent injury.
APA, Harvard, Vancouver, ISO, and other styles
49

Gichamo, Tseganeh Zekiewos. "Advancing Streamflow Forecasts Through the Application of a Physically Based Energy Balance Snowmelt Model With Data Assimilation and Cyberinfrastructure Resources." DigitalCommons@USU, 2019. https://digitalcommons.usu.edu/etd/7463.

Full text
Abstract:
The Colorado Basin River Forecast Center (CBRFC) provides forecasts of streamflow for purposes such as flood warning and water supply. Much of the water in these basins comes from spring snowmelt, and the forecasters at CBRFC currently employ a suite of models that includes a temperature-index snowmelt model. While the temperature-index snowmelt model works well for weather and land cover conditions that do not deviate from those historically observed, the changing climate and alterations in land use necessitate the use of models that do not depend on calibrations based on past data. This dissertation reports work done to overcome these limitations by using a snowmelt model based on physically invariant principles that depends less on calibration and can directly accommodate weather and land use changes. The first part of the work developed the ability to update the conditions represented in the model based on observations, a process referred to as data assimilation, and evaluated the resulting improvements to the snowmelt-driven streamflow forecasts. The second part of the research was the development of web services that enable automated and efficient access to and processing of input data for the hydrological models, as well as parallel processing methods that speed up model executions. These tasks enable the more detailed models and data assimilation methods to be used more efficiently for streamflow forecasts.
APA, Harvard, Vancouver, ISO, and other styles
50

Andersen, Brandon Thomas. "Multi-Processor Computation of Thrombus Growth and Embolization in a Model of Blood-Biomaterial Interaction Based on Fluid Dynamics." BYU ScholarsArchive, 2012. https://scholarsarchive.byu.edu/etd/3465.

Full text
Abstract:
This work describes the development and testing of a real-time three-dimensional computational fluid dynamics simulation of thrombosis and embolization to be used in the design of blood-contacting devices. Features of the model include the adhesion and aggregation of blood platelets on device material surfaces, shear and chemical activation of blood platelets, and embolization of platelet aggregates due to shear forces. As a thrombus develops, blood is diverted from its regular flow field. If shear forces on a thrombus are sufficient to overcome the strength of adhesion, the thrombus is dislodged from the wall. Development of the model included preparing thrombosis and embolization routines to run in a parallel processing configuration, and estimating necessary parameters for the model, including the adhesion strength of platelet conglomerations to the device surfaces and the threshold criterion for the coalescence of neighboring thrombi. Validation of the model shows that the effect of variations in geometry may be accurately predicted through computational simulation. This work is based on previous work by Paul Goodman, Daniel Lattin, Jeff Ashton, and Denzel Frost.
APA, Harvard, Vancouver, ISO, and other styles