
Dissertations / Theses on the topic 'Virtual memory systems'


Consult the top 50 dissertations / theses for your research on the topic 'Virtual memory systems.'


1

Griffiths, R. B. "Virtual memory systems using magnetic bubble memory." Thesis, Bucks New University, 1985. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.356215.

2

Ramesh, Bharath. "Samhita: Virtual Shared Memory for Non-Cache-Coherent Systems." Diss., Virginia Tech, 2013. http://hdl.handle.net/10919/23687.

Abstract:
Among the key challenges of computing today are the emergence of many-core architectures and the resulting need to effectively exploit explicit parallelism. Indeed, programmers are striving to exploit parallelism across virtually all platforms and application domains. The shared memory programming model effectively addresses the parallelism needs of mainstream computing (e.g., portable devices, laptops, desktop, servers), giving rise to a growing ecosystem of shared memory parallel techniques, tools, and design practices. However, to meet the extreme demands for processing and memory of critical problem domains, including scientific computation and data intensive computing, computing researchers continue to innovate in the high-end distributed memory architecture space to create cost-effective and scalable solutions. The emerging distributed memory architectures are both highly parallel and increasingly heterogeneous. As a result, they do not present the programmer with a cache-coherent view of shared memory, either across the entire system or even at the level of an individual node. Furthermore, it remains an open research question which programming model is best for the heterogeneous platforms that feature multiple traditional processors along with accelerators or co-processors. Hence, we have two contradicting trends. On the one hand, programming convenience and the presence of shared memory     call for a shared memory programming model across the entire heterogeneous system. On the other hand, increasingly parallel and heterogeneous nodes lacking cache-coherent shared memory call for a message passing model. In this dissertation, we present the architecture of Samhita, a distributed shared memory (DSM) system that addresses the challenge of providing shared memory for non-cache-coherent systems. We define regional consistency (RegC), the memory consistency model implemented by Samhita. We present performance results for Samhita on several computational kernels and benchmarks, on both cluster supercomputers and heterogeneous systems. The results demonstrate the promising potential of Samhita and the RegC model, and include the largest scale evaluation by a significant margin for any DSM system reported to date.
Ph. D.
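As background for the programming model Samhita preserves, the following minimal sketch (plain C with POSIX threads on a single cache-coherent node; it is not Samhita's API and says nothing about the RegC model) shows the conventional shared memory style that the dissertation extends to non-cache-coherent clusters.

    /* Minimal sketch of the shared memory programming model on one node:
     * every thread reads the same array directly; no explicit messages.
     * Build with: cc -pthread sum.c */
    #include <pthread.h>
    #include <stdio.h>

    #define NTHREADS 4
    #define N 1000000

    static double data[N];            /* memory visible to every thread */
    static double partial[NTHREADS];

    static void *worker(void *arg) {
        long id = (long)arg;
        double s = 0.0;
        for (long i = id; i < N; i += NTHREADS)
            s += data[i];             /* plain loads from shared memory */
        partial[id] = s;
        return NULL;
    }

    int main(void) {
        pthread_t t[NTHREADS];
        for (long i = 0; i < N; i++) data[i] = 1.0;
        for (long i = 0; i < NTHREADS; i++)
            pthread_create(&t[i], NULL, worker, (void *)i);
        double total = 0.0;
        for (long i = 0; i < NTHREADS; i++) {
            pthread_join(t[i], NULL);
            total += partial[i];
        }
        printf("sum = %.0f\n", total);
        return 0;
    }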
3

Martinez Peck, Mariano. "Application-Level Virtual Memory for Object-Oriented Systems." PhD thesis, Université des Sciences et Technologie de Lille - Lille I, 2012. http://tel.archives-ouvertes.fr/tel-00764991.

Abstract:
During the execution of object-oriented applications, several million objects may be created, used and finally destroyed once they are no longer referenced. Problems arise, however, when objects that are no longer used cannot be destroyed because they are still referenced. Such objects waste main memory, so applications end up using more memory than is actually required. We argue that relying on the operating system's virtual memory manager is not always appropriate, because it is completely isolated from the applications: the operating system can take into account neither the domain nor the structure of the applications, and applications have no means of controlling or influencing virtual memory management. In this thesis we present Marea, an application-driven virtual memory manager for object-oriented applications. It is an original solution that lets developers manage virtual memory at the application level. The developers of an application can instruct our system to free main memory by transferring unused but still referenced objects to secondary storage (such as a hard disk). Besides describing the model and the algorithms underlying Marea, we present our implementation in the Pharo language. Our approach has been validated both qualitatively and quantitatively: experiments and measurements on real-world applications show that Marea can reduce the memory footprint by 25% and up to 40%.
4

Milouchev, Alexandre (Alexandre M. ). "Estimating memory locality for virtual machines on NUMA systems." Thesis, Massachusetts Institute of Technology, 2013. http://hdl.handle.net/1721.1/85448.

Abstract:
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2013.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 59-61).
The multicore revolution sparked another, similar movement towards scalable memory architectures. With most machines nowadays exhibiting non-uniform memory access (NUMA) properties, software and operating systems have seen the necessity to optimize their memory management to take full advantage of such architectures. Type 1 (native) hypervisors, in particular, are required to extract maximum performance from the underlying hardware, as they often run dozens of virtual machines (VMs) on a single system and provide clients with performance guarantees that must be met. While VM memory demand is often satisfied by CPU caches, memory-intensive workloads may induce a higher rate of last-level cache misses, requiring more accesses to RAM. On today's typical NUMA systems, accessing local RAM is approximately 50% faster than remote RAM. We discovered that current-generation processors from major manufacturers do not provide inexpensive ways to characterize the memory locality achieved by VMs and their constituents. Instead, we present in this thesis a series of techniques based on statistical sampling of memory that produce powerful estimates for NUMA locality and related metrics. Our estimates offer tremendous insight on inefficient placement of VMs and memory, and can be a solid basis for algorithms aiming at dynamic reorganization for improvements in locality, as well as NUMA-aware CPU scheduling algorithms.
by Alexandre Milouchev.
M. Eng.
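The thesis's estimation techniques live inside the hypervisor; as a rough user-level illustration of the same sampling idea on Linux (an assumption of this sketch, not the thesis's method), the code below samples pages of an mmap-ed buffer, asks the kernel which NUMA node each sampled page resides on via move_pages(), and reports the estimated fraction of pages local to the calling thread. The function name sample_locality and the sample count are invented for the example; build with -lnuma.

    #define _GNU_SOURCE
    #include <numaif.h>       /* move_pages() */
    #include <numa.h>         /* numa_node_of_cpu() */
    #include <sched.h>        /* sched_getcpu() */
    #include <sys/mman.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    #define SAMPLES 256

    /* Estimate the fraction of a page-aligned buffer that is on the local node. */
    double sample_locality(void *buf, size_t bytes) {
        long pagesz = sysconf(_SC_PAGESIZE);
        size_t npages = bytes / pagesz;
        void *pages[SAMPLES];
        int   status[SAMPLES];
        if (npages == 0) return -1.0;

        for (int i = 0; i < SAMPLES; i++)        /* pick random pages to sample */
            pages[i] = (char *)buf + (rand() % npages) * pagesz;

        /* With nodes == NULL, move_pages() only reports each page's node. */
        if (move_pages(0, SAMPLES, pages, NULL, status, 0) != 0)
            return -1.0;

        int local_node = numa_node_of_cpu(sched_getcpu());
        int local = 0, valid = 0;
        for (int i = 0; i < SAMPLES; i++) {
            if (status[i] >= 0) {                /* negative: not resident or error */
                valid++;
                if (status[i] == local_node) local++;
            }
        }
        return valid ? (double)local / valid : 0.0;
    }

    int main(void) {
        size_t bytes = 64u << 20;                /* 64 MiB test buffer */
        void *buf = mmap(NULL, bytes, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        memset(buf, 1, bytes);                   /* touch so pages are resident */
        printf("estimated local fraction: %.2f\n", sample_locality(buf, bytes));
        return 0;
    }

An estimate well below 1.0 is the kind of signal that a placement or migration policy could act on.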
5

Huffman, Michael John. "JDiet: Footprint Reduction for Memory-constrained Systems." DigitalCommons@CalPoly, 2009. https://digitalcommons.calpoly.edu/theses/108.

Abstract:
Main memory remains a scarce computing resource. Even though main memory is becoming more abundant, software applications are inexorably engineered to consume as much memory as is available. For example, expert systems, scientific computing, data mining, and embedded systems commonly suffer from the lack of main memory availability. This thesis introduces JDiet, an innovative memory management system for Java applications. The goal of JDiet is to provide the developer with a highly configurable framework to reduce the memory footprint of a memory-constrained system, enabling it to operate on much larger working sets. Inspired by buffer management techniques common in modern database management systems, JDiet frees main memory by evicting non-essential data to a disk-based store. A buffer retains a fixed amount of managed objects in main memory. As non-resident objects are accessed, they are swapped from the store to the buffer using an extensible replacement policy. While the Java virtual machine naïvely delegates virtual memory management to the operating system, JDiet empowers the system designer to select both the managed data and replacement policy. Guided by compile-time configuration, JDiet performs aspect-oriented bytecode engineering, requiring no explicit coupling to the source or compiled code. The results of an experimental evaluation of the effectiveness of JDiet are reported. A JDiet-enabled XML DOM parser is capable of parsing and processing over 200% larger input documents by sacrificing less than an order of magnitude in performance.
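JDiet itself rewrites Java bytecode; purely as a language-neutral sketch of the buffer-management idea it borrows from database systems (every name here, such as access_record, is invented for the illustration), the C fragment below keeps a fixed number of records resident in memory and evicts the least-recently-used one to a disk-based store when a non-resident record is touched.

    #include <stdio.h>
    #include <string.h>

    #define SLOTS 4           /* records kept resident in main memory */
    #define RECSZ 256

    struct slot { int id; long last_use; int used; char data[RECSZ]; };
    static struct slot buf[SLOTS];
    static long tick;

    static void evict(FILE *store, const struct slot *s) {
        fseek(store, (long)s->id * RECSZ, SEEK_SET);    /* one file slot per id */
        fwrite(s->data, 1, RECSZ, store);
    }

    static void fault_in(FILE *store, struct slot *s, int id) {
        fseek(store, (long)id * RECSZ, SEEK_SET);
        if (fread(s->data, 1, RECSZ, store) != RECSZ)
            memset(s->data, 0, RECSZ);                  /* never written before */
        s->id = id;
        s->used = 1;
    }

    /* Return the resident copy of record `id`, evicting the LRU record if needed. */
    char *access_record(FILE *store, int id) {
        int victim = 0;
        for (int i = 0; i < SLOTS; i++) {
            if (buf[i].used && buf[i].id == id) {       /* hit */
                buf[i].last_use = ++tick;
                return buf[i].data;
            }
            if (!buf[i].used) victim = i;               /* prefer a free slot  */
            else if (buf[victim].used && buf[i].last_use < buf[victim].last_use)
                victim = i;                             /* otherwise track LRU */
        }
        if (buf[victim].used) evict(store, &buf[victim]);
        fault_in(store, &buf[victim], id);
        buf[victim].last_use = ++tick;
        return buf[victim].data;
    }

    int main(void) {
        FILE *store = fopen("store.bin", "w+b");        /* the disk-based store */
        snprintf(access_record(store, 7), RECSZ, "record seven");
        for (int id = 1; id <= 4; id++) access_record(store, id);  /* evicts 7 */
        printf("%s\n", access_record(store, 7));        /* faulted back from disk */
        fclose(store);
        return 0;
    }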
6

Musunuru, Venkata Krishna Kanth. "Virtuo-ITS: An Interactive Tutoring System to Teach Virtual Memory Concepts of an Operating System." Wright State University / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=wright1495481049986755.

7

Mohindra, Ajay. "Issues in the design of distributed shared memory systems." Diss., Georgia Institute of Technology, 1993. http://hdl.handle.net/1853/9123.

8

RAMAN, VENKATESH. "A STUDY OF CLUSTER PAGING METHODS TO BOOST VIRTUAL MEMORY PERFORMANCE." University of Cincinnati / OhioLINK, 2002. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1014062558.

9

Petit Martí, Salvador Vicente. "Efficient Home-Based protocols for reducing asynchronous communication in shared virtual memory systems." Doctoral thesis, Universitat Politècnica de València, 2008. http://hdl.handle.net/10251/2908.

Abstract:
This thesis presents an exhaustive evaluation of the class of distributed memory systems known as Shared Virtual Memory systems. These systems have characteristics that make them especially attractive, such as their relatively low cost, high portability, and shared memory programming paradigm. The evaluation consists of two parts. The first details the design principles and the state of the art of research on this type of system. The second studies the behaviour of a representative set of parallel workloads with respect to three characterization axes closely related to performance in these systems. While the first part puts forward the hypothesis that asynchronous communication is one of the main causes of performance loss in Shared Virtual Memory systems, the second not only confirms it, but also offers a detailed analysis of the workloads that yields information about the potential asynchronous communication as a function of different system parameters. The results of the evaluation are used to propose two new protocols for the operation of these systems that use a minimum of hardware resources, achieving performance similar to, and in some cases even better than, systems that use special-purpose hardware circuits to reduce asynchronous communication. In particular, one of the proposed protocols is compared with a well-known hardware technique for reducing asynchronous communication, obtaining satisfactory results that are complementary to the compared technique. All the models and techniques used in this work have been implemented and evaluated using a new simulation environment developed in the context of this work.
Petit Martí, SV. (2003). Efficient Home-Based protocols for reducing asynchronous communication in shared virtual memory systems [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/2908
10

Costa, Prats Juan José. "Efficient openMP over sequentially consistent distributed shared memory systems." Doctoral thesis, Universitat Politècnica de Catalunya, 2011. http://hdl.handle.net/10803/81012.

Abstract:
Nowadays clusters are one of the most used platforms in High Performance Computing, and most programmers use the Message Passing Interface (MPI) library to program their applications on these distributed platforms to get maximum performance, although it is a complex task. On the other hand, OpenMP has been established as the de facto standard for programming applications on shared memory platforms because it is easy to use and obtains good performance without too much effort. So, could it be possible to join both worlds? Could programmers use the ease of OpenMP on distributed platforms? Many researchers think so, and one of the ideas developed is distributed shared memory (DSM), a software layer on top of a distributed platform that gives applications an abstract shared memory view. Even though it seems a good solution, it also has some drawbacks: memory coherence between the nodes in the platform is difficult to maintain (complex management, scalability issues, high overhead, among others), and the latency of remote-memory accesses can be orders of magnitude greater than on a shared bus because of the interconnection network. This research improves the performance of OpenMP applications executed on distributed memory platforms using a DSM with sequential consistency, thoroughly evaluating the results on the NAS Parallel Benchmarks. The vast majority of existing DSMs use a relaxed consistency model because it avoids some major problems in the area. In contrast, we use a sequential consistency model because exposing these otherwise hidden problems may lead to solutions that apply to both models. The main idea behind this work is that both runtimes, OpenMP and the DSM layer, should cooperate to achieve good performance; otherwise they interfere with each other and degrade the final performance of applications. We develop three contributions to improve the performance of these applications: (a) a technique to avoid false sharing at runtime, (b) a technique to mimic MPI behaviour, where produced data is forwarded to its consumers, and (c) a mechanism to avoid the network congestion caused by DSM coherence messages. The NAS Parallel Benchmarks are used to test the contributions. The results of this work show that the impact of false sharing depends on the application. Another result is that moving the data flow out of the critical path and forwarding data as early as possible, as MPI does, benefits the final application performance. Additionally, this data movement is usually concentrated at single points and affects application performance because of the limited bandwidth of the network, so mechanisms are needed that distribute this data over the computation time using an otherwise idle network. Finally, the results show that the proposed contributions improve the performance of OpenMP applications in this kind of environment.
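To make the false-sharing problem of contribution (a) concrete, the sketch below (standard OpenMP on a single shared-memory node, not the thesis's DSM runtime) has each thread increment its own counter: in the packed array the counters share a cache line or page, so every write invalidates the neighbours' copy, while padding each counter to its own page removes the interference.

    /* Build with: cc -fopenmp false_sharing.c */
    #include <omp.h>
    #include <stdio.h>

    #define NTHREADS 4
    #define PAD 4096                 /* one page per counter in the padded case */

    long packed[NTHREADS];                                       /* falsely shared */
    struct { long v; char pad[PAD - sizeof(long)]; } padded[NTHREADS];

    int main(void) {
        #pragma omp parallel num_threads(NTHREADS)
        {
            int id = omp_get_thread_num();
            for (long i = 0; i < 10000000; i++) {
                packed[id]++;        /* neighbours' writes hit the same line/page */
                padded[id].v++;      /* private page: no coherence traffic */
            }
        }
        printf("%ld %ld\n", packed[0], padded[0].v);
        return 0;
    }

On a page-based DSM the same pattern bounces whole pages between nodes, which is why detecting it at runtime matters.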
11

Bonnier, Victor. "Comparison between OpenStack virtual machines and Docker containers in regards to performance." Thesis, Blekinge Tekniska Högskola, Institutionen för programvaruteknik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-19934.

Abstract:
Cloud computing is a fast-growing technology which more and more companies are starting to use. When deploying a cloud computing application it is important to know what kind of technology you should use, and two popular technologies are containers and virtual machines. The objective of this study was to find out how the performance differs between Docker containers and OpenStack virtual machines in regards to memory usage, CPU utilization, boot time and throughput, from a scalability perspective, when scaling between two and four instances of containers and virtual machines. The comparison was done by having two different virtual machines running: one with Docker that ran the containers, and another machine with OpenStack that was running a stack of virtual machines. To gather the data from the virtual machines I used the command "htop", and to get the data from the containers I used the command "docker stats". The results from the experiment favored the Docker containers: the boot time of the virtual machines was between 280-320 seconds while the containers booted in 5-8 seconds, and the memory usage of the virtual machines was more than double that of the containers. The CPU utilization and throughput also favored the containers, and the gap in performance increased when scaling the application out to four instances in all cases except for the throughput when adding information to a database. The conclusion that can be drawn is that Docker containers are favored over OpenStack virtual machines from a performance perspective. There are still other aspects to consider when choosing which technology to use for deploying a cloud application, such as security.
12

Mathiason, Gunnar. "Segmentation in a Distributed Real-Time Main-Memory Database." Thesis, University of Skövde, Department of Computer Science, 2002. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-737.

Abstract:

To achieve better scalability, a fully replicated, distributed, main-memory database is divided into subparts, called segments. Segments may have individual degrees of redundancy and other properties that can be used for replication control. Segmentation is examined as an opportunity to decrease replication effort, lower memory requirements and shorten node recovery times. Typical usage scenarios are distributed databases with many nodes where only a small number of the nodes share information. We present a framework for virtual full replication that implements segments with scheduled replication of updates between sharing nodes.

Selective replication control needs information about the application semantics that is specified using segment properties, which includes consistency classes and other properties. We define a syntax for specifying the application semantics and segment properties for the segmented database. In particular, properties of segments that are subject to hard real-time constraints must be specified. We also analyze the potential improvements for such an architecture.

13

Hines, Michael R. "Techniques for collective physical memory ubiquity within networked clusters of virtual machines." Diss., Online access via UMI:, 2009.

14

Yavneh, Jonathan S. "Virtual communities in the law enforcement environment: do these systems lead to enhanced organizational memory?" Thesis, Monterey, Calif. : Naval Postgraduate School, 2008. http://edocs.nps.edu/npspubs/scholarly/theses/2008/Dec/08Dec%5FYavneh.pdf.

Abstract:
Thesis (M.A. in Security Studies (Homeland Security and Defense))--Naval Postgraduate School, December 2008.
Thesis Advisor(s): Bergin, Richard ; Josefek, Robert. "December 2008." Description based on title screen as viewed on February 5, 2009. Includes bibliographical references (p. 69-71). Also available in print.
15

KUNAPULI, UDAYKUMAR. "A STUDY OF SWAP CACHE BASED PREFETCHING TO IMPROVE VIRTUAL MEMORY PERFORMANCE." University of Cincinnati / OhioLINK, 2002. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1014063417.

16

Cárdenas Delgado, Sonia Elizabeth. "VR systems for memory assessment and depth perception." Doctoral thesis, Universitat Politècnica de València, 2018. http://hdl.handle.net/10251/94629.

Abstract:
The evolution of Virtual Reality (VR) technology has contributed to all fields, including psychology. This evolution involves improvements in hardware and software that allow more immersive experiences. In a VR environment users can perceive the sensation of "presence" and feel "immersed". These sensations are possible using VR devices such as HMDs. Nowadays, the development of HMDs has focused on improving their technical features to offer full immersion. In psychology, VR environments are research tools because they allow the use of new paradigms that cannot be employed in a real environment. There are some applications for assessing spatial memory that use basic methods of human-computer interaction. However, VR systems that incorporate stereoscopy and physical movement have not yet been exploited in psychology. In this thesis, a novel VR system combining immersive, interactive and motion features was developed. This system was used for the assessment of spatial memory and the evaluation of depth perception. For this system, a virtual maze task was designed and implemented. Two different types of interaction were integrated: a locomotion-based interaction pedaling a fixed bicycle (condition 1), and a stationary interaction using a gamepad (condition 2). The system integrated two types of display: 1) the Oculus Rift; 2) a large stereo screen. Two studies were designed to determine the efficacy of the VR system using physical movement and immersion. The first study (N=89) assessed spatial short-term memory using the Oculus Rift and the two types of interaction. The results showed that there were statistically significant differences between both conditions: the participants who performed condition 2 obtained better performance than those who performed condition 1. However, there were no statistically significant differences in satisfaction and interaction scores between the conditions. The performance on the task correlated with the performance on other classical neuropsychological tests, revealing a verisimilitude between them. The second study (N=59) involved participants with and without stereopsis and assessed depth perception by comparing the two display systems. The participants performed the task using condition 2. The results showed that the different features of the display system did not influence performance on the task between the participants with and without stereopsis. Statistically significant differences were found in favor of the HMD between the two conditions and between the two groups of participants with regard to depth perception. The participants who did not have stereopsis and could not perceive depth when using other display systems (e.g., a CAVE) nevertheless had the illusion of depth perception when they used the Oculus Rift. The study suggests that for people who do not have stereopsis, head tracking largely influences the 3D experience. The statistical results of both studies have shown that the VR system developed for this research is an appropriate tool to assess spatial short-term memory and depth perception. Therefore, VR systems that combine full immersion, interaction and movement can be a helpful tool for the assessment of human cognitive processes such as memory.
General conclusions from these studies are: 1) The VR technology and immersion provided by current HMDs are appropriate tools for psychological applications, in particular the assessment of spatial short-term memory; 2) A VR system like the one presented in this thesis could be used as a tool to assess or train adults in skills related to spatial short-term memory; 3) The two types of interaction (condition 1 and condition 2) used for navigation within the virtual maze could be helpful for use with different groups; 4) The Oculus Rift allows users without stereopsis to perceive the depth of 3D objects and have rich 3D experiences.
Cárdenas Delgado, SE. (2017). VR systems for memory assessment and depth perception [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/94629
17

Campello, Daniel Jose. "Optimizing Main Memory Usage in Modern Computing Systems to Improve Overall System Performance." FIU Digital Commons, 2016. http://digitalcommons.fiu.edu/etd/2568.

Abstract:
Operating Systems use fast, CPU-addressable main memory to maintain an application’s temporary data as anonymous data and to cache copies of persistent data stored in slower block-based storage devices. However, the use of this faster memory comes at a high cost. Therefore, several techniques have been implemented to use main memory more efficiently in the literature. In this dissertation we introduce three distinct approaches to improve overall system performance by optimizing main memory usage. First, DRAM and host-side caching of file system data are used for speeding up virtual machine performance in today’s virtualized data centers. The clustering of VM images that share identical pages, coupled with data deduplication, has the potential to optimize main memory usage, since it provides more opportunity for sharing resources across processes and across different VMs. In our first approach, we study the use of content and semantic similarity metrics and a new algorithm to cluster VM images and place them in hosts where through deduplication we improve main memory usage. Second, while careful VM placement can improve memory usage by eliminating duplicate data, caches in current systems employ complex machinery to manage the cached data. Writing data to a page not present in the file system page cache causes the operating system to synchronously fetch the page into memory, blocking the writing process. In this thesis, we address this limitation with a new approach to managing page writes involving buffering the written data elsewhere in memory and unblocking the writing process immediately. This buffering allows the system to service file writes faster and with less memory resources. In our last approach, we investigate the use of emerging byte-addressable persistent memory technology to extend main memory as a less costly alternative to exclusively using expensive DRAM. We motivate and build a tiered memory system wherein persistent memory and DRAM co-exist and provide improved application performance at lower cost and power consumption with the goal of placing the right data in the right memory tier at the right time. The proposed approach seamlessly performs page migration across memory tiers as access patterns change and/or to handle tier memory pressure.
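The second contribution changes how the operating system handles writes to pages that are not yet cached; the user-level sketch below is only a hedged analogy of that idea (names such as buffered_write are invented for the example): the writer copies its data into a staging buffer and returns immediately, while a background thread performs the slow write to the backing file.

    #include <pthread.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    #define STAGE_SZ (1 << 20)

    static char   stage[STAGE_SZ];
    static size_t staged;                     /* bytes waiting to be flushed */
    static pthread_mutex_t lk = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  cv = PTHREAD_COND_INITIALIZER;

    /* Called by the writer: never touches storage, so it never blocks on it. */
    void buffered_write(const void *data, size_t len) {
        pthread_mutex_lock(&lk);
        if (staged + len <= STAGE_SZ) {       /* drop on overflow: keeps it simple */
            memcpy(stage + staged, data, len);
            staged += len;
            pthread_cond_signal(&cv);
        }
        pthread_mutex_unlock(&lk);
    }

    /* Background flusher: does the slow, blocking work on behalf of writers. */
    void *flusher(void *arg) {
        FILE *backing = arg;
        static char local[STAGE_SZ];
        for (;;) {
            pthread_mutex_lock(&lk);
            while (staged == 0)
                pthread_cond_wait(&cv, &lk);
            size_t n = staged;
            staged = 0;
            memcpy(local, stage, n);          /* copy under the lock ...           */
            pthread_mutex_unlock(&lk);
            fwrite(local, 1, n, backing);     /* ... then do the slow I/O outside  */
            fflush(backing);
        }
        return NULL;
    }

    int main(void) {
        FILE *backing = fopen("data.log", "ab");
        pthread_t fl;
        pthread_create(&fl, NULL, flusher, backing);
        buffered_write("hello, buffered world\n", 22);   /* returns immediately */
        sleep(1);                                        /* give the flusher time */
        return 0;
    }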
18

Wright, Richard A. "Effects of Virtual Reality on the Cognitive Memory and Handgun Accuracy Development of Law Enforcement Neophytes." Scholar Commons, 2013. http://scholarcommons.usf.edu/etd/4966.

Abstract:
The purpose of this research was to investigate the effects of virtual reality training on the development of cognitive memory and handgun accuracy by law enforcement neophytes. One hundred and six academy students from 6 different academy classes were divided into two groups, experimental and control. The experimental group was exposed to virtual reality training for a period of 8 hours. The control group was exposed to the traditional, non-interactive training that occurred on a gun range, also for a period of 8 hours. After exposing the groups to their respective training, a counter-balance technique was utilized to expose both groups to a series of 3 law enforcement related scenarios. The time and number of shots that each participant used to cognitively process and solve the scenarios were collected and analyzed by group and gender. There was a significant difference, by group, in both time and accuracy, with the virtual reality group using less time and posting more accurate scores. Mean accuracy scores indicated that the male participants were more accurate in their response to the scenario administration.
19

Fassbender, Eric. "VirSchool: the effect of music on memory for facts learned in a virtual environment." PhD thesis, Australia : Macquarie University, 2009. http://hdl.handle.net/1959.14/76852.

Abstract:
Thesis (PhD)--Macquarie University, Faculty of Science, Dept. of Computing, 2009.
Bibliography: p. [265]-280.
Introduction -- Literature review -- Method -- Experiments -- Conclusion.
Video games are becoming increasingly popular and their level of sophistication comes close to that of professional movie productions. Educational institutions and corporations are beginning to use video games for teaching purposes, however, not much is known about the use and effectiveness of video games for such purposes. One even less explored factor in video games is the music that is played throughout the course of the games. Little is known about the role that this music plays in cognitive processes and what effect background music has on players' memory. It is this question that the present thesis explores by asking which effect background music has on participants' memory for facts that are learned from a virtual environment. -- To answer the research question, a computer-animated history lesson, called VirSchool, was created which used the history of the Macquarie Lighthouse in Sydney as a basis for two experiments. Different musical stimuli accompanied the audio-visual presentation of the history topic. These stimuli were tested for their effectiveness to support participants' memory. The VirSchool history lesson was first presented in a Reality Center (a highly immersive, semi-cylindrical 3 projector display system) and one soundtrack was identified which showed a statistically significant improvement in the number of facts that participants remembered correctly from the VirSchool history lesson. Furthermore, Experiment 1 investigated how variations of tempo and pitch of the musical stimuli affected memory performance. It was found that slow tempo and low pitch were beneficial for remembrance of facts from the VirSchool history lesson. -- The beneficial soundtrack that was identified in Experiment 1 was reduced in tempo and lowered in pitch and was subsequently used as the sole musical stimulus in Experiment 2. Furthermore, because of equipment failure, Experiment 2 offered the opportunity to compare memory performance of participants in the Reality Center and a 3-monitor display system, which was used as a replacement for the defect Reality Center. Results showed that, against expectation, the memory for facts from the VirSchool history lesson was significantly better in the less immersive 3-monitor display system. Moreover, manipulated background music played in the second five and a half minutes of the VirSchool history lesson in the Reality Center resulted in a statistically significant improvement of participants' remembrance of facts from the second five and a half minutes of the VirSchool history lesson. The opposite effect was observed in the 3-monitor display system where participants remembered less information from the second five and a half minutes of the VirSchool history lesson if music was played in the second five and a half minutes of the VirSchool history lesson. -- The results from the present study reveal that in some circumstances music has a significant influence on memory in a virtual environment and in others it does not. These findings contribute towards and encourage further investigation of our understanding of the role that music plays in virtual learning environments so that they may be utilised to advance learning of future generations of students.
Mode of access: World Wide Web.
280 p. ill. (some col.)
20

Silva, Ricardo Leandro Piantola da. "Uso do conceito de qualidade do conteúdo da memória em algoritmos de gerência de memória paginada." Universidade de São Paulo, 2015. http://www.teses.usp.br/teses/disponiveis/3/3141/tde-20072016-090442/.

Abstract:
When it comes to memory management in operating systems, many research groups have been developing work in the area of memory management algorithms, and several page replacement algorithms have been proposed in the recent literature. Such proposals have not produced an algorithm that performs satisfactorily as far as memory management is concerned. There is no consensus among researchers about how this problem can be treated efficiently, and the algorithms proposed have high overhead because of their complexity. The objective of this work is to propose an efficient memory management scheme composed of page fetch, placement and replacement techniques. The hypothesis of this thesis is that, to treat the memory management problem, it is better to consume computational resources determining which pages must be in memory at a given time than to spend resources deciding which page should be evicted from memory. This work presents a reanalysis of the main works whose objective is memory management performance, making it possible to draw conclusions and ideas about which factors have a positive influence on system performance. From this study, the concept of quality of memory contents is defined, along with a metric to measure it. Applying this concept, a systemic method for building memory management algorithms is devised. The method is then applied, creating the RR+ng and RRlock+ng algorithms. In the final phase of the method, the metric is applied in simulations, proving to be adequate for performing the analysis. The results show that the hypothesis of treating the memory management problem by consuming computational resources to determine which pages must be in memory, instead of which ones must leave it, holds true and seems promising.
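The quality metric defined in the thesis is its own contribution; as a purely hypothetical stand-in that only conveys the flavour of scoring what is resident rather than what is evicted, the sketch below replays a short reference string under LRU and, at each step, scores the resident set by how many of its pages are referenced again within a small look-ahead window.

    #include <stdio.h>

    #define FRAMES 3
    #define W      8          /* look-ahead window for the toy quality score */

    static int refs[] = {1,2,3,4,1,2,5,1,2,3,4,5,1,1,2,6};
    static int nrefs = sizeof(refs) / sizeof(refs[0]);
    static int frame[FRAMES], stamp[FRAMES];

    static int referenced_soon(int page, int pos) {
        for (int j = pos + 1; j < nrefs && j <= pos + W; j++)
            if (refs[j] == page) return 1;
        return 0;
    }

    int main(void) {
        int faults = 0;
        double quality_sum = 0.0;
        for (int f = 0; f < FRAMES; f++) frame[f] = -1;

        for (int i = 0; i < nrefs; i++) {
            int hit = -1, victim = 0;
            for (int f = 0; f < FRAMES; f++) {
                if (frame[f] == refs[i]) hit = f;
                if (stamp[f] < stamp[victim]) victim = f;    /* LRU victim */
            }
            if (hit >= 0) stamp[hit] = i + 1;
            else { faults++; frame[victim] = refs[i]; stamp[victim] = i + 1; }

            /* Toy "quality": fraction of resident pages still useful soon. */
            int useful = 0, resident = 0;
            for (int f = 0; f < FRAMES; f++)
                if (frame[f] >= 0) {
                    resident++;
                    useful += referenced_soon(frame[f], i);
                }
            if (resident) quality_sum += (double)useful / resident;
        }
        printf("faults=%d  mean quality=%.2f\n", faults, quality_sum / nrefs);
        return 0;
    }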
21

Saad, Ibrahim Mohamed Mohamed. "Extracting Parallelism from Legacy Sequential Code Using Transactional Memory." Diss., Virginia Tech, 2016. http://hdl.handle.net/10919/71861.

Abstract:
Increasing the number of processors has become the mainstream approach in modern chip design. However, most applications are designed or written for single-core processors, so they do not benefit from the numerous underlying computation resources. Moreover, there exists a large base of legacy software which would require an immense effort and cost of rewriting and re-engineering to be made parallel. In the past decades, there has been a growing interest in automatic parallelization, both to relieve programmers from the painful and error-prone manual parallelization process and to cope with the architecture trend of multi-core and many-core CPUs. Automatic parallelization techniques vary in properties such as: the level of parallelism (e.g., instructions, loops, traces, tasks); the need for custom hardware support; the use of optimistic execution versus conservative decisions; online, offline or both; and the level of source code exposure. Transactional Memory (TM) has emerged as a powerful concurrency control abstraction: it simplifies parallel programming to the level of coarse-grained locking while achieving fine-grained locking performance. This dissertation exploits TM as an optimistic execution approach for transforming a sequential application into a parallel one. The design and implementation of two frameworks that support automatic parallelization, Lerna and HydraVM, are proposed, along with a number of algorithmic optimizations to make the parallelization effective. HydraVM is a virtual machine that automatically extracts parallelism from legacy sequential code (at the bytecode level) through a set of techniques including code profiling, data dependency analysis, and execution analysis. HydraVM is built by extending the Jikes RVM and modifying its baseline compiler. Correctness of the program is preserved by exploiting Software Transactional Memory (STM) to manage concurrent and out-of-order memory accesses. Our experiments show that HydraVM achieves speedups between 2× and 5× on a set of benchmark applications. Lerna is a compiler framework that automatically and transparently detects and extracts parallelism from sequential code through a set of techniques including code profiling, instrumentation, and adaptive execution. Lerna is cross-platform and independent of the programming language. The parallel execution exploits memory transactions to manage concurrent and out-of-order memory accesses, which makes Lerna very effective for sequential applications with data sharing. This thesis introduces the general conditions for embedding any transactional memory algorithm into Lerna. In addition, ordered versions of four state-of-the-art algorithms have been integrated and evaluated using multiple benchmarks including the RSTM micro-benchmarks, STAMP and PARSEC. Lerna showed great results, with an average 2.7× (and up to 18×) speedup over the original (sequential) code. While prior research shows that transactions must commit in order to preserve program semantics, enforcing this ordering imposes scalability constraints at large numbers of cores. In this dissertation, we eliminate the need to commit transactions sequentially without affecting program consistency. This is achieved by building a cooperation mechanism in which transactions can safely forward some changes. This approach eliminates some of the false conflicts and increases the concurrency level of the parallel application. This thesis proposes a set of commit order algorithms that follow the aforementioned approach.
Interestingly, using the proposed commit-order algorithms, the peak gain over the sequential non-instrumented execution is 10× in the RSTM micro-benchmarks and 16.5× in STAMP. Another main contribution is to enhance the concurrency and the performance of TM in general, and its usage for parallelization in particular, by extending the TM primitives. The extended TM primitives extract the embedded low-level application semantics without affecting the TM abstraction. Furthermore, as the proposed extensions capture common code patterns, they can be handled automatically through the compilation process; in this work, that was done by modifying the GCC compiler to support our TM extensions. Results showed speedups of up to 4× on different applications, including micro-benchmarks and STAMP. Our final contribution is supporting the commit order through Hardware Transactional Memory (HTM). The HTM contention manager cannot be modified because it is implemented inside the hardware. Given this constraint, we exploit HTM to reduce the transactional execution overhead by proposing two novel commit order algorithms and a hybrid reduced hardware algorithm. The use of HTM improves performance by up to 20%.
Ph. D.
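As a small taste of transaction-based parallel execution (this uses GCC's stock transactional memory support enabled by -fgnu-tm, not Lerna or HydraVM, whose transformations are applied automatically to unmodified sequential code), the sketch below wraps conflicting updates from parallel loop iterations in memory transactions instead of a lock; on conflict a transaction is rolled back and re-executed.

    /* Build with: gcc -fgnu-tm -fopenmp tm_histogram.c */
    #include <stdio.h>

    #define N 100000

    int histogram[16];

    int main(void) {
        #pragma omp parallel for
        for (int i = 0; i < N; i++) {
            int bucket = (i * 2654435761u) % 16;   /* stands in for a data-dependent index */
            __transaction_atomic {                 /* conflicting updates are detected */
                histogram[bucket]++;               /* and the transaction re-executed  */
            }
        }
        long total = 0;
        for (int b = 0; b < 16; b++) total += histogram[b];
        printf("total = %ld (expected %d)\n", total, N);
        return 0;
    }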
22

Somsing, Autcharaporn. "Understanding the determinants of creativity at an individual and team level." Thesis, Montpellier, 2016. http://www.theses.fr/2016MONTD046.

Abstract:
Many organizations have relied on creativity to outclass their competitors and the knowledge of how to support creativity has been critical. Generally, creativity could be derived from individuals or a group of individuals working together. Hence, in this thesis, we aim to provide a better understanding on how to facilitate creativity at both individual and team levels. Precisely, for the team level, we focus on virtual team creativity which has been under-researched and challenging to discover. The four articles in this dissertation aim to provide a better understanding and identify the determinants of both individual and virtual team creativity. We have reviewed individual creativity literature to extend the understanding of employee creativity. The review suggests that it is more efficient to consider both individual and contextual factors in order to assess employee creativity. Still, the interactions between individual and contextual factors are varied. Therefore, we suggest considering creativity fit approaches between individual and contextual factors derived from the review and we also provide the comprehensive practices for human resource management. In addition, theoretically, several theorists suggest the close relation between risk-taking and employee creativity whereas very few studies have investigated its relations. The second article confirms that there is a positive relation between risk-taking and employee creativity and also demonstrate that individual and contextual factors from both risk and creativity literature are mutually impacted on risk-taking. Later, based on the close relation between risk and creativity theories, we develop the creative behavior of managers by integrating the behavioral agency model and dynamic capabilities theories. The objective of this theoretical model is to explain how the creative behavior of managers in making an important strategic decision could be viewed as dynamic and evolved over time. For virtual team creativity, we aim to examine the determinants of virtual team creativity which have been recently explored. We found that Transactive Memory Systems, which have been challenging due to their importance with regard to virtual teams, have a positive impact on virtual team creativity. The findings extend both creativity and virtual team literature and provide important practical implications for virtual teams. Overall, the investigation of individual creativity is also useful for virtual team members at an individual level and managers’ creative behavior could also assess the creative behavior of virtual team managers. These four articles could in fact (1) provide the global view of employee creativity by proposing the fit approach; (2) examine the precise relations of risk-taking and employee creativity; (3) extend the creativity theory by integrating BAM and the dynamic capabilities theory to consider creativity as dynamic; (4) and reveal the critical roles of TMS in virtual team creativity
23

Sinha, Udayan Prabir. "Memory Management Error Detection in Parallel Software using a Simulated Hardware Platform." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-219606.

Abstract:
Memory management errors in concurrent software running on multi-core architectures can be difficult and costly to detect and repair. Examples of errors are usage of uninitialized memory, memory leaks, and data corruptions due to unintended overwrites of data that are not owned by the writing entity. If memory management errors could be detected at an early stage, for example when using a simulator before the software has been delivered and integrated in a product, significant savings could be achieved. This thesis investigates and develops methods for detection of usage of uninitialized memory in software that runs on a virtual hardware platform. The virtual hardware platform has models of Ericsson Radio Base Station hardware for baseband processing and digital radio processing. It is a bit-accurate representation of the underlying hardware, with models of processors and peripheral units, and it is used at Ericsson for software development and integration. There are tools available, such as Memcheck (Valgrind), and MemorySanitizer and AddressSanitizer (Clang), for memory management error detection. The features of such tools have been investigated, and memory management error detection algorithms were developed for a given processor’s instruction set. The error detection algorithms were implemented in a virtual platform, and issues and design considerations reflecting the application-specific instruction set architecture of the processor, were taken into account. A prototype implementation of memory error presentation with error locations mapped to the source code of the running program, and presentation of stack traces, was done, using functionality from a debugger. An experiment, using a purpose-built test program, was used to evaluate the error detection capability of the algorithms in the virtual platform, and for comparison with the error detection capability of Memcheck. The virtual platform implementation detects all known errors, except one, in the program and reports them to the user in an appropriate manner. There are false positives reported, mainly due to the limited awareness about the operating system used on the simulated processor
Memory management errors in parallel software executing on multi-core architectures can be difficult to detect and costly to fix. Examples of such errors are the use of uninitialized memory, memory leaks, and data being overwritten by a process that does not own the data. If memory management errors could be detected at an early stage, for example by using a simulator that is run before the software has been delivered and integrated into a product, significant cost savings could be achieved. This thesis investigates and develops methods for detecting the use of uninitialized memory in software running on a virtual platform. The virtual platform contains models of parts of the digital hardware, for baseband and radio, found in an Ericsson radio base station. The models are bit-accurate representations of the corresponding hardware blocks and include processors and peripheral units. The virtual platform is used at Ericsson for software development and integration. There are tools, for example Memcheck (Valgrind) as well as MemorySanitizer and AddressSanitizer (Clang), that can be used to detect memory management errors. The properties of such tools have been studied, and algorithms for detecting memory management errors have been developed for a specific processor and its instruction set. The algorithms have been implemented in a virtual platform, and requirements and design considerations reflecting the application-specific instruction set of the chosen processor have been addressed. A prototype for presenting memory management errors, which points out the source code lines and the call stack of the locations where errors were found, has been developed using a debugger. An experiment using a purpose-built program has been used to evaluate the error detection capability of the algorithms implemented in the virtual platform and to compare it with the error detection capability of Memcheck. For the program used, the algorithms implemented in the virtual platform detect all known errors except one. The algorithms also report false positives; these reports are mainly a result of the current implementation's limited knowledge of the operating system used on the simulated processor.
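The shadow-memory approach used by tools such as Memcheck can be illustrated with a small sketch. The code below is not taken from the thesis; it is a minimal, hypothetical model in C of how a simulator might track initialization state per byte of target memory and flag reads of bytes that were never written. All names (shadow, sim_store, sim_load) and sizes are invented for illustration.

/* Minimal sketch (not from the thesis): per-byte shadow memory that marks
 * which simulated addresses have been written, and flags reads of bytes
 * that were never initialized. Names and sizes are illustrative only. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define MEM_SIZE 4096u

static uint8_t memory[MEM_SIZE];      /* simulated target memory      */
static uint8_t shadow[MEM_SIZE];      /* 0 = undefined, 1 = defined   */

static void sim_store(unsigned addr, uint8_t value)
{
    memory[addr] = value;
    shadow[addr] = 1;                 /* the byte is now initialized  */
}

static uint8_t sim_load(unsigned addr, unsigned pc)
{
    if (shadow[addr] == 0)            /* report use of uninitialized memory */
        fprintf(stderr, "warning: read of uninitialized byte at 0x%x (pc=0x%x)\n",
                addr, pc);
    return memory[addr];
}

int main(void)
{
    memset(shadow, 0, sizeof shadow); /* everything starts undefined */

    sim_store(0x10, 42);
    (void)sim_load(0x10, 0x1000);     /* fine: byte was written first */
    (void)sim_load(0x20, 0x1004);     /* flagged: never initialized   */
    return 0;
}

A real implementation would also propagate definedness through registers and arithmetic, which is where most of the complexity of such tools lies.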
APA, Harvard, Vancouver, ISO, and other styles
24

Thrash, Tyler. "Categorical bias in transient and enduring spatial representation." Miami University / OhioLINK, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=miami1302800868.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Mostert, Sias. "The transputer virtual memory system." Thesis, Stellenbosch : Stellenbosch University, 1990. http://hdl.handle.net/10019.1/69047.

Full text
Abstract:
Thesis (MIng.)--Stellenbosch University, 1990.
ENGLISH ABSTRACT: The transputer virtual memory system provides, for the transputer without memory management primitives, a viable virtual memory system. This report evaluates the architecture and its parameters. The basic software is also implemented and described. The disk subsystem, with software and hardware, is also evaluated in a single-disk environment. It is shown that the unique features of the TVM system have advantages and disadvantages when compared to conventional virtual memory systems. One of the advantages is that a conventional operating system with memory protection can now also be implemented on the transputer. The main conclusion is that this is a performance-effective implementation of a virtual memory system with unique features that should be exploited further.
AFRIKAANS SUMMARY: The transputer virtual memory system provides, for a processor without virtual memory support, an effective virtual memory system. The report evaluates the architecture and its parameters. The disk subsystem, with software and hardware, is also evaluated in a single-disk-interface environment. It is shown that the unique features of the TVM (transputer virtual memory) have advantages and disadvantages when compared with conventional virtual memory systems. One of the advantages is that a conventional operating system with memory protection can now be implemented on a transputer. The main disadvantage, a consequence of the specific architecture, is only a 15% degradation in performance; this is, however, only experienced over a certain data size and typically does not arise when large programs are run.
APA, Harvard, Vancouver, ISO, and other styles
26

Baruchi, Artur. "Memory Dispatcher: uma contribuição para a gerência de recursos em ambientes virtualizados." Universidade de São Paulo, 2010. http://www.teses.usp.br/teses/disponiveis/3/3141/tde-17082010-110618/.

Full text
Abstract:
Virtual machines have gained great importance with the advent of multi-core processors (on the x86 platform) and the falling cost of hardware components such as memory. Because of this substantial increase in computing power, the challenge arose of taking advantage of the idle resources found in corporate environments, which are increasingly populated by multi-core machines with several gigabytes of memory. Virtualization, although an old concept, became popular again in this scenario because it made it possible to make better use of the now-abundant computational resources. The main focus of this work is to study some of the main techniques for managing computational resources in virtualized environments. Although many of the concepts applied in the design of virtual machine monitors were ported from conventional operating systems with little or no change, some resources are still difficult to virtualize efficiently because of paradigms inherited from those same operating systems. Finally, the Memory Dispatcher (MD) is presented, a memory management mechanism whose main objective is to distribute memory among virtual machines more effectively. This mechanism, implemented in C, was tested on the Xen virtual machine monitor and showed memory gains of up to 70%.
Virtual machines have gained great importance with the advent of multi-core processors (on the x86 platform) and the low cost of hardware components such as physical memory. This increase in computing power created a new challenge: taking advantage of idle resources. Virtualization, although an old concept, became popular again in research centers and corporations because it allows those idle resources to be exploited. This work presents the main techniques for managing computational resources in virtualized environments. Although many of the concepts used in virtual machine monitor designs were ported, with minimal changes, from conventional operating systems, some resources are still difficult to virtualize efficiently due to old paradigms still present in operating system designs. Finally, the Memory Dispatcher (MD), a memory management mechanism, is presented. The main objective of the MD is to improve memory sharing among virtual machines. The mechanism was developed in C and tested on the Xen virtual machine monitor, showing memory gains of up to 70%.
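The abstract does not spell out the Memory Dispatcher's algorithm, so the following is only a speculative sketch of the general idea of redistributing a fixed pool of RAM among virtual machines in proportion to their measured demand, as a balloon-driver-based manager on Xen might do. The proportional policy and all identifiers (vm_t, rebalance) are assumptions, not the MD implementation.

/* Hypothetical sketch of proportional memory redistribution among VMs.
 * Each VM reports a demand estimate (e.g., from page-fault or working-set
 * statistics); the manager splits a fixed host pool proportionally while
 * respecting a per-VM minimum. Targets would then be enforced via
 * ballooning. Not the actual MD code. */
#include <stdio.h>

typedef struct {
    const char *name;
    unsigned long demand_mb;    /* estimated working set              */
    unsigned long target_mb;    /* allocation decided by the manager  */
} vm_t;

static void rebalance(vm_t *vms, int n, unsigned long pool_mb,
                      unsigned long min_mb)
{
    unsigned long total_demand = 0;
    for (int i = 0; i < n; i++)
        total_demand += vms[i].demand_mb;

    unsigned long remaining = pool_mb;
    for (int i = 0; i < n; i++) {
        unsigned long share = total_demand
            ? (pool_mb * vms[i].demand_mb) / total_demand
            : pool_mb / (unsigned long)n;
        if (share < min_mb)
            share = min_mb;             /* never starve a guest */
        if (share > remaining)
            share = remaining;
        vms[i].target_mb = share;
        remaining -= share;
    }
}

int main(void)
{
    vm_t vms[] = { {"vm1", 900, 0}, {"vm2", 300, 0}, {"vm3", 100, 0} };
    rebalance(vms, 3, 2048, 128);       /* 2 GiB pool, 128 MiB floor */
    for (int i = 0; i < 3; i++)
        printf("%s -> %lu MiB\n", vms[i].name, vms[i].target_mb);
    return 0;
}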
APA, Harvard, Vancouver, ISO, and other styles
27

Muchalski, Fernando José. "Alocação de máquinas virtuais em ambientes de computação em nuvem considerando o compartilhamento de memória." Universidade Tecnológica Federal do Paraná, 2014. http://repositorio.utfpr.edu.br/jspui/handle/1/1005.

Full text
Abstract:
Virtualization is a key technology for cloud computing that makes it possible to provide computational resources, in the form of virtual machines, for the consumption of computing services. In cloud computing environments, it is important to keep the allocation of virtual machines to physical servers under control. A suitable allocation reduces hardware, energy, and cooling costs and improves quality of service. Recent hypervisors implement mechanisms to reduce RAM consumption by sharing identical pages among virtual machines. This dissertation presents a new virtual machine allocation algorithm that seeks a balanced use of CPU, memory, disk, and network resources and, above all, considers the potential for memory sharing among virtual machines. Simulations in distinct scenarios showed that the algorithm is superior to the standard approach with respect to balanced resource use and that, when memory sharing is considered, there is a significant gain in the availability of this resource at the end of the allocations.
Virtualization is a key technology for cloud computing: it provides computational resources as virtual machines for the consumption of computing services. In cloud computing environments it is important to keep the allocation of virtual machines to physical servers under control. A good allocation brings benefits such as reduced hardware, power, and cooling costs, and it also improves quality of service. Recent hypervisors implement mechanisms to reduce RAM consumption by sharing identical pages between virtual machines. This dissertation presents a new virtual machine allocation algorithm that seeks a balanced use of CPU, memory, disk, and network and, in addition, considers the potential for sharing memory among virtual machines. Simulations on three distinct scenarios demonstrate that it is superior to the standard approach with respect to the balanced use of resources. When shared memory is considered, there is an appreciable gain in resource availability.
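As a rough illustration of the placement idea described above (balanced resource use plus memory-sharing potential), the sketch below scores candidate hosts and picks the best one. The weighting, the estimate of shareable memory, and all identifiers are hypothetical; the dissertation's actual algorithm may differ.

/* Hypothetical host-scoring sketch: prefer hosts whose CPU/memory/disk/net
 * usage stays balanced after placing the VM, and credit the memory the new
 * VM is expected to share with VMs already on that host. */
#include <math.h>
#include <stdio.h>

typedef struct { double cpu, mem, disk, net; } load_t;   /* fractions 0..1 */

static double imbalance(load_t l)
{
    double avg = (l.cpu + l.mem + l.disk + l.net) / 4.0;
    return fabs(l.cpu - avg) + fabs(l.mem - avg) +
           fabs(l.disk - avg) + fabs(l.net - avg);
}

/* Lower score is better: residual imbalance minus a bonus for the fraction
 * of the VM's memory expected to be shared on this host. */
static double score(load_t host, load_t vm, double share_estimate)
{
    load_t after = { host.cpu + vm.cpu,
                     host.mem + vm.mem * (1.0 - share_estimate),
                     host.disk + vm.disk,
                     host.net + vm.net };
    return imbalance(after) - 0.5 * share_estimate;       /* weight is arbitrary */
}

int main(void)
{
    load_t hosts[2] = { {0.60, 0.30, 0.40, 0.20}, {0.30, 0.50, 0.30, 0.30} };
    double sharing[2] = { 0.05, 0.40 };    /* e.g., same guest OS on host 2 */
    load_t vm = { 0.10, 0.20, 0.05, 0.05 };

    int best = 0;
    for (int i = 1; i < 2; i++)
        if (score(hosts[i], vm, sharing[i]) < score(hosts[best], vm, sharing[best]))
            best = i;
    printf("place VM on host %d\n", best);
    return 0;
}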
APA, Harvard, Vancouver, ISO, and other styles
28

Melo, Alba Cristina M. A. "Conception d'un système supportant des modèles de cohérence multiples pour les machines parallèles à mémoire virtuelle partagée." Grenoble INPG, 1996. http://www.theses.fr/1996INPG0108.

Full text
Abstract:
Shared-variable programming is used on parallel architectures without common memory thanks to a software layer that simulates physically shared memory. Maintaining the perfect abstraction of a single memory requires a large number of coherence operations and, consequently, causes a significant degradation of performance. To mitigate this degradation, several systems use more relaxed memory consistency models, which allow greater concurrency between accesses but complicate the programming model. The choice of a consistency model is therefore a compromise between performance and programming simplicity. These two factors depend on users' expectations and on the data access characteristics of each parallel application. This thesis presents DIVA, a shared virtual memory system that supports several memory consistency models. With DIVA, the user can choose the shared memory semantics best suited to the correct and efficient execution of an application. In addition, DIVA offers the user the possibility of defining his or her own consistency models. The existence of multiple models within DIVA guided the design choices of several other mechanisms; thus, we propose a single synchronization interface and page replacement and prefetching mechanisms adapted to a multi-model environment. A prototype of DIVA was implemented on the Intel Paragon parallel machine. The analysis of an application running under different consistency models allowed us to show that the choice of consistency model directly affects the performance of an application.
APA, Harvard, Vancouver, ISO, and other styles
29

Tran, Chinh Nguyen. "An automatic test generation system for testing virtual memory operations /." Digital version accessible at:, 1999. http://wwwlib.umi.com/cr/utexas/main.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

MIN, RUI. "USING RUNTIME INFORMATION TO IMPROVE MEMORY SYSTEM PERFORMANCE." University of Cincinnati / OhioLINK, 2005. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1134043707.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Ye, Lei. "Energy Management for Virtual Machines." Diss., The University of Arizona, 2013. http://hdl.handle.net/10150/283603.

Full text
Abstract:
Current computing infrastructures use virtualization to increase resource utilization by deploying multiple virtual machines on the same hardware. Virtualization is particularly attractive for data centers, cloud computing, and hosting services; in these environments computer systems are typically configured with fast processors, large physical memory, and huge storage capable of supporting concurrent execution of virtual machines. This high demand for resources translates directly into higher energy consumption and monetary costs, and managing the energy consumption of virtual machines is becoming increasingly critical. However, virtual machines make energy management more challenging because a layer of virtualization separates the hardware from the guest operating system executing inside a virtual machine. This dissertation addresses the challenge of designing energy-efficient storage, memory, and buffer cache for virtual machines by exploring innovative mechanisms as well as existing approaches. We analyze the architecture of the open-source virtual machine platform Xen and address energy management in each subsystem. For the storage system, we study the I/O behavior of virtual machine systems. We address the isolation between the virtual machine monitor and virtual machines, and increase the burstiness of disk accesses to improve energy efficiency. In addition, we propose transparent energy management of main memory for any type of guest operating system running inside virtual machines. Furthermore, we design a dedicated mechanism for the buffer cache, based on the fact that data-intensive applications rely heavily on a large buffer cache that occupies a majority of physical memory. We also propose a novel hybrid mechanism that is able to improve energy efficiency for any memory access. All the mechanisms achieve significant energy savings while lowering the impact on performance for virtual machines.
APA, Harvard, Vancouver, ISO, and other styles
32

Stimson, Jared M. "Forensic analysis of Windows' virtual memory incorporating the system's page-file." Thesis, Monterey, California. Naval Postgraduate School, 2008. http://hdl.handle.net/10945/3714.

Full text
Abstract:
Computer forensics is concerned with the use of computer investigation and analysis techniques to collect evidence suitable for presentation in court. The examination of volatile memory is a relatively new but important area in computer forensics. More recently, criminals have become more forensically aware and are now able to compromise computers without accessing the hard disk of the target computer. This means that the traditional incident response practice of pulling the plug will destroy the only evidence of the crime. While some techniques are available for acquiring the contents of main memory, few exist that can analyze these data in a meaningful way. One reason for this is how memory is managed by the operating system. Data belonging to one process can be distributed arbitrarily across physical memory or the hard disk, making it very difficult to recover useful information. This report focuses on how these disparate sources of information can be combined to give a single, contiguous address space for each process. Using address translation, a tool is developed to reconstruct the virtual address space of a process by combining a physical memory dump with the page-file on the hard disk.
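At the core of such a tool is ordinary page-table address translation applied offline to a memory image, with non-resident pages fetched from the page-file. The sketch below shows the general shape of a two-level x86 (32-bit, non-PAE) walk over a raw dump; the sample addresses, the structures, and the page-file handling are simplified assumptions, not the implementation from the thesis.

/* Simplified offline x86 (32-bit, non-PAE, 4 KiB pages) translation sketch:
 * walk the page directory and page table stored in a physical memory dump;
 * if an entry is not present, the data would have to be resolved from the
 * page-file instead. All helper names are illustrative. */
#include <stdint.h>
#include <stdio.h>

#define PAGE_PRESENT 0x1u

/* Read 4 bytes at a physical address from the raw memory dump. */
static uint32_t dump_read32(FILE *dump, uint32_t paddr)
{
    uint32_t v = 0;
    fseek(dump, (long)paddr, SEEK_SET);
    if (fread(&v, sizeof v, 1, dump) != 1)
        return 0;                               /* treat as not present */
    return v;
}

/* Translate a virtual address given the page-directory base (CR3).
 * Returns 1 and sets *paddr on success, 0 if the page is not resident
 * and must be fetched from the page-file. */
static int translate(FILE *dump, uint32_t cr3, uint32_t vaddr, uint32_t *paddr)
{
    uint32_t pde = dump_read32(dump, (cr3 & 0xFFFFF000u) + ((vaddr >> 22) & 0x3FFu) * 4);
    if (!(pde & PAGE_PRESENT))
        return 0;                               /* directory entry paged out */
    uint32_t pte = dump_read32(dump, (pde & 0xFFFFF000u) + ((vaddr >> 12) & 0x3FFu) * 4);
    if (!(pte & PAGE_PRESENT))
        return 0;                               /* data lives in the page-file */
    *paddr = (pte & 0xFFFFF000u) | (vaddr & 0xFFFu);
    return 1;
}

int main(int argc, char **argv)
{
    if (argc < 2) return 1;
    FILE *dump = fopen(argv[1], "rb");          /* raw physical memory image */
    if (!dump) return 1;
    uint32_t paddr;
    if (translate(dump, 0x00039000u, 0x00401000u, &paddr))   /* sample CR3/vaddr */
        printf("virtual 0x00401000 -> physical 0x%08x\n", (unsigned)paddr);
    else
        printf("page not resident: consult the page-file via the PTE contents\n");
    fclose(dump);
    return 0;
}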
APA, Harvard, Vancouver, ISO, and other styles
33

Stimson, Jared M. "Forensic analysis of Window's® virtual memory incorporating the system's page-file." Monterey, Calif. : Naval Postgraduate School, 2008. http://edocs.nps.edu/npspubs/scholarly/theses/2008/Dec/08Dec%5FStimson.pdf.

Full text
Abstract:
Thesis (M.S. in Computer Science)--Naval Postgraduate School, December 2008.
Thesis Advisor(s): Eagle, Chris S. "December 2008." Description based on title screen as viewed on February 2, 2009. Includes bibliographical references (p. 89-90). Also available in print.
APA, Harvard, Vancouver, ISO, and other styles
34

Veiga, Fellipe Medeiros. "Estudo da efetividade dos mecanismos de compartilhamento de memória em hipervisores." Universidade Tecnológica Federal do Paraná, 2015. http://repositorio.utfpr.edu.br/jspui/handle/1/1631.

Full text
Abstract:
The growing demand for large-scale virtualization environments, such as those used in data centers and computational clouds, requires efficient management of the computational resources involved. One of the most demanded resources in these environments is RAM, which is usually the main limiting factor for the number of virtual machines that can run on the same physical host. Recently, hypervisors have introduced mechanisms for transparent sharing of RAM between virtual machines, aiming to reduce the total memory demand in the system. These mechanisms "merge" identical pages found in the various virtual machines into the same physical memory frame, using a copy-on-write approach, transparently to the guest systems. The objective of this study is to present an overview of these mechanisms and to evaluate their performance and effectiveness. Results are presented from experiments conducted with two popular hypervisors (VMware and KVM), using distinct guest operating systems (Linux and Windows) and diverse workloads (synthetic and real). The results show significant performance differences between the hypervisors as a function of the guest systems, the workloads, and time.
The growing demand for large-scale virtualization environments, such as the ones used in cloud computing, has led to a need for efficient management of computing resources. RAM is one of the most heavily demanded resources in these environments, and is usually the main factor limiting the number of virtual machines that can run on a physical host. Recently, hypervisors have introduced mechanisms for transparent memory sharing between virtual machines in order to reduce the total demand for system memory. These mechanisms "merge" identical pages detected in multiple virtual machines into the same physical memory frame, using a copy-on-write mechanism, in a manner that is transparent to the guest systems. The objective of this study is to present an overview of these mechanisms and to evaluate their performance and effectiveness. Results for two popular hypervisors (VMware and KVM) using different guest operating systems (Linux and Windows) and different workloads (synthetic and real) are presented herein. The results show significant performance differences between the hypervisors according to the guest systems, the workloads, and the execution time.
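The page "merging" described above can be illustrated by a toy content-based deduplication pass: hash each candidate page, and when two pages hash equal and compare equal byte for byte, map both owners to one read-only copy that is broken apart again on the next write (copy-on-write). This is a hypothetical illustration, not VMware's or KVM/KSM's actual code.

/* Toy same-page detection sketch: group pages by a content hash, verify
 * byte-for-byte equality, and count how many frames could be reclaimed.
 * Real hypervisors (e.g., KSM) use trees of candidate pages and mark the
 * surviving frame copy-on-write; that machinery is omitted here. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define PAGE_SIZE 4096u
#define NPAGES    8u

static uint64_t page_hash(const uint8_t *p)
{
    uint64_t h = 1469598103934665603ull;        /* FNV-1a */
    for (size_t i = 0; i < PAGE_SIZE; i++) {
        h ^= p[i];
        h *= 1099511628211ull;
    }
    return h;
}

int main(void)
{
    static uint8_t pages[NPAGES][PAGE_SIZE];    /* zero-filled by default */
    memset(pages[1], 0xAA, PAGE_SIZE);          /* page 1 has unique content */

    unsigned reclaimable = 0;
    for (unsigned i = 0; i < NPAGES; i++) {
        for (unsigned j = 0; j < i; j++) {
            if (page_hash(pages[i]) == page_hash(pages[j]) &&
                memcmp(pages[i], pages[j], PAGE_SIZE) == 0) {
                reclaimable++;                  /* page i could share j's frame */
                break;
            }
        }
    }
    printf("%u of %u frames could be merged copy-on-write\n", reclaimable, NPAGES);
    return 0;
}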
APA, Harvard, Vancouver, ISO, and other styles
35

Ragan, Eric Dennis. "Supporting Learning through Spatial Information Presentations in Virtual Environments." Diss., Virginia Tech, 2013. http://hdl.handle.net/10919/23207.

Full text
Abstract:
Though many researchers have suggested that 3D virtual environments (VEs) could provide advantages for conceptual learning, few studies have attempted to evaluate the validity of this claim. While many educational VEs share the challenge of providing learners with information within 3D spaces, few researchers have investigated what approaches are used to help learn new information from 3D spatial representations. It is not understood how well learners can take advantage of 3D layouts to help understand information. Additionally, although complex arrangements of information within 3D space can potentially allow for large amounts of information to be presented within a VE, accessing this information can become more difficult due to the increased navigational challenges.
Complicating these issues are details regarding display types and interaction devices used for educational applications. Compared to desktop displays, more immersive VE systems often provide display features (e.g., stereoscopy, increased field of view) that support improved perception and understanding of spatial information. Additionally, immersive VEs often allow more familiar, natural interaction methods (e.g., physical walking or rotation of the head and body) to control viewing within the virtual space. It is unknown how these features interact with the types of spatial information presentations to affect learning.
The research presented in this dissertation investigates these issues in order to further the knowledge of how to design VEs to support learning. The research includes six studies (five empirical experiments and one case study) designed to investigate how spatial information presentations affect learning effectiveness and learner strategies. This investigation includes consideration for the complexity of spatial information layouts, the features of display systems that could affect the effectiveness of spatial strategies, and the degree of navigational control for accessing information. Based on the results of these studies, we created a set of design guidelines for developing VEs for learning-related activities. By considering factors of virtual information presentation, as well as those based on the display-systems, our guidelines support design decisions for both the software and hardware required for creating effective educational VEs.

Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
36

Bielski, Maciej. "Nouvelles techniques de virtualisation de la mémoire et des entrées-sorties vers les périphériques pour les prochaines générations de centres de traitement de données basés sur des équipements répartis déstructurés." Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLT022/document.

Full text
Abstract:
This dissertation is positioned in the context of system disaggregation - a novel approach expected to gain popularity in the data center sector. Unlike traditional clustered systems, where resources are provided by one or more machines, in disaggregated systems resources are provided by discrete nodes, each node providing only one type of resource (compute units, memory, peripherals). Instead of the term machine, the term slot is used to describe a workload deployment unit; the slot is assembled dynamically by the system orchestrator before a workload is deployed. In the introduction we discuss the subject of disaggregation and present its benefits compared with clustered architectures. We also add a virtualization layer to the picture, since it is a crucial element of data centers: it provides isolation between deployed workloads and flexible partitioning of resources. It must, however, be adapted in order to take full advantage of disaggregation. The main contributions of this work therefore focus on virtualization-layer support for disaggregated memory and peripheral provisioning. The first main contribution presents the software-stack modifications related to flexible resizing of a virtual machine's (VM) memory. They make it possible to adjust the amount of guest RAM (i.e., the RAM used by the workload running in a VM) at runtime with the granularity of a memory section. From the software point of view, it is transparent whether the RAM comes from local or remote memory banks. The second contribution discusses the notions of memory sharing between virtual machines and VM migration in the context of disaggregation. We first present how disaggregated memory regions can be shared between VMs running on different nodes. In addition, we discuss different variants of the method for serializing concurrent accesses. We then explain that the notion of VM migration has acquired a twofold meaning with disaggregation. Because resources are disaggregated, a workload is associated with at least one compute node and one memory node; it is therefore possible for it to be migrated to different compute nodes while continuing to use the same memory, or the opposite. We discuss both cases and describe how this can open new opportunities for server consolidation. The last contribution of this thesis relates to the virtualization of disaggregated peripherals. Starting from the assumption that architectural disaggregation brings many positive effects in general, we explain why it is not immediately compatible with the direct-attachment (passthrough) technique, which is nevertheless very popular for its near-native performance. To address this limitation, we present a solution that adapts the direct-attachment concept to architectural disaggregation. With this solution, disaggregated devices can be directly attached to virtual machines as if they were plugged in locally. Moreover, the guest OS, for which the configuration of the underlying infrastructure is not visible, is itself unaffected by the changes introduced.
This dissertation is positioned in the context of system disaggregation - a novel approach expected to gain popularity in the data center sector. In traditional clustered systems, resources are provided by one or multiple machines. In contrast, in disaggregated systems resources are provided by discrete nodes, each node providing only one type of resource (CPUs, memory, or peripherals). Instead of a machine, the term slot is used to describe a workload deployment unit. The slot is dynamically assembled before a workload deployment by a unit called the system orchestrator. In the introduction of this work, we discuss the subject of disaggregation and present its benefits compared to clustered architectures. We also add a virtualization layer to the picture, as it is a crucial part of data center systems: it provides isolation between deployed workloads and flexible resource partitioning. However, the virtualization layer needs to be adapted in order to take full advantage of disaggregation. Thus, the main contributions of this work focus on virtualization-layer support for disaggregated memory and device provisioning. The first main contribution presents the software stack modifications related to flexible resizing of a virtual machine's (VM) memory. They allow the amount of guest RAM (the RAM used by the workload running in a VM) to be adjusted at runtime at the granularity of a memory section. From the software perspective, it is transparent whether the memory comes from local or remote memory banks. As a second main contribution, we discuss the notions of inter-VM memory sharing and VM migration in the disaggregation context. We first present how regions of disaggregated memory can be shared between VMs running on different nodes, regardless of whether the guests involved are co-located on the same computing node. Additionally, we discuss different flavors of concurrent access serialization methods. We then explain how the term VM migration has gained a twofold meaning. Because resources are disaggregated, a workload is associated with at least one computing node and one memory node. It is therefore possible for it to be migrated to a different computing node while keeping the same memory, or the opposite. We discuss both cases and describe how this can open new opportunities for server consolidation. The last main contribution of this dissertation is related to disaggregated peripheral virtualization. Starting from the assumption that architecture disaggregation brings many positive effects in general, we explain why it breaks the passthrough peripheral attachment technique (also known as direct attachment), which is very popular for its near-native performance. To address this limitation, we present a design that adapts the passthrough attachment concept to architecture disaggregation. With this novel design, disaggregated devices can be directly attached to VMs as if they were plugged in locally. Moreover, the modifications do not involve the guest OS itself, for which the setup of the underlying infrastructure is not visible.
APA, Harvard, Vancouver, ISO, and other styles
37

Wirzberger, Maria, René Schmidt, Maria Georgi, Wolfram Hardt, Guido Brunnett, and Günter Daniel Rey. "Effects of system response delays on elderly humans’ cognitive performance in a virtual training scenario." Springer Nature, 2019. https://monarch.qucosa.de/id/qucosa%3A34294.

Full text
Abstract:
Observed influences of system response delay in spoken human-machine dialogues are rather ambiguous and mainly focus on perceived system quality. Studies that systematically inspect effects on cognitive performance are still lacking, and effects of individual characteristics are also often neglected. Building on benefits of cognitive training for decelerating cognitive decline, this Wizard-of-Oz study addresses both issues by testing 62 elderly participants in a dialogue-based memory training with a virtual agent. Participants acquired the method of loci with fading instructional guidance and applied it afterward to memorizing and recalling lists of German nouns. System response delays were randomly assigned, and training performance was included as potential mediator. Participants’ age, gender, and subscales of affinity for technology (enthusiasm, competence, positive and negative perception of technology) were inspected as potential moderators. The results indicated positive effects on recall performance with higher training performance, female gender, and less negative perception of technology. Additionally, memory retention and facets of affinity for technology moderated increasing system response delays. Participants also provided higher ratings in perceived system quality with higher enthusiasm for technology but reported increasing frustration with a more positive perception of technology. Potential explanations and implications for the design of spoken dialogue systems are discussed.
APA, Harvard, Vancouver, ISO, and other styles
38

Sivilli, Robert. "Vision-Based Testbeds for Control System Applicaitons." Master's thesis, University of Central Florida, 2012. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/5504.

Full text
Abstract:
In the field of control systems, testbeds are a pivotal step in the validation and improvement of new algorithms for different applications. They provide a safe, controlled environment, typically with a significantly lower cost of failure than the final application. Vision systems provide nonintrusive methods of measurement that can be easily implemented for various setups and applications. This work presents methods for modeling, removing distortion from, calibrating, and rectifying single- and two-camera systems, as well as two very different applications of vision-based control system testbeds: deflection control of shape memory polymers and trajectory planning for mobile robots. First, a testbed for the modeling and control of shape memory polymers (SMP) is designed. Red-green-blue (RGB) thresholding is used to assist in the webcam-based 3D reconstruction of points of interest. A PID-based controller is designed and shown to work with SMP samples, while state-space models were identified from step input responses. The models were used to develop a linear quadratic regulator that is shown to work in simulation. Also, a simple-to-use graphical interface is designed for fast and simple testing of a series of samples. Second, a robot testbed is designed to test new trajectory planning algorithms. A template-based predictive search algorithm is investigated to process the images obtained through a low-cost webcam vision system, which is used to monitor the testbed environment. A user-friendly graphical interface is also developed so that the functionality of the webcam, robots, and optimizations is automated. The testbeds are used to demonstrate a wavefront-enhanced, B-spline augmented virtual motion camouflage algorithm for single or multiple robots to navigate through an obstacle-dense and changing environment, while considering inter-vehicle conflicts, obstacle avoidance, nonlinear dynamics, and different constraints. In addition, it is expected that this testbed can be used to test different vehicle motion planning and control algorithms.
M.S.A.E.
Masters
Mechanical and Aerospace Engineering
Engineering and Computer Science
Aerospace Engineering; Space Systems Design and Engineering
APA, Harvard, Vancouver, ISO, and other styles
39

Schuhknecht, Felix Martin [Verfasser], and Jens [Akademischer Betreuer] Dittrich. "Closing the circle of algorithmic and system-centric database optimization : a comprehensive survey on adaptive indexing, data partitioning, and the rewiring of virtual memory / Felix Martin Schuhknecht ; Betreuer: Jens Dittrich." Saarbrücken : Saarländische Universitäts- und Landesbibliothek, 2016. http://d-nb.info/1122110634/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Vašíček, Libor. "Efektivní správa paměti ve vícevláknových aplikacích." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2008. http://www.nusl.cz/ntk/nusl-235931.

Full text
Abstract:
This thesis describes the design and implementation of effective memory management for multi-threaded applications. First, the virtual memory facilities found in current operating systems, such as Microsoft Windows and Linux, are described. Then the most frequently used memory management algorithms are explained, and their features are exploited in the design of a new memory manager. The final design includes dedicated tools for application debugging and profiling. The thesis concludes with a series of tests and an evaluation of the achieved results.
APA, Harvard, Vancouver, ISO, and other styles
41

Yung-Chang Chiu and 邱永昌. "Study of Debugging Mechanisms for Virtual Shared Memory Multiprocessor Systems." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/13408827172115456894.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Vaughan, Francis Alexander. "Implementation of distributed orthogonal persistence using virtual memory / Francis Vaughan." 1994. http://hdl.handle.net/2440/18556.

Full text
Abstract:
Includes bibliography.
246 p. : ill. ; 30 cm.
Title page, contents and abstract only. The complete thesis in print form is available from the University Library.
This thesis explores the implementation of orthogonally persistent systems that make direct use of the attributes of paged virtual memory found in the majority of conventional computing platforms. These attributes are exploited to support object movement for persistent storage to addressable memory, to aid in garbage collection, to provide the illusion of larger storage spaces than the underlying architecture allows, and to provide distribution of the persistent system. It also explores the different models of distribution, communication mechanisms between federated spaces and the problem of maintaining consistency between separate persistent spaces in a manner which ensures both a reliable and resilient computational environment.
Thesis (Ph.D.)--University of Adelaide, Dept. of Computer Science, 1995
APA, Harvard, Vancouver, ISO, and other styles
43

Lee, Chao-Hsien, and 李照賢. "Application-Navigated Virtual-Memory Management System." Thesis, 1997. http://ndltd.ncl.edu.tw/handle/57239920509272151651.

Full text
Abstract:
Doctoral dissertation
National Chiao Tung University
Department of Computer and Information Science
86
To bridge the performance gap between disks and microprocessors, conventional operating systems employ memory caches in file systems and virtual memory management systems. However, since the system kernel does not know applications' access patterns, a fixed memory-cache management scheme cannot meet all applications' needs; application performance and system throughput are thus degraded. This dissertation proposes a new virtual memory management system, the hipec system, to effectively increase application performance and system throughput. Hipec partitions the conventional virtual memory management scheme into two levels: the kernel only handles the allocation of memory cache, while the user applications are responsible for managing the allocated cache. Hipec includes two major implementations. The first is the in-kernel strategy interpreter for supporting application-navigated virtual memory management; the strategy interpreter also protects the system from misbehaved or malicious applications. The second is the kernel page-frame allocation policy, which can fairly share page frames among all running applications. In addition, two auxiliary tools are implemented to help application designers observe application access patterns, so that they can tune the caching strategies to meet applications' specific needs. Empirical evaluations show that hipec can improve application performance and system throughput.
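To make the two-level split concrete, here is a speculative sketch of what an application-supplied replacement strategy could look like: the kernel side owns the cache frames and, on a miss, asks a per-application callback which frame to evict (in a real system a guarded interpreter or validation step would sit between the two). The callback interface and all names are invented for illustration and are not hipec's actual API.

/* Hypothetical application-navigated cache sketch: the "kernel" side owns
 * the frames, the application supplies the victim-selection policy (here
 * MRU, which suits large sequential scans where LRU performs poorly). */
#include <stdio.h>

#define CACHE_FRAMES 4

typedef struct {
    int page;                 /* virtual page in this frame (-1 = free) */
    unsigned long last_use;
} frame_t;

/* Application-provided policy: evict the most recently used frame (MRU). */
static int app_pick_victim(const frame_t *frames, int n)
{
    int victim = 0;
    for (int i = 1; i < n; i++)
        if (frames[i].last_use > frames[victim].last_use)
            victim = i;
    return victim;
}

/* "Kernel" side: allocate frames, consult the application policy on misses. */
static void touch_page(frame_t *frames, int page, unsigned long now)
{
    for (int i = 0; i < CACHE_FRAMES; i++) {
        if (frames[i].page == page) { frames[i].last_use = now; return; }  /* hit  */
        if (frames[i].page == -1)   { frames[i].page = page; frames[i].last_use = now; return; }
    }
    int v = app_pick_victim(frames, CACHE_FRAMES);   /* miss: ask the application */
    printf("evict page %d for page %d\n", frames[v].page, page);
    frames[v].page = page;
    frames[v].last_use = now;
}

int main(void)
{
    frame_t frames[CACHE_FRAMES] = { {-1, 0}, {-1, 0}, {-1, 0}, {-1, 0} };
    int scan[] = { 1, 2, 3, 4, 5, 6, 1, 2 };         /* sequential scan pattern */
    for (unsigned long t = 0; t < sizeof scan / sizeof scan[0]; t++)
        touch_page(frames, scan[t], t + 1);
    return 0;
}

With this access pattern the MRU policy keeps pages 1-3 resident through the scan, which is exactly the kind of application-specific knowledge a fixed kernel policy cannot exploit.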
APA, Harvard, Vancouver, ISO, and other styles
44

Hung-Wei, Tseng. "An Energy-Efficient Virtual Memory System with Flash Memory as the Secondary Storage." 2005. http://www.cetd.com.tw/ec/thesisdetail.aspx?etdun=U0001-0207200522234300.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Tseng, Hung-Wei, and 曾宏偉. "An Energy-Efficient Virtual Memory System with Flash Memory as the Secondary Storage." Thesis, 2005. http://ndltd.ncl.edu.tw/handle/29405303533502730218.

Full text
Abstract:
Master's thesis
National Taiwan University
Graduate Institute of Computer Science and Information Engineering
93
Modern operating systems often adopt the virtual memory approach to allow physical memory to be shared among multiple tasks. Traditional virtual memory systems have been designed for decades assuming a magnetic disk as the secondary storage. Recently, flash memory has become a popular storage alternative for many portable devices, thanks to continuing improvements in its capacity and reliability and its much lower power consumption compared with mechanical hard drives. NAND flash memory is organized into blocks, and each block contains a set of pages. The characteristics of flash memory are quite different from those of a magnetic disk. Therefore, in this thesis, we revisit virtual memory system design considering the limitations imposed by flash memory. In a traditional virtual memory system, a full dirty page is written back to the secondary storage on a page fault. We found that this can result in unnecessary writes, thereby wasting energy. We propose the subpaging technique, which partitions a page into subunits that have the same size as the flash write unit (flash page); only dirty subpages are written to flash memory on a page fault. The other issue that we study in this thesis is storage cache management. Unlike traditional disk cache management, care needs to be taken to guarantee that the flash pages of a main memory page are replaced from the cache in sequence. Experimental results show that the energy reduction of the combined subpaging and caching techniques is up to 40%.
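The subpaging idea can be sketched as follows: track a dirty flag per flash-page-sized subunit of each virtual memory page and, on eviction, write back only the dirty subpages. The structure below is a simplified illustration with assumed sizes (4 KiB VM page, 512-byte flash page) and invented names; it is not the thesis' implementation.

/* Simplified subpaging sketch: a 4 KiB VM page split into 512-byte subpages
 * (one per flash write unit). Stores mark the covering subpage dirty; on
 * eviction only dirty subpages are programmed to flash, saving write energy. */
#include <stdint.h>
#include <stdio.h>

#define VM_PAGE_SIZE    4096u
#define FLASH_PAGE_SIZE 512u
#define SUBPAGES        (VM_PAGE_SIZE / FLASH_PAGE_SIZE)

typedef struct {
    uint8_t data[VM_PAGE_SIZE];
    uint8_t dirty[SUBPAGES];      /* one dirty flag per flash-sized subunit */
} vm_page_t;

static void page_store(vm_page_t *p, unsigned offset, uint8_t value)
{
    p->data[offset] = value;
    p->dirty[offset / FLASH_PAGE_SIZE] = 1;
}

/* Returns the number of flash page programs issued for this eviction. */
static unsigned evict(vm_page_t *p)
{
    unsigned writes = 0;
    for (unsigned i = 0; i < SUBPAGES; i++) {
        if (p->dirty[i]) {
            /* flash_program(lba_of(p) + i, &p->data[i * FLASH_PAGE_SIZE]); */
            p->dirty[i] = 0;
            writes++;
        }
    }
    return writes;
}

int main(void)
{
    static vm_page_t page;        /* zero-initialized: all subpages clean */
    page_store(&page, 100, 0x42); /* dirties subpage 0 */
    page_store(&page, 700, 0x17); /* dirties subpage 1 */
    printf("flash writes on eviction: %u of %u\n", evict(&page), SUBPAGES);
    return 0;
}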
APA, Harvard, Vancouver, ISO, and other styles
46

Kamala, R. "MIST : Mlgrate The Storage Too." Thesis, 2013. http://etd.iisc.ernet.in/handle/2005/2630.

Full text
Abstract:
We address the problem of migrating the local storage of desktop users to remote sites. Assuming that a network connection is maintained between the source and the destination after the migration, it becomes possible to transfer only a fraction of the storage state while operating as close to disconnected mode as possible. We have designed an approach to determine the subset of storage state to be transferred based on past accesses. We show that it is feasible to use information about the files accessed to determine clusters and hot-spots in the file system. Using the tree structure of the file system and applying an appropriate similarity measure to user accesses, we can approximate the working sets of the data accessed by the applications running at the time. Our results indicate that our technique reduces the amount of data to be copied by two orders of magnitude, bringing it into the realm of the possible.
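One way to approximate the "hot subset" idea from past accesses is to aggregate access counts up the directory tree and flag subtrees whose recent activity exceeds a threshold. The sketch below does exactly that over a toy access log; the scoring rule, the threshold, and all names are assumptions for illustration, not the clustering method used in the thesis.

/* Toy hot-spot detection sketch: count recent accesses per directory and
 * report directories responsible for more than a threshold share of all
 * accesses. These subtrees would be candidates for eager migration. */
#include <stdio.h>
#include <string.h>

#define MAX_DIRS 64

struct dir_count { char dir[128]; unsigned hits; };

static void bump(struct dir_count *tab, int *n, const char *path)
{
    char dir[128];
    strncpy(dir, path, sizeof dir - 1);
    dir[sizeof dir - 1] = '\0';
    char *slash = strrchr(dir, '/');
    if (slash) *slash = '\0';                  /* keep the directory part only */

    for (int i = 0; i < *n; i++)
        if (strcmp(tab[i].dir, dir) == 0) { tab[i].hits++; return; }
    if (*n < MAX_DIRS) {
        strcpy(tab[*n].dir, dir);
        tab[(*n)++].hits = 1;
    }
}

int main(void)
{
    const char *log[] = {                      /* stand-in for a real access trace */
        "/home/u/proj/a.c", "/home/u/proj/b.c", "/home/u/proj/a.c",
        "/home/u/music/x.mp3", "/home/u/proj/c.h", "/home/u/proj/a.c",
    };
    struct dir_count tab[MAX_DIRS];
    int n = 0;
    unsigned total = sizeof log / sizeof log[0];
    for (unsigned i = 0; i < total; i++)
        bump(tab, &n, log[i]);

    for (int i = 0; i < n; i++)                /* flag directories with >30% of accesses */
        if (tab[i].hits * 10 > total * 3)
            printf("hot subtree: %s (%u/%u accesses)\n", tab[i].dir, tab[i].hits, total);
    return 0;
}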
APA, Harvard, Vancouver, ISO, and other styles
47

Yang, Fang-ming, and 楊豐銘. "The Implementation of Virtual Memory in Embedded Micro-Kernel Operating System." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/79879766275238700192.

Full text
Abstract:
Master's thesis
National Cheng Kung University
Institute of Computer and Communication Engineering
96
Virtual memory is a memory management technique commonly used in modern operating systems. It not only allows a computer system to run software that uses more memory than is physically configured, but also provides more efficient memory utilization and strong isolation among the processes running in the system. This thesis presents a study and implementation of virtual memory management in an embedded micro-kernel operating system. Traditionally, embedded operating systems generally do not support virtual memory, due to their application-specific characteristics and concerns about performance. The goal of this thesis is therefore to provide an efficient and reliable virtual memory management mechanism. The important parts of this implementation include a page management allocator in the Zinix micro-kernel operating system, a page access permission protection mechanism with memory management unit (MMU) support, and a process swap mechanism that takes performance issues into consideration. With the wide spread of embedded system applications, it is worth considering support for virtual memory management techniques in embedded operating systems. This implementation provides an efficient and safe virtual memory management system and gives developers another choice: they can decide whether or not to use the virtual memory management functionality by configuring the system environment settings according to their particular requirements.
APA, Harvard, Vancouver, ISO, and other styles
48

Wang, Yen-Kai, and 王彥凱. "MMU Cache System and Thread Block Scheduling Enhancement for Virtual Memory Support on GPGPU." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/88587440284789983325.

Full text
Abstract:
Master's thesis
National Chiao Tung University
Institute of Computer Science and Engineering
103
As the "dark silicon" phenomenon becomes more pronounced in advanced process nodes, the performance of ICs will soon be bounded by the power budget. Research on the design of customized hardware accelerators is gradually taking the place of mainstream CPU research. The graphics processing unit (GPU) was originally developed to accelerate graphics computation; it is now evolving into a programmable, general-purpose computing unit (GPGPU). In the future, heterogeneous system architecture (HSA) will merge all computing units (CPU, GPU, DSP, etc.) so that they share the same virtual address space, simplifying programming and allowing data to be shared between these units. As a result, each unit will need an MMU to translate its virtual addresses into physical addresses. However, with the large number of memory accesses issued by these units, system performance may be impacted by the address translation process. This thesis evaluates the impact of virtual address translation on the GPU through software simulation. We propose placing private L1 TLBs and a shared L2 TLB to reduce the overhead of address translation. We analyze the correlation between block IDs and memory address traces. By collecting runtime information, we can select a better thread block scheduling strategy to achieve higher performance.
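The L1-private/L2-shared TLB organization can be sketched as a simple lookup path: each core first probes its private L1 TLB, then the shared L2 TLB, and only on a double miss triggers a page-table walk. The sizes, the direct-mapped indexing, and all names below are assumptions for illustration, not the configuration evaluated in the thesis.

/* Minimal two-level TLB lookup sketch for a GPU-like system: per-core
 * direct-mapped L1 TLBs in front of one shared L2 TLB; a miss in both
 * would go to the (omitted) page-table walker. */
#include <stdint.h>
#include <stdio.h>

#define L1_ENTRIES 16u
#define L2_ENTRIES 256u
#define PAGE_SHIFT 12u

typedef struct { uint64_t vpn; uint64_t pfn; int valid; } tlb_entry_t;

static tlb_entry_t l1[4][L1_ENTRIES];   /* one private L1 per core */
static tlb_entry_t l2[L2_ENTRIES];      /* shared L2 */

/* Returns 1 for an L1 hit, 2 for an L2 hit, 0 for a miss (page-table walk). */
static int lookup(int core, uint64_t vaddr, uint64_t *paddr)
{
    uint64_t vpn = vaddr >> PAGE_SHIFT;
    tlb_entry_t *e1 = &l1[core][vpn % L1_ENTRIES];
    if (e1->valid && e1->vpn == vpn) {
        *paddr = (e1->pfn << PAGE_SHIFT) | (vaddr & 0xFFF);
        return 1;
    }
    tlb_entry_t *e2 = &l2[vpn % L2_ENTRIES];
    if (e2->valid && e2->vpn == vpn) {  /* L2 hit: refill the private L1 */
        *e1 = *e2;
        *paddr = (e2->pfn << PAGE_SHIFT) | (vaddr & 0xFFF);
        return 2;
    }
    return 0;
}

static void insert(int core, uint64_t vpn, uint64_t pfn)
{
    tlb_entry_t e = { vpn, pfn, 1 };
    l1[core][vpn % L1_ENTRIES] = e;
    l2[vpn % L2_ENTRIES] = e;
}

int main(void)
{
    insert(0, 0x400, 0x1234);           /* core 0 maps VPN 0x400 */
    uint64_t pa;
    printf("core 0: level %d\n", lookup(0, 0x400123, &pa));  /* L1 hit            */
    printf("core 1: level %d\n", lookup(1, 0x400123, &pa));  /* L2 hit via sharing */
    return 0;
}

The second lookup illustrates why a shared L2 TLB helps when many cores of the same kernel touch the same pages: core 1 benefits from the translation that core 0 installed.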
APA, Harvard, Vancouver, ISO, and other styles
49

Hart, Simon J., and Uno R. Kodres. "Design, implementation, and evaluation of a virtual shared memory system in a multi-transputer network." Thesis, 1987. http://hdl.handle.net/10945/22733.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

FANG, MING-YI, and 方明義. "The design and evaluation of demand paged virtual memory management schemes for a multiprocessor operating system." Thesis, 1987. http://ndltd.ncl.edu.tw/handle/02636472725461067844.

Full text
APA, Harvard, Vancouver, ISO, and other styles