Academic literature on the topic 'Time-shared computer systems'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Time-shared computer systems.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Time-shared computer systems"

1

Chung, Shu, and Edward J. Haug. "Real-Time Simulation of Multibody Dynamics on Shared Memory Multiprocessors." Journal of Dynamic Systems, Measurement, and Control 115, no. 4 (1993): 627–37. http://dx.doi.org/10.1115/1.2899190.

Full text
Abstract:
This paper presents a recursive variational formulation for real-time simulation of multibody mechanical systems on shared memory parallel computers. Static scheduling algorithms are employed to evenly distribute computation on shared memory multi-processors. Based on the methods developed, a general-purpose dynamic simulation program is shown to simulate multibody systems faster than real-time, enabling operator-in-the-loop simulation of ground vehicles and robots.
APA, Harvard, Vancouver, ISO, and other styles
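The static scheduling idea in the abstract above can be illustrated with a small sketch. This is not the paper's algorithm (which is tailored to recursive multibody formulations); it is the classic longest-processing-time (LPT) heuristic for statically distributing computations with known costs as evenly as possible across processors before execution.

```python
# Hedged sketch of static scheduling: assign a fixed set of tasks to
# processors ahead of time, heaviest task first, always onto the
# currently least-loaded processor (LPT heuristic). Task costs and the
# two-processor setup below are illustrative, not from the paper.

def lpt_schedule(costs, n_procs):
    """Return (assignment, loads): task indices per processor and
    the resulting per-processor total cost."""
    loads = [0.0] * n_procs
    assignment = [[] for _ in range(n_procs)]
    for task, cost in sorted(enumerate(costs), key=lambda t: -t[1]):
        p = loads.index(min(loads))   # least-loaded processor so far
        loads[p] += cost
        assignment[p].append(task)
    return assignment, loads

assignment, loads = lpt_schedule([7, 3, 5, 4, 2, 6], n_procs=2)
assert abs(loads[0] - loads[1]) <= 1   # near-even split of total 27
```

Because the assignment is computed entirely before execution, the run-time overhead on each processor is zero, which is what makes static scheduling attractive for hard real-time simulation.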
2

Kherani, Arzad A. "Sojourn times in (discrete) time shared systems and their continuous time limits." Queueing Systems 60, no. 3-4 (2008): 171–91. http://dx.doi.org/10.1007/s11134-008-9092-7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Wang, Yao, Lijun Sun, Haibo Wang, Lavanya Gopalakrishnan, and Ronald Eaton. "Novel prioritized LRU circuits for shared cache in computer systems." Modern Physics Letters B 34, no. 23 (2020): 2050242. http://dx.doi.org/10.1142/s0217984920502425.

Full text
Abstract:
Cache sharing is critical in multi-core and multi-threading systems. It potentially delays the execution of real-time applications and makes the prediction of the worst-case execution time (WCET) of real-time applications more challenging. Prioritized caches have been demonstrated as a promising approach to address this challenge. Instead of the conventional prioritized cache schemes realized at the architecture level by using cache controllers, this work presents two prioritized least recently used (LRU) cache replacement circuits that accomplish the prioritization directly inside the cache circuits, hence significantly reducing the cache access latency. The performance, hardware, and power overheads of the proposed prioritized LRU circuits are investigated based on a 65 nm CMOS technology. Results show that the proposed circuits have very low overhead compared to conventional cache circuits. The presented techniques will lead to more effective prioritized shared cache implementations and benefit the development of high-performance real-time systems.
APA, Harvard, Vancouver, ISO, and other styles
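As an illustration of the policy such circuits implement, here is a behavioral sketch in software (not the paper's circuit-level design): on a miss in a full set, the victim is the least recently used line among those with the lowest priority, so lines belonging to real-time tasks are shielded from eviction. Class and method names are hypothetical.

```python
# Behavioral sketch of prioritized LRU replacement for one cache set.
# Lines are kept ordered from LRU to MRU; each line carries a priority.
# On eviction, only lines at the LOWEST present priority are candidates,
# and the LRU one among them is evicted.

class PrioritizedLRUSet:
    def __init__(self, ways):
        self.ways = ways
        self.lines = []  # list of (tag, priority), LRU first

    def access(self, tag, priority):
        """Return True on hit, False on miss."""
        for i, (t, _) in enumerate(self.lines):
            if t == tag:
                self.lines.append(self.lines.pop(i))  # promote to MRU
                return True
        if len(self.lines) == self.ways:              # set full: evict
            lowest = min(p for _, p in self.lines)
            victim = next(i for i, (_, p) in enumerate(self.lines)
                          if p == lowest)             # LRU at lowest prio
            self.lines.pop(victim)
        self.lines.append((tag, priority))
        return False

s = PrioritizedLRUSet(ways=2)
s.access("A", priority=1)   # high-priority (real-time) line
s.access("B", priority=0)   # low-priority line
s.access("C", priority=0)   # evicts B, not the older A
assert [t for t, _ in s.lines] == ["A", "C"]
```

Plain LRU would have evicted "A" here as the oldest line; prioritization protects it at the cost of the low-priority task's hit rate.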
4

Beltrán, Marta, Antonio Guzmán, and Jose L. Bosque. "A New CPU Availability Prediction Model for Time-Shared Systems." IEEE Transactions on Computers 57, no. 7 (2008): 865–75. http://dx.doi.org/10.1109/tc.2008.24.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Shade, E., and K. T. Narayana. "Real-Time Semantics for Shared-Variable Concurrency." Information and Computation 102, no. 1 (1993): 56–82. http://dx.doi.org/10.1006/inco.1993.1002.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Lee, Myoungjun, and Soontae Kim. "Time-sensitivity-aware shared cache architecture for multi-core embedded systems." Journal of Supercomputing 75, no. 10 (2019): 6746–76. http://dx.doi.org/10.1007/s11227-019-02891-w.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Chen, Po-Cheng, Jyh-Biau Chang, Tyng-Yeu Liang, and Ce-Kuen Shieh. "A progressive multi-layer resource reconfiguration framework for time-shared grid systems." Future Generation Computer Systems 25, no. 6 (2009): 662–73. http://dx.doi.org/10.1016/j.future.2009.01.002.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Schliecker, S., M. Negrean, and R. Ernst. "Response Time Analysis on Multicore ECUs With Shared Resources." IEEE Transactions on Industrial Informatics 5, no. 4 (2009): 402–13. http://dx.doi.org/10.1109/tii.2009.2032068.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Niu, Linwei. "Reliability-Aware Energy-Efficient Scheduling for (m, k)-Constrained Real-Time Systems Through Shared Time Slots." Microprocessors and Microsystems 77 (September 2020): 103110. http://dx.doi.org/10.1016/j.micpro.2020.103110.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Chen, Yen-Wen, and I.-Hsuan Peng. "Shared protection of lightpath with guaranteed switching time over DWDM networks." Journal of Communications and Networks 8, no. 2 (2006): 228–33. http://dx.doi.org/10.1109/jcn.2006.6182752.

Full text
APA, Harvard, Vancouver, ISO, and other styles
More sources

Dissertations / Theses on the topic "Time-shared computer systems"

1

Krishnaswamy, Vijaykumar. "Shared state management for time-sensitive distributed applications." Diss., Georgia Institute of Technology, 2001. http://hdl.handle.net/1853/8197.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Suh, Gookwon Edward. "Analytical cache models with applications to cache partitioning in time-shared systems." Thesis, Massachusetts Institute of Technology, 2001. http://hdl.handle.net/1721.1/86594.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Pugh, Calvin Renaldo. "Evaluation of in-house versus time-shared computer services utilizing the systems engineering process." Master's thesis, Virginia Polytechnic Institute and State University, 1996. http://scholar.lib.vt.edu/theses/available/etd-02022010-020319/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Jacob, Jeremy. "On shared systems." Thesis, University of Oxford, 1987. http://ora.ox.ac.uk/objects/uuid:a17b30b9-eef5-4db2-8420-6df3cf3f8175.

Full text
Abstract:
Most computing systems are shared between users of various kinds. This thesis treats such systems as mathematical objects, and investigates two of their properties: refinement and security. The first is the analysis of the conditions under which one shared system can be replaced by another, the second the determination of a measure of the information flow through a shared system. Under the heading of refinement we show what it means for one shared system to be a suitable replacement for another, both in an environment of co-operating users and in an environment of independent users. Both refinement relations are investigated, and a large example is given to demonstrate the relation for co-operating users. We show how to represent the security of a shared system as an 'inference function', and define several security properties in terms of such functions. A partial order is defined on systems, with the meaning 'at least as secure as'. We generalise inference functions to produce 'security specifications' which can be used to capture the desired degree of security in any shared system. We define what it means for a shared system to meet a security specification and indicate how implementations may be derived from their specifications in some cases. A summary of related work is given.
APA, Harvard, Vancouver, ISO, and other styles
5

Balaguer, Sandie. "Study of concurrency in real-time distributed systems." PhD thesis, École normale supérieure de Cachan - ENS Cachan, 2012. http://tel.archives-ouvertes.fr/tel-00821978.

Full text
Abstract:
This thesis is concerned with the modeling and the analysis of distributed real-time systems. In distributed systems, components evolve partly independently: concurrent actions may be performed in any order, without influencing each other, and the state reached after these actions does not depend on the order of execution. The time constraints in distributed real-time systems create complex dependencies between the components and the events that occur. So far, distributed real-time systems have not been deeply studied, and in particular the distributed aspect of these systems is often left aside. This thesis explores distributed real-time systems. Our work on distributed real-time systems is based on two formalisms, time Petri nets and networks of timed automata, and is divided into two parts. In the first part, we highlight the differences between centralized and distributed timed systems. We compare the main formalisms and their extensions, with a novel approach that focuses on the preservation of concurrency. In particular, we show how to translate a time Petri net into a network of timed automata with the same distributed behavior. We then study a concurrency-related problem: shared clocks in networks of timed automata can be problematic when one considers the implementation of a model on a multi-core architecture. We show how to avoid shared clocks while preserving the distributed behavior, when this is possible. In the second part, we focus on formalizing the dependencies between events in partial order representations of the executions of Petri nets and time Petri nets. Occurrence nets are one of these partial order representations, and their structure directly provides the causality, conflict and concurrency relations between events. However, we show that, even in the untimed case, some logical dependencies between event occurrences are not directly described by these structural relations. After having formalized these logical dependencies, we solve the following synthesis problem: from a formula that describes a set of runs, we build an associated occurrence net. Then we study the logical relations in a simplified timed setting and show that time creates complex dependencies between event occurrences. These dependencies can be used to define a canonical unfolding, for this particular timed setting.
APA, Harvard, Vancouver, ISO, and other styles
6

Akgul, Bilge Ebru Saglam. "The System-on-a-Chip Lock Cache." Diss., Georgia Institute of Technology, 2004. http://hdl.handle.net/1853/5253.

Full text
Abstract:
In this dissertation, we implement efficient lock-based synchronization by a novel, high performance, simple and scalable hardware technique and associated software for a target shared-memory multiprocessor System-on-a-Chip (SoC). The custom hardware part of our solution is provided in the form of an intellectual property (IP) hardware unit which we call the SoC Lock Cache (SoCLC). SoCLC provides effective lock hand-off by reducing on-chip memory traffic and improving performance in terms of lock latency, lock delay and bandwidth consumption. The proposed solution is independent from the memory hierarchy, cache protocol and the processor architectures used in the SoC, which enables easily applicable implementations of the SoCLC (e.g., as a reconfigurable or partially/fully custom logic), and which distinguishes SoCLC from previous approaches. Furthermore, the SoCLC mechanism has been extended to support priority inheritance with an immediate priority ceiling protocol (IPCP) implemented in hardware, which enhances the hard real-time performance of the system. Our experimental results in a four-processor SoC indicate that SoCLC can achieve up to 37% overall speedup over spin-lock and up to 48% overall speedup over MCS for a microbenchmark with false sharing. The priority inheritance implemented as part of the SoCLC hardware, on the other hand, achieves 1.43X speedup in overall execution time of a robot application when compared to the priority inheritance implementation under the Atalanta real-time operating system. Furthermore, it has been shown that with the IPCP mechanism integrated into the SoCLC, all of the tasks of the robot application could meet their deadlines (e.g., a high priority task with 250us worst case response time could complete its execution in 93us with SoCLC, however the same task missed its deadline by completing its execution in 283us without SoCLC). 
Therefore, with IPCP support, our solution can provide better real-time guarantees for real-time systems. To automate SoCLC design, we have also developed an SoCLC-generator tool, PARLAK, that generates user specified configurations of a custom SoCLC. We used PARLAK to generate SoCLCs from a version for two processors with 32 lock variables occupying 2,520 gates up to a version for fourteen processors with 256 lock variables occupying 78,240 gates.
APA, Harvard, Vancouver, ISO, and other styles
7

Krappman, Alfred. "Identifying and alleviating shared cache contention : Achieving reliability of real-time tasks on a multi-OS and multi-core system." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-207857.

Full text
Abstract:
Current architecture trends result in processors being equipped with more cores and larger shared caches. Concurrent applications on multicore processors may interfere with each other when accessing shared resources. This is especially troublesome if deadline-bound real-time tasks are running. A tool illustrating contention was developed. The tool was used to confirm the contention problem and to evaluate the developed solution. This thesis surveyed state-of-the-art approaches concerned with mitigating contention. The approaches can be categorized as requiring modifications to the operating system, requiring modifications to hardware, requiring both, or requiring neither. The approaches were also characterized by whether they focused on the source of contention or the contended-for resource. An approach involving throttling of individual cores by clock modulation and toggling of hardware prefetchers was developed and tested. The solution was demonstrably effective in reducing contention. Contention effects were not eliminated. Possible further work includes improving autonomous detection of contention and accounting for, and illustrating, contention effects involving additional contended-for resources.
APA, Harvard, Vancouver, ISO, and other styles
8

Nagar, Kartik. "Precise Analysis of Private And Shared Caches for Tight WCET Estimates." Thesis, 2016. http://etd.iisc.ernet.in/handle/2005/2742.

Full text
Abstract:
Worst Case Execution Time (WCET) is an important metric for programs running on real-time systems, and finding precise estimates of a program’s WCET is crucial to avoid over-allocation and wastage of hardware resources and to improve the schedulability of task sets. Hardware caches have a major impact on a program’s execution time, and accurate estimation of a program’s cache behavior generally leads to significant reduction of its estimated WCET. However, the cache behavior of an access cannot be determined in isolation, since it depends on the access history, and in multi-path programs, the sequence of accesses made to the cache is not fixed. Hence, the same access can exhibit different cache behavior in different execution instances. This issue is further exacerbated in shared caches in a multi-core architecture, where interfering accesses from co-running programs on other cores can arrive at any time and modify the cache state. Further, cache analysis aimed towards WCET estimation should be provably safe, in that the estimated WCET should always exceed the actual execution time across all execution instances. Faced with such contradicting requirements, previous approaches to cache analysis try to find memory accesses in a program which are guaranteed to hit the cache, irrespective of the program input, or the interferences from other co-running programs in case of a shared cache. To do so, they find the worst-case cache behavior for every individual memory access, analyzing the program (and interferences to a shared cache) to find whether there are execution instances where an access can suffer a cache miss. However, this approach loses out in making more precise predictions of private cache behavior which can be safely used for WCET estimation, and is significantly imprecise for shared cache analysis, where it is often impossible to guarantee that an access always hits the cache.
In this work, we take a fundamentally different approach to cache analysis, by (1) trying to find worst-case behavior of groups of cache accesses, and (2) trying to find the exact cache behavior in the worst-case program execution instance, which is the execution instance with the maximum execution time. For shared caches, we propose the Worst Case Interference Placement (WCIP) technique, which finds the worst-case timing of interfering accesses that would cause the maximum number of cache misses on the worst case execution path of the program. We first use Integer Linear Programming (ILP) to find an exact solution to the WCIP problem. However, this approach does not scale well for large programs, and so we investigate the WCIP problem in detail and prove that it is NP-Hard. In the process, we discover that the source of hardness of the WCIP problem lies in finding the worst case execution path which would exhibit the maximum execution time in the presence of interferences. We use this observation to propose an approximate algorithm for performing WCIP, which bypasses the hard problem of finding the worst case execution path by simply assuming that all cache accesses made by the program occur on a single path. This allows us to use a simple greedy algorithm to distribute the interfering accesses by choosing those cache accesses which could be most affected by interferences. The greedy algorithm also guarantees that the increase in WCET due to interferences is linear in the number of interferences. Experimentally, we show that WCIP provides substantial precision improvement in the final WCET over previous approaches to shared cache analysis, and the approximate algorithm almost matches the precision of the ILP-based approach, while being considerably faster. For private caches, we discover multiple scenarios where hit-miss predictions made by traditional Abstract Interpretation-based approaches are not sufficient to fully capture cache behavior for WCET estimation. 
We introduce the concept of cache miss paths, which are abstractions of program paths along which an access can suffer a cache miss. We propose an ILP-based approach which uses cache miss paths to find the exact cache behavior in the worst-case execution instance of the program. However, the ILP-based approach needs information about the worst-case execution path to predict the cache behavior, and hence it is difficult to integrate it with other micro-architectural analysis. We then show that most of the precision improvement of the ILP-based approach can be recovered without any knowledge of the worst-case execution path, by a careful analysis of the cache miss paths themselves. In particular, we can use cache miss paths to find the worst-case behavior of groups of cache accesses. Further, we can find upper bounds on the maximum number of times that cache accesses inside loops can exhibit worst-case behavior. This results in a scalable, precise method for performing private cache analysis which can be easily integrated with other micro-architectural analysis.
APA, Harvard, Vancouver, ISO, and other styles
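The greedy step of the approximate WCIP algorithm described above can be sketched under a simplifying assumption not taken from the thesis: each guaranteed cache hit has a "slack", the number of interfering accesses required to evict its line. With a fixed interference budget, converting the smallest-slack hits first maximizes the number of extra misses.

```python
# Illustrative sketch of the greedy idea behind Worst-Case Interference
# Placement (WCIP): all accesses are assumed to lie on a single path,
# and each hit's slack is the number of interferences needed to turn it
# into a miss. Spending the budget on the cheapest conversions first
# yields the maximum number of extra misses.

def greedy_wcip(slacks, budget):
    """slacks: per-hit interference counts needed to force a miss.
    Returns the maximum number of extra misses the budget can cause."""
    misses = 0
    for s in sorted(slacks):          # cheapest conversions first
        if s <= budget:
            budget -= s
            misses += 1
        else:
            break
    return misses

# Four hits needing 1, 2, 3, and 5 interferences; budget of 6.
assert greedy_wcip([3, 1, 5, 2], budget=6) == 3   # 1 + 2 + 3 <= 6
```

The appeal of the greedy scheme, as the abstract notes, is that it sidesteps the NP-hard search for the worst-case execution path while keeping the WCET increase linear in the number of interferences.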
9

Masson, Constantin. "Framework for Real-time collaboration on extensive Data Types using Strong Eventual Consistency." Thesis, 2018. http://hdl.handle.net/1866/22532.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Time-shared computer systems"

1

Nemati, Farhang, Thomas Nolte, and Moris Behnam. "Partitioning Real-Time Systems on Multiprocessors with Shared Resources." In Lecture Notes in Computer Science. Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-17653-1_20.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Fortin, Marie, Anca Muscholl, and Igor Walukiewicz. "Model-Checking Linear-Time Properties of Parametrized Asynchronous Shared-Memory Pushdown Systems." In Computer Aided Verification. Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-63390-9_9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Kuehn, Paul J., and Imran Nawab. "Analysis of Distributed Real-Time Control Systems with Shared Network Infrastructures." In Lecture Notes in Computer Science. Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-30523-9_18.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Ioannidis, Sotiris, and Sandhya Dwarkadas. "Compiler and Run-Time Support for Adaptive Load Balancing in Software Distributed Shared Memory Systems." In Languages, Compilers, and Run-Time Systems for Scalable Computers. Springer Berlin Heidelberg, 1998. http://dx.doi.org/10.1007/3-540-49530-4_8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Brooks, Laurence, Christopher J. Davis, and Mark Lycett. "Investigating the Interdependence of Organisations and Information Systems." In Issues and Trends in Technology and Human Interaction. IGI Global, 2007. http://dx.doi.org/10.4018/978-1-59904-268-8.ch013.

Full text
Abstract:
Using Personal Construct Theory (PCT) as an underlying conceptual frame, this chapter explores the interdependence of organisations and information systems. Two PCT related techniques - Repertory Grid Analysis (RepGrid) and Cognitive Mapping (CM) - were used to investigate the dynamics of this interaction. Changing business models and information technologies were investigated in two distinct work settings: in each case, the technique contributed substantial insight into the role of information systems in that context. The analysis shows that the techniques have matured to a stage where they provide a basis for improved understanding of the organisational complexities related to information technologies. The techniques focus on the social construction of meaning by articulating and interpreting the discourse that surrounds the development, implementation and use of information technology in organisations. It is these ongoing discourses that create the dynamic complexities in the organisations, as they ‘play’ themselves out, and develop, over time. Current research has articulated and improved awareness of the issues and concerns that surround computer-based information systems (CBIS). Despite the differing contexts and work processes, the findings from each case suggest that the techniques facilitated social construction and increased the conceptual agility of managers, leading to improved integration of organisational processes and technology. The chapter concludes by drawing out the idea of the development of a conceptual model to act as a framework for the analysis of cognitive schema and shared understanding. In developing and participating in this shared understanding both organisational and technological communities could increase their awareness of each other’s issues and concerns, thereby enabling them to improve the conceptual agility of the organisation.
APA, Harvard, Vancouver, ISO, and other styles
6

Francischetti-Corrêa, Moacyr. "Molecular Visualization with Supports of Interaction, Immersion, and Collaboration among Geographically Separated Research Groups." In Information Systems and Technologies for Enhancing Health and Social Care. IGI Global, 2013. http://dx.doi.org/10.4018/978-1-4666-3667-5.ch017.

Full text
Abstract:
In the search for new drug and medication discovery, especially related to drug-receptor interaction, interaction among researchers is essential, whether they are in the same room or dispersed throughout the world. Interaction in an online environment, through a shared view of the molecule and the exchange of information and ideas via text and/or voice, enables researchers to discuss aspects of the molecule under study, increasing the chances of identifying the new compound. The following sections present an architecture that uses concepts of Distributed Multiuser Virtual Reality, Computer-Supported Collaborative Work, and world-wide network communication techniques to create a high-performance system that allows real-time interaction among various geographically dispersed research groups studying molecular visualization and using hardware systems ranging from desktops to immersive systems such as CAVEs. Its construction was made possible by defining a structure based on servers local to each group, which communicate with each other over a remote network, and by creating a protocol for communication among these servers that seeks agility to minimize the negative effects of packet loss and delivery delay, problems characteristic of the Internet.
APA, Harvard, Vancouver, ISO, and other styles
7

Freitas, Sarah, and Mark Levene. "Spam." In Encyclopedia of Human Computer Interaction. IGI Global, 2006. http://dx.doi.org/10.4018/978-1-59140-562-7.ch082.

Full text
Abstract:
With the advent of the electronic mail system in the 1970s, a new opportunity for direct marketing using unsolicited electronic mail became apparent. In 1978, Gary Thuerk compiled a list of those on the Arpanet and then sent out a huge mailing publicising Digital Equipment Corporation (DEC—now Compaq) systems. The reaction from the Defense Communications Agency (DCA), who ran Arpanet, was very negative, and it was this negative reaction that ensured that it was a long time before unsolicited e-mail was used again (Templeton, 2003). As long as the U.S. government controlled a major part of the backbone, most forms of commercial activity were forbidden (Hayes, 2003). However, in 1993, the Internet Network Information Center was privatized, and with no central government controls, spam, as it is now called, came into wider use. The term spam was taken from Monty Python's Flying Circus (a UK comedy group) and their comedy skit that featured the ironic spam song sung in praise of spam (luncheon meat)—“spam, spam, spam, lovely spam”—and it came to mean mail that was unsolicited. Conversely, the term ham came to mean e-mail that was wanted. Brad Templeton, a UseNet pioneer and chair of the Electronic Frontier Foundation, has traced the first usage of the term spam back to MUDs (Multi User Dungeons), or real-time multi-person shared environments, and the MUD community. These groups introduced the term spam to the early chat rooms (Internet Relay Chats). The first major UseNet (the world’s largest online conferencing system) spam, sent in January 1994, was a religious posting: “Global alert for all: Jesus is coming soon.” The term spam was more broadly popularised in April 1994, when two lawyers, Canter and Siegel from Arizona, posted a message that advertised their information and legal services for immigrants applying for the U.S. Green Card scheme.
The message was posted to every newsgroup on UseNet, and after this incident, the term spam became synonymous with junk or unsolicited e-mail. Spam spread quickly among the UseNet groups who were easy targets for spammers simply because the e-mail addresses of members were widely available (Templeton, 2003).
APA, Harvard, Vancouver, ISO, and other styles
8

Arthur, W. Brian. "Cognition: The Black Box of Economics." In Perspectives on Adaptation in Natural and Artificial Systems. Oxford University Press, 2005. http://dx.doi.org/10.1093/oso/9780195162929.003.0021.

Full text
Abstract:
John Holland's ideas have always been marked by a deep instinct for the real—for what works. Thus, in his design of computer algorithms, he avoids mathematical formalisms and goes instead to deeper sources—to mechanisms drawn from biology. In his investigation of human cognitive thinking, he avoids frameworks based on deduction, logic, and choices over closed sets; and goes instead to induction, generative creation, and choices over open-ended possibilities. Running through all Holland's work, in fact, is an instinct for the generative and for the open-ended. Holland's worlds are ones where new entities are constantly created to be tested in the environment, and where these are not drawn from any closed and labeled collection of predetermined possibilities. This makes his science algorithmic rather than analytical, evolutionary rather than equilibrium-based, and novelty-generating rather than static. It makes his science, in a word, realistic. Insofar as the standard sciences are analytical, equilibrium-based, and nongenerative in their possibilities, Holland's thinking offers them a different approach. Here I want to see what a John Holland approach has to offer economics. My involvement with Holland's ideas began in the late summer of 1987. He and I were the first Visiting Fellows of the newly formed Santa Fe Institute, and we shared a house. I had taken up Holland's fascination with evolutionary algorithms, and, by 1988, John and I were attempting to design what was to become the first artificial stock market. It took me some time to realize that John Holland had thought deeply about a great deal more than evolutionary algorithms, and that he had interesting ideas also in psychology. Over the next thirteen years, I found myself applying Holland's thinking about cognition to problems within economics.
At first it appeared that Holland's approach—based largely on induction—applied best to specific problems, and I tried to think of the simplest possible problem in economics that would illustrate the need for induction. The result was my El Farol bar problem. Later I began to realize that economics does not need inductive approaches to specific problems as much as it needs to reexamine the foundations of its assumptions about decision making.
APA, Harvard, Vancouver, ISO, and other styles
9

Blum, Bruce I. "Adaptive Design." In Beyond Programming. Oxford University Press, 1996. http://dx.doi.org/10.1093/oso/9780195091601.003.0018.

Full text
Abstract:
Finally, I am about to report on my own research. In the material that preceded this chapter I have tried to present the work of others. My role has been closer to that of a journalist than a scientist. Because I have covered so much ground, my presentations may be criticized as superficial; the chapters left more unanswered than they have answered. Nevertheless, by the time the reader has reached this point, we should have a shared perception of the design process and its rational foundations. Perhaps I could have accomplished this with fewer pages or with greater focus. I did not choose that path because I wanted the reader to build a perspective of her own, a perspective in which my model of adaptive design (as well as many other alternative solutions) would seem reasonable. The environment for adaptive design that I describe in this chapter is quite old. Work began on the project in 1980, and the environment was frozen in 1982. My software engineering research career began in 1985. Prior to that time I was paid to develop useful software products (i.e., applications that satisfy the sponsor’s needs). Since 1985 I have been supported by research funds to deliver research products (i.e., new and relevant knowledge). Of course, there is no clear distinction between my practitioner and research activities, and my research—despite its change in paradigm—has always had a strong pragmatic bias. Many of my software engineering research papers were published when I was developing applications, and my work at the Johns Hopkins Medical Institutions was accepted as research in medical informatics (i.e., how computer technology can assist the practice of medicine and the delivery of care). The approach described in this chapter emerged from attempts to improve the application of computers in medicine, and this is how I finally came to understand software development—from the perspective of complex, life-critical, open interactive information systems.
There is relatively little in this chapter that has not already been published. The chapter integrates what is available in a number of overlapping (and generally unreferenced) papers. I began reporting on my approach before it was fully operational (Blum 1981), but that is not uncommon in this profession.
APA, Harvard, Vancouver, ISO, and other styles
10

Wang, Yingxu. "The Theoretical Framework of Cognitive Informatics." In Human Computer Interaction. IGI Global, 2009. http://dx.doi.org/10.4018/978-1-87828-991-9.ch004.

Full text
Abstract:
Cognitive Informatics (CI) is a transdisciplinary enquiry of the internal information processing mechanisms and processes of the brain and natural intelligence shared by almost all science and engineering disciplines. This article presents an intensive review of the new field of CI. The structure of the theoretical framework of CI is described encompassing the Layered Reference Model of the Brain (LRMB), the OAR model of information representation, Natural Intelligence (NI) vs. Artificial Intelligence (AI), Autonomic Computing (AC) vs. imperative computing, CI laws of software, the mechanism of human perception processes, the cognitive processes of formal inferences, and the formal knowledge system. Three types of new structures of mathematics, Concept Algebra (CA), Real-Time Process Algebra (RTPA), and System Algebra (SA), are created to enable rigorous treatment of cognitive processes of the brain as well as knowledge representation and manipulation in a formal and coherent framework. A wide range of applications of CI in cognitive psychology, computing, knowledge engineering, and software engineering has been identified and discussed.
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Time-shared computer systems"

1

Wang, Yao, Lavanya Gopalakrishnan, Haibo Wang, and Ronald Eaton. "Design of prioritized LRU circuit for shared cache in real-time computer systems." In 2016 13th IEEE International Conference on Solid-State and Integrated Circuit Technology (ICSICT). IEEE, 2016. http://dx.doi.org/10.1109/icsict.2016.7998982.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Lee, Myoungjun, and Soontae Kim. "Performance-controllable shared cache architecture for multi-core soft real-time systems." In 2013 IEEE 31st International Conference on Computer Design (ICCD). IEEE, 2013. http://dx.doi.org/10.1109/iccd.2013.6657097.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Pellegrini, Alessandro, and Francesco Quaglia. "Wait-Free Global Virtual Time Computation in Shared Memory TimeWarp Systems." In 2014 26th International Symposium on Computer Architecture and High Performance Computing (SBAC-PAD). IEEE, 2014. http://dx.doi.org/10.1109/sbac-pad.2014.38.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Marshall, Adele, and Aleksandar Novakovic. "Analysing the Performance of a Real-Time Healthcare 4.0 System using Shared Frailty Time to Event Models." In 2019 IEEE 32nd International Symposium on Computer-Based Medical Systems (CBMS). IEEE, 2019. http://dx.doi.org/10.1109/cbms.2019.00129.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Green, Scott A., Mark Billinghurst, XiaoQi Chen, and J. Geoffrey Chase. "Human Robot Collaboration: An Augmented Reality Approach—A Literature Review and Analysis." In ASME 2007 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. ASMEDC, 2007. http://dx.doi.org/10.1115/detc2007-34227.

Full text
Abstract:
Future space exploration will demand the cultivation of human-robotic systems; however, little attention has been paid to the development of human-robot teams. Current methods for autonomous plan creation are often complex and difficult to use, so a system is needed that enables humans and robotic systems to collaborate naturally and effectively. Effective collaboration takes place when the participants are able to communicate in a natural and effective manner. Grounding (the common understanding between conversational participants), shared spatial referencing, and situational awareness are crucial components of communication and collaboration. This paper briefly reviews the fields of human-robot interaction and Augmented Reality (AR), the overlaying of computer graphics onto the real worldview. The strengths of AR are discussed, along with how they might be used for more effective human-robot collaboration. A description is then given of an architecture we have developed that uses AR as a means for real-time understanding of the shared spatial scene. This architecture enables grounding and enhances situational awareness, thus laying the necessary groundwork for natural and effective human-robot collaboration.
APA, Harvard, Vancouver, ISO, and other styles
6

Sharma, Naveen, and Paul S. Wang. "The PIER Parallel FEA Program Generator." In ASME 1993 International Computers in Engineering Conference and Exposition. American Society of Mechanical Engineers, 1993. http://dx.doi.org/10.1115/cie1993-0036.

Full text
Abstract:
In this paper we describe a coupled symbolic-numeric approach for solving PDE-based mathematical models on sequential and parallel computers. PIER, an experimental software system that we are developing, synthesizes F77 subroutines for finite element modeling directly from very high-level user input specifications. The system is being developed in Common Lisp and uses the MAXIMA computer algebra system for symbolic mathematical computations. The PIER input syntax provides high-level statements to specify finite element discretization and methods for solving systems of equations. The user composes the finite element analysis process using these statements along with MAXIMA input syntax and F77 statements. Symbolic quantities for element formulation, such as shape functions and element equations, are derived automatically. The input model characteristics, derived formulae, desired solution methods, and target machine knowledge are then used to generate numerical code for the FEA solution steps. The benefits of this approach include: 1) a substantial reduction in the time and effort required to solve mathematical models, 2) the ability to solve models in higher dimensions, and 3) automatic retargeting of numeric computations to multiple parallel architectures. Currently, we are applying the techniques developed in our research to the numeric solution of problems in computational liquid crystal physics and the theory of elasticity. A Sequent shared-memory multiprocessor is the current target parallel computer.
APA, Harvard, Vancouver, ISO, and other styles
7

Jun, Seung Kook, and Venkat N. Krovi. "The Smart Car Project: Development and Implementation of a Modular Scaled Test-Bed." In ASME 2003 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. ASMEDC, 2003. http://dx.doi.org/10.1115/detc2003/cie-48258.

Full text
Abstract:
In this paper, we investigate the development, implementation, and testing of an inexpensive scaled-prototype “Smart Car Test-Bed”. The test-bed consists of a commercially available radio-control (RC) truck retrofitted with a PC/104-based computer, various embedded sensor and actuator subsystems, and multiple modes of communication (radio frequency (RF) and IEEE 802.11b wireless Ethernet). The overall goal of our work is the creation of an inexpensive test-bed equipped with a real-time mediated control system to enhance the overall system autonomy and robustness. This test-bed enables us to study several concepts, including: (i) mediation of human user control of complex robot systems; (ii) multi-user shared teleoperation; and (iii) robustness of the control in the presence of varying grades of communication. These issues are pertinent to a number of current and future generations of military/civilian systems.
APA, Harvard, Vancouver, ISO, and other styles
8

Hofbauer, John. "Solving Various Train Approach Speeds to Highway Crossings Using Innovative Technologies." In 2018 Joint Rail Conference. American Society of Mechanical Engineers, 2018. http://dx.doi.org/10.1115/jrc2018-6266.

Full text
Abstract:
The use of cleaner-energy (zero-emissions) transportation has become a key focus in North America, even within rail transportation. Within North America, the migration from diesel to electric locomotives, utilizing overhead catenary systems with voltages in the 25 kV range for passenger trains, has become the standard. In addition, as “Shared-Use Rail Corridors” have become more prevalent in North America (USA and Canada), Constant Warning Time Devices (CWTD) based on a change of inductance in the rail have become less reliable within electrified railroads. With shared-use track, it is understood that a difference exists between freight and passenger train speeds, so finding other methods to detect trains and determine the correct approach times becomes a priority. Implementing Computer-Based Train Control (CBTC) systems or Positive Train Control (PTC) technology can mitigate the problem if they communicate with, or request activation of, highway crossings. But in locations where PTC is not being installed, or in Canada where it is not required, other methods need to be explored. This paper will: 1. review the existing systems being deployed; 2. evaluate the deployed systems’ effectiveness; 3. test and record data using various innovative technologies, including axle counters that determine the speed and acceleration (+ / −) of an approaching train; and 4. draw conclusions on integrating new axle counter technologies with existing track circuits.
APA, Harvard, Vancouver, ISO, and other styles
9

Tumkor, Serdar, Mingshao Zhang, Zhou Zhang, Yizhe Chang, Sven K. Esche, and Constantin Chassapis. "Integration of a Real-Time Remote Experiment Into a Multi-Player Game Laboratory Environment." In ASME 2012 International Mechanical Engineering Congress and Exposition. American Society of Mechanical Engineers, 2012. http://dx.doi.org/10.1115/imece2012-86944.

Full text
Abstract:
While real-time remote experiments have been used in engineering and science education for over a decade, more recently virtual learning environments based on game systems have been explored for their potential usage in educational laboratories. However, combining the advantages of both these approaches and integrating them into an effective learning environment has not been reported yet. One of the challenges in creating such a combination is to overcome the barriers between real and virtual systems, i.e. to select compatible platforms, to achieve an efficient mapping between the real world and the virtual environment, and to arrange for efficient communications between the different system components. This paper will present a pilot implementation of a multi-player game-based virtual laboratory environment that is linked to the remote experimental setup of an air flow rig. This system is designed for a junior-level mechanical engineering laboratory on fluid mechanics. In order to integrate this remote laboratory setup into the virtual laboratory environment, an existing remote laboratory architecture had to be redesigned. The integrated virtual laboratory platform consists of two main parts, namely an actual physical experimental device controlled by a remote controller and a virtual laboratory environment that was implemented using the ‘Source’ game engine, which forms the basis of the commercially available computer game ‘Half-Life 2’ in conjunction with ‘Garry’s Mod’ (GM). The system implemented involves a local device controller that exchanges data in the form of shared variables and Dynamic Link Library (DLL) files with the virtual laboratory environment, thus establishing the control of real physical experiments from inside the virtual laboratory environment. The application of a combination of C++ code, Lua scripts [1] and LabVIEW Virtual Instruments makes the platform very flexible and expandable.
This paper will present the architecture of this platform and discuss the general benefits of virtual environments that are linked with real physical devices.
APA, Harvard, Vancouver, ISO, and other styles
10

Xiao-Jian, Yi, Shi Jian, Dong Hai-Ping, and Lai Yue-Hua. "Reliability Analysis of Repairable System With Multiple Fault Modes Based on GO Methodology." In ASME 2014 International Mechanical Engineering Congress and Exposition. American Society of Mechanical Engineers, 2014. http://dx.doi.org/10.1115/imece2014-36198.

Full text
Abstract:
GO methodology is a success-oriented method for system reliability analysis. Repairable systems contain components with multiple fault modes, and the existing GO method cannot readily be used for reliability analysis of such systems. A new GO method for reliability analysis of repairable systems with multiple fault modes is presented in this paper. For quantitative reliability analysis, formulas for the reliability parameters of the operators used to describe components with multiple fault modes in repairable systems are derived based on Markov process theory. Qualitative reliability analysis of repairable systems with multiple fault modes is conducted by combining the existing GO method with the Fussell-Vesely method. This new GO method is applied for the first time in the reliability analysis of a Hydraulic Transmission Oil Supply System (HTOSS) of a Power-Shift Steering Transmission under high-speed conditions. Firstly, the operator type and fault modes of each component are determined through systematic analysis. Secondly, the GO model of the system is built, and the availability of each component is computed with the equations derived in this paper. Then, the success probability of the system is calculated by the direct algorithm, the modified algorithm with shared signals, and the exact algorithm with shared signals, and all system minimum cut sets containing all fault modes are obtained using the new GO method. Finally, compared with Fault Tree Analysis and Monte Carlo simulation, the results show that this new GO method is correct and suitable for reliability analysis of repairable systems with multiple fault modes.
APA, Harvard, Vancouver, ISO, and other styles