Scientific literature on the topic "Parallel programs"

Create a correct reference in APA, MLA, Chicago, Harvard, and several other citation styles

Choose a source:

Consult the thematic lists of journal articles, books, theses, conference papers, and other scholarly sources on the topic "Parallel programs".

Next to each source in the list of references there is an "Add to bibliography" button. Click on it, and we will automatically generate the bibliographic reference for the chosen source in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the scholarly publication as a PDF and read its abstract online whenever this information is included in the metadata.

Journal articles on the topic "Parallel programs"

1. Rubin, Robert, Larry Rudolph, and Dror Zernik. "Debugging parallel programs in parallel." ACM SIGPLAN Notices 24, no. 1 (January 3, 1989): 216–25. http://dx.doi.org/10.1145/69215.69236.
2. Prakash, S., E. Deelman, and R. Bagrodia. "Asynchronous parallel simulation of parallel programs." IEEE Transactions on Software Engineering 26, no. 5 (May 2000): 385–400. http://dx.doi.org/10.1109/32.846297.
3. Sridharan, Srinath, Gagan Gupta, and Gurindar S. Sohi. "Adaptive, efficient, parallel execution of parallel programs." ACM SIGPLAN Notices 49, no. 6 (June 5, 2014): 169–80. http://dx.doi.org/10.1145/2666356.2594292.
4. Hoey, James, Irek Ulidowski, and Shoji Yuen. "Reversing Imperative Parallel Programs." Electronic Proceedings in Theoretical Computer Science 255 (August 31, 2017): 51–66. http://dx.doi.org/10.4204/eptcs.255.4.
5. Saman, MD Yazid, and David J. Evans. "Verification of parallel programs." International Journal of Computer Mathematics 56, no. 1-2 (January 1995): 23–37. http://dx.doi.org/10.1080/00207169508804385.
6. Albright, Larry, Jay Alan Jackson, and Joan Francioni. "Auralization of Parallel Programs." ACM SIGCHI Bulletin 23, no. 4 (October 1991): 86–87. http://dx.doi.org/10.1145/126729.1056083.
7. Psarris, Kleanthis. "Program analysis techniques for transforming programs for parallel execution." Parallel Computing 28, no. 3 (March 2002): 455–69. http://dx.doi.org/10.1016/s0167-8191(01)00132-6.
8. Martins, Francisco, Vasco Thudichum Vasconcelos, and Hans Hüttel. "Inferring Types for Parallel Programs." Electronic Proceedings in Theoretical Computer Science 246 (April 8, 2017): 28–36. http://dx.doi.org/10.4204/eptcs.246.6.
9. Aschieri, Federico, Agata Ciabattoni, and Francesco Antonio Genco. "Classical Proofs as Parallel Programs." Electronic Proceedings in Theoretical Computer Science 277 (September 7, 2018): 43–57. http://dx.doi.org/10.4204/eptcs.277.4.
10. Terekhov, Andrey N., Alexandr A. Golovan, and Mikhail A. Terekhov. "Parallel Programs in RuC Project." Computer Tools in Education, no. 2 (April 27, 2018): 25–30. http://dx.doi.org/10.32603/2071-2340-2018-2-25-30.
More sources

Theses on the topic "Parallel programs"

1. Smith, Edmund. "Parallel solution of linear programs." Thesis, University of Edinburgh, 2013. http://hdl.handle.net/1842/8833.
Abstract: The factors limiting the performance of computer software periodically undergo sudden shifts, resulting from technological progress, and these shifts can have profound implications for the design of high performance codes. At the present time, the speed with which hardware can execute a single stream of instructions has reached a plateau. It is now the number of instruction streams that may be executed concurrently which underpins estimates of compute power, and with this change, a critical limitation on the performance of software has come to be the degree to which it can be parallelised. The research in this thesis is concerned with the means by which codes for linear programming may be adapted to this new hardware. For the most part, it is codes implementing the simplex method which will be discussed, though these have typically lower performance for single solves than those implementing interior point methods. However, the ability of the simplex method to rapidly re-solve a problem makes it at present indispensable as a subroutine for mixed integer programming. The long history of the simplex method as a practical technique, with applications in many industries and government, has led to such codes reaching a great level of sophistication. It would be unexpected in a research project such as this one to match the performance of top commercial codes with many years of development behind them. The simplex codes described in this thesis are, however, able to solve real problems of small to moderate size, rather than being confined to random or otherwise artificially generated instances.
The remainder of this thesis is structured as follows. The rest of this chapter gives a brief overview of the essential elements of modern parallel hardware and of the linear programming problem. Both the simplex method and interior point methods are discussed, along with some of the key algorithmic enhancements required for such systems to solve real-world problems. Some background on the parallelisation of both types of code is given. The next chapter describes two standard simplex codes designed to exploit the current generation of hardware. i6 is a parallel standard simplex solver capable of being applied to a range of real problems, and showing exceptional performance for dense, square programs. i8 is also a parallel, standard simplex solver, but now implemented for graphics processing units (GPUs).
2. D'Paola, Oscar Naim. "Performance visualization of parallel programs." Thesis, University of Southampton, 1995. https://eprints.soton.ac.uk/365532/.
3. Busvine, David John. "Detecting parallel structures in functional programs." Thesis, Heriot-Watt University, 1993. http://hdl.handle.net/10399/1415.
4. Justo, George Roger Ribeiro. "Configuration-oriented development of parallel programs." Thesis, University of Kent, 1993. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.333965.
5. Mukherjee, Joy. "A Runtime Framework for Parallel Programs." Ph.D. diss., Virginia Tech, 2006. http://hdl.handle.net/10919/28756.
Abstract: This dissertation proposes the Weaves runtime framework for the execution of large scale parallel programs over lightweight intra-process threads. The goal of the Weaves framework is to help process-based legacy parallel programs exploit the scalability of threads without any modifications. The framework separates global variables used by identical, but independent, threads of legacy parallel programs without resorting to thread-based re-programming. At the same time, it also facilitates low-overhead collaboration among threads of a legacy parallel program through multi-granular selective sharing of global variables. Applications that follow the tenets of the Weaves framework can load multiple identical, but independent, copies of arbitrary object files within a single process. They can compose the runtime images of these object files in graph-like ways and run intra-process threads through them to realize various degrees of multi-granular selective sharing or separation of global variables among the threads. Using direct runtime control over the resolution of individual references to functions and variables, they can also manipulate program composition at fine granularities. Most importantly, the Weaves framework does not entail any modifications to either the source codes or the native codes of the object files. The framework is completely transparent. Results from experiments with a real-world process-based parallel application show that the framework can correctly execute a thousand parallel threads containing non-threadsafe global variables on a single machine - nearly twice as many as the traditional process-based approach can - without any code modifications. On increasing the number of machines, the application experiences super-linear speedup, which illustrates scalability.
Results from another similar application, chosen from a different software area to emphasize the breadth of this research, show that the framework's facilities for low-overhead collaboration among parallel threads allow for significantly greater scales of achievable parallelism than technologies for inter-process collaboration allow. Ultimately, larger scales of parallelism enable more accurate software modeling of real-world parallel systems, such as computer networks and multi-physics natural phenomena.
6. Hinz, Peter. "Visualizing the performance of parallel programs." Master's thesis, University of Cape Town, 1996. http://hdl.handle.net/11427/16141. (Bibliography: pages 110-115.)
Abstract: The performance analysis of parallel programs is a complex task, particularly if the program has to be efficient over a wide range of parallel machines. We have designed a performance analysis system called Chiron that uses scientific visualization techniques to guide and help the user in performance analysis activities. The aim of Chiron is to give the user full control over what section of the data he/she wants to investigate in detail. Chiron uses interactive three-dimensional graphics techniques to display large amounts of data in a compact and easy to understand/conceptualize way. The system assists in the tracking of performance bottlenecks by showing data in 10 different views and allowing the user to interact with the data. In this thesis the design and implementation of Chiron are described, and its effectiveness is illustrated by means of three case studies.
7. Hayashi, Yasushi. "Shape-based cost analysis of skeletal parallel programs." Thesis, University of Edinburgh, 2001. http://hdl.handle.net/1842/14029.
Abstract: This work presents an automatic cost-analysis system for an implicitly parallel skeletal programming language. Although deducing interesting dynamic characteristics of parallel programs (and in particular, run time) is well known to be an intractable problem in the general case, it can be alleviated by placing restrictions upon the programs which can be expressed. By combining two research threads, the “skeletal” and “shapely” paradigms which take this route, we produce a completely automated, computation and communication sensitive cost analysis system. This builds on earlier work in the area by quantifying communication as well as computation costs, with the former being derived for the Bulk Synchronous Parallel (BSP) model. We present details of our shapely skeletal language and its BSP implementation strategy together with an account of the analysis mechanism by which program behaviour information (such as shape and cost) is statically deduced. This information can be used at compile-time to optimise a BSP implementation and to analyse computation and communication costs. The analysis has been implemented in Haskell. We consider different algorithms expressed in our language for some example problems and illustrate each BSP implementation, contrasting the analysis of their efficiency by traditional, intuitive methods with that achieved by our cost calculator. The accuracy of cost predictions by our cost calculator against the run time of real parallel programs is tested experimentally. Previous shape-based cost analysis required all elements of a vector (our nestable bulk data structure) to have the same shape. We partially relax this strict requirement on data structure regularity by introducing new shape expressions in our analysis framework. We demonstrate that this allows us to achieve the first automated analysis of a complete derivation, the well known maximum segment sum algorithm of Skillicorn and Cai.
8. Wei, Jiesheng. "Hardware error detection in multicore parallel programs." Thesis, University of British Columbia, 2012. http://hdl.handle.net/2429/42961.
Abstract: The scaling of silicon devices has exacerbated the unreliability of modern computer systems, and power constraints have necessitated the involvement of software in hardware error detection. Simultaneously, the multi-core revolution has impelled software to become parallel. Therefore, there is a compelling need to protect parallel programs from hardware errors. Parallel programs' tasks have significant similarity in control data due to the use of high-level programming models. In this thesis, we propose BlockWatch to leverage the similarity in a parallel program's control data for detecting hardware errors. BlockWatch statically extracts the similarity among different threads of a parallel program and checks the similarity at runtime. We evaluate BlockWatch on eight SPLASH-2 benchmarks to measure its performance overhead and error detection coverage. We find that BlockWatch incurs an average overhead of 15% across all programs, and provides an average SDC coverage of 97% for faults in the control data.
9. Zhu, Yingchun. "Optimizing parallel programs with dynamic data structures." Thesis, McGill University, 2000. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=36745.
Abstract: Distributed memory parallel architectures support a memory model where some memory accesses are local, and thus inexpensive, while other memory accesses are remote, and potentially quite expensive. In order to achieve efficiency on such architectures, we need to reduce remote accesses. This is particularly challenging for applications that use dynamic data structures.
In this thesis, I present two compiler techniques to reduce the overhead of remote memory accesses for dynamic data structure based applications: locality techniques and communication optimizations. Locality techniques include a static locality analysis, which statically estimates when an indirect reference via a pointer can be safely assumed to be a local access, and dynamic locality checks, which consist of runtime tests to identify local accesses. Communication techniques include: (1) code movement to issue remote reads earlier and writes later; (2) code transformations to replace repeated/redundant remote accesses with one access; and (3) transformations to block or pipeline a group of remote requests together. Both locality and communication techniques have been implemented and incorporated into our EARTH-McCAT compiler framework, and a series of experiments have been conducted to evaluate these techniques. The experimental results show that we are able to achieve up to 26% performance improvement with each technique alone, and up to 29% performance improvement when both techniques are applied together.
10. Grove, Duncan A. "Performance modelling of message-passing parallel programs." Title page, contents and abstract only, 2003. http://web4.library.adelaide.edu.au/theses/09PH/09phg8832.pdf.
Abstract: This dissertation describes a new performance modelling system, called the Performance Evaluating Virtual Parallel Machine (PEVPM). It uses a novel bottom-up approach, where submodels of individual computation and communication events are dynamically constructed from data-dependencies, current contention levels and the performance distributions of low-level operations, which define performance variability in the face of contention.
More sources

Books on the topic "Parallel programs"

1. Synchronization of parallel programs. Cambridge, Mass.: MIT Press, 1985.
2. Synchronization of parallel programs. Oxford: North Oxford Academic, 1985.
3. Tomas, Gerald. Visualization of scientific parallel programs. Berlin: Springer-Verlag, 1994.
4. Pelagatti, Susanna. Structured development of parallel programs. London: Taylor & Francis, 1998.
5. Parallel execution of logic programs. Boston: Kluwer Academic Publishers, 1987.
6. Conery, John S. Parallel execution of logic programs. Boston, Mass.: Kluwer, 1987.
8. Wong, Pak Seng. Parallel evaluation of functional programs. Manchester: University of Manchester, 1993.
9. Conery, John S. Parallel Execution of Logic Programs. Boston, MA: Springer US, 1987.
10. Cok, Ronald S. Parallel programs for the transputer. Englewood Cliffs, N.J.: Prentice Hall, 1991.
More sources

Book chapters on the topic "Parallel programs"

1. Apt, Krzysztof R., and Ernst-Rüdiger Olderog. "Disjoint Parallel Programs." In Verification of Sequential and Concurrent Programs, 101–24. New York, NY: Springer New York, 1997. http://dx.doi.org/10.1007/978-1-4757-2714-2_4.
2. Apt, Krzysztof R., and Ernst-Rüdiger Olderog. "Disjoint Parallel Programs." In Verification of Sequential and Concurrent Programs, 179–206. New York, NY: Springer New York, 1991. http://dx.doi.org/10.1007/978-1-4757-4376-0_5.
3. Korsloot, Mark, and Evan Tick. "Sequentializing Parallel Programs." In Declarative Programming, Sasbachwalden 1991, 310–24. London: Springer London, 1992. http://dx.doi.org/10.1007/978-1-4471-3794-8_20.
4. Rauber, Thomas, and Gudula Rünger. "Performance Analysis of Parallel Programs." In Parallel Programming, 151–96. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-04818-0_4.
5. Rauber, Thomas, and Gudula Rünger. "Performance Analysis of Parallel Programs." In Parallel Programming, 169–226. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-37801-0_4.
6. Prakash, Sundeep, and Rajive Bagrodia. "Parallel simulation of data parallel programs." In Languages and Compilers for Parallel Computing, 239–53. Berlin, Heidelberg: Springer Berlin Heidelberg, 1996. http://dx.doi.org/10.1007/bfb0014203.
7. Julliand, Jacques, and Guy-René Perrin. "Asynchronous functional parallel programs." In Advances in Computing and Information — ICCI '90, 356–65. Berlin, Heidelberg: Springer Berlin Heidelberg, 1990. http://dx.doi.org/10.1007/3-540-53504-7_93.
8. Apt, Krzysztof R., and Ernst-Rüdiger Olderog. "Parallel Programs with Synchronization." In Verification of Sequential and Concurrent Programs, 169–211. New York, NY: Springer New York, 1997. http://dx.doi.org/10.1007/978-1-4757-2714-2_6.
9. Voss, Michael, and Rudolf Eigenmann. "Dynamically adaptive parallel programs." In Lecture Notes in Computer Science, 109–20. Berlin, Heidelberg: Springer Berlin Heidelberg, 1999. http://dx.doi.org/10.1007/bfb0094915.
10. Gittins, Martin. "Debugging parallel Strand Programs." In Parallel Execution of Logic Programs, 1–16. Berlin, Heidelberg: Springer Berlin Heidelberg, 1991. http://dx.doi.org/10.1007/3-540-55038-0_1.

Conference papers on the topic "Parallel programs"

1. Rubin, Robert, Larry Rudolph, and Dror Zernik. "Debugging parallel programs in parallel." In the 1988 ACM SIGPLAN and SIGOPS workshop. New York, New York, USA: ACM Press, 1988. http://dx.doi.org/10.1145/68210.69236.
2. Phillips, Joel, Kurt Keutzer, and Michael Wrinn. "Architecting parallel programs." In 2008 IEEE/ACM International Conference on Computer-Aided Design (ICCAD). IEEE, 2008. http://dx.doi.org/10.1109/iccad.2008.4681535.
3. Castañeda Lozano, Roberto, Murray Cole, and Björn Franke. "Parallelizing Parallel Programs." In PACT '20: International Conference on Parallel Architectures and Compilation Techniques. New York, NY, USA: ACM, 2020. http://dx.doi.org/10.1145/3410463.3414663.
4. Schwartz-Narbonne, Daniel, Feng Liu, Tarun Pondicherry, David August, and Sharad Malik. "Parallel assertions for debugging parallel programs." In 2011 9th IEEE/ACM International Conference on Formal Methods and Models for Codesign (MEMOCODE 2011). IEEE, 2011. http://dx.doi.org/10.1109/memcod.2011.5970525.
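Several entries in this section concern runtime assertions over the shared state of parallel programs. As a deliberately simple sketch of that general idea (plain Python threading with a lock-protected invariant check; this is only a toy illustration, not the assertion mechanism proposed in the cited paper):

```python
import threading

counter = 0
lock = threading.Lock()

def worker(increments: int) -> None:
    """Increment the shared counter, checking an invariant on each update."""
    global counter
    for _ in range(increments):
        with lock:
            before = counter
            counter += 1
            # A runtime assertion over shared state: inside the critical
            # section, the update must be visible and increase by exactly one.
            assert counter == before + 1

threads = [threading.Thread(target=worker, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 4 threads x 1000 increments each
```

Without the lock, the invariant (and the final count) can fail nondeterministically, which is exactly the class of bug such assertion-based debugging aims to expose.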
5. Margerm, Steven, Amirali Sharifian, Apala Guha, Arrvindh Shriraman, and Gilles Pokam. "TAPAS: Generating Parallel Accelerators from Parallel Programs." In 2018 51st Annual IEEE/ACM International Symposium on Microarchitecture (MICRO). IEEE, 2018. http://dx.doi.org/10.1109/micro.2018.00028.
6. Sridharan, Srinath, Gagan Gupta, and Gurindar S. Sohi. "Adaptive, efficient, parallel execution of parallel programs." In PLDI '14: ACM SIGPLAN Conference on Programming Language Design and Implementation. New York, NY, USA: ACM, 2014. http://dx.doi.org/10.1145/2594291.2594292.
7. Francioni, Joan M., Larry Albright, and Jay Alan Jackson. "Debugging parallel programs using sound." In the 1991 ACM/ONR workshop. New York, New York, USA: ACM Press, 1991. http://dx.doi.org/10.1145/122759.122765.
8. Jackson, J. A., and J. M. Francioni. "Aural signatures of parallel programs." In Proceedings of the Twenty-Fifth Hawaii International Conference on System Sciences. IEEE, 1992. http://dx.doi.org/10.1109/hicss.1992.183294.
9. Heirman, Wim, Joni Dambre, Dirk Stroobandt, and Jan Van Campenhout. "Rent's rule and parallel programs." In the tenth international workshop. New York, New York, USA: ACM Press, 2008. http://dx.doi.org/10.1145/1353610.1353628.
10. Perumalla, Kalyan S., and Alfred J. Park. "Simulating billion-task parallel programs." In 2014 International Symposium on Performance Evaluation of Computer and Telecommunication Systems (SPECTS). IEEE, 2014. http://dx.doi.org/10.1109/spects.2014.6879997.

Reports of organizations on the topic "Parallel programs"

1. Foster, I. Information hiding in parallel programs. Office of Scientific and Technical Information (OSTI), January 1992. http://dx.doi.org/10.2172/10133018.
2. Foster, I. Language constructs for modular parallel programs. Office of Scientific and Technical Information (OSTI), March 1996. http://dx.doi.org/10.2172/204015.
3. Socha, David, Mary L. Bailey, and David Notkin. Voyeur: Graphical Views of Parallel Programs. Fort Belvoir, VA: Defense Technical Information Center, April 1988. http://dx.doi.org/10.21236/ada197103.
4. Entriken, R. The parallel decomposition of linear programs. Office of Scientific and Technical Information (OSTI), November 1989. http://dx.doi.org/10.2172/5291579.
5. Ho, James K., Tak C. Lee, and R. P. Sundarraj. Decomposition of Linear Programs Using Parallel Computation. Fort Belvoir, VA: Defense Technical Information Center, December 1988. http://dx.doi.org/10.21236/ada203214.
6. Downey, Allen B. A Model for Speedup of Parallel Programs. Fort Belvoir, VA: Defense Technical Information Center, January 1997. http://dx.doi.org/10.21236/ada637068.
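Reports like the one above develop analytical speedup models for parallel programs. As a minimal illustration of the genre (this sketch uses the classical Amdahl bound, not the specific model developed in that report):

```python
def amdahl_speedup(serial_fraction: float, processors: int) -> float:
    """Upper bound on parallel speedup when a fixed fraction of the
    total work is inherently serial (Amdahl's law)."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / processors)

# Even a small serial fraction caps the achievable speedup.
print(amdahl_speedup(0.0, 8))             # perfectly parallel work: 8.0
print(round(amdahl_speedup(0.05, 16), 2)) # 5% serial work on 16 processors
```

With 5% serial work, 16 processors deliver only about a 9x speedup, which is why such models matter when sizing parallel systems.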
7. Kennedy, Ken, John Mellor-Crummey, Guohua Jin, Vikram Adve, and Robert J. Fowler. Compiling Scientific Programs for Scalable Parallel Systems. Fort Belvoir, VA: Defense Technical Information Center, February 2001. http://dx.doi.org/10.21236/ada387581.
8. Entriken, Robert. The Parallel Decomposition of Linear Programs. Fort Belvoir, VA: Defense Technical Information Center, November 1989. http://dx.doi.org/10.21236/ada216100.
9. Poplawski, D. A. Synthetic models of distributed memory parallel programs. Office of Scientific and Technical Information (OSTI), September 1990. http://dx.doi.org/10.2172/6569514.
10. Entriken, Robert. A Parallel Decomposition Algorithm for Staircase Linear Programs. Fort Belvoir, VA: Defense Technical Information Center, December 1988. http://dx.doi.org/10.21236/ada204662.