Academic literature on the topic 'Parallel programs'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Parallel programs.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Parallel programs"

1. Rubin, Robert, Larry Rudolph, and Dror Zernik. "Debugging parallel programs in parallel." ACM SIGPLAN Notices 24, no. 1 (January 3, 1989): 216–25. http://dx.doi.org/10.1145/69215.69236.

2. Prakash, S., E. Deelman, and R. Bagrodia. "Asynchronous parallel simulation of parallel programs." IEEE Transactions on Software Engineering 26, no. 5 (May 2000): 385–400. http://dx.doi.org/10.1109/32.846297.

3. Sridharan, Srinath, Gagan Gupta, and Gurindar S. Sohi. "Adaptive, efficient, parallel execution of parallel programs." ACM SIGPLAN Notices 49, no. 6 (June 5, 2014): 169–80. http://dx.doi.org/10.1145/2666356.2594292.

4. Hoey, James, Irek Ulidowski, and Shoji Yuen. "Reversing Imperative Parallel Programs." Electronic Proceedings in Theoretical Computer Science 255 (August 31, 2017): 51–66. http://dx.doi.org/10.4204/eptcs.255.4.

5. Saman, MD Yazid, and David J. Evans. "Verification of parallel programs." International Journal of Computer Mathematics 56, no. 1-2 (January 1995): 23–37. http://dx.doi.org/10.1080/00207169508804385.

6. Albright, Larry, Jay Alan Jackson, and Joan Francioni. "Auralization of Parallel Programs." ACM SIGCHI Bulletin 23, no. 4 (October 1991): 86–87. http://dx.doi.org/10.1145/126729.1056083.

7. Psarris, Kleanthis. "Program analysis techniques for transforming programs for parallel execution." Parallel Computing 28, no. 3 (March 2002): 455–69. http://dx.doi.org/10.1016/s0167-8191(01)00132-6.

8. Martins, Francisco, Vasco Thudichum Vasconcelos, and Hans Hüttel. "Inferring Types for Parallel Programs." Electronic Proceedings in Theoretical Computer Science 246 (April 8, 2017): 28–36. http://dx.doi.org/10.4204/eptcs.246.6.

9. Aschieri, Federico, Agata Ciabattoni, and Francesco Antonio Genco. "Classical Proofs as Parallel Programs." Electronic Proceedings in Theoretical Computer Science 277 (September 7, 2018): 43–57. http://dx.doi.org/10.4204/eptcs.277.4.

10. Terekhov, Andrey N., Alexandr A. Golovan, and Mikhail A. Terekhov. "Parallel Programs in RuC Project." Computer Tools in Education, no. 2 (April 27, 2018): 25–30. http://dx.doi.org/10.32603/2071-2340-2018-2-25-30.

Dissertations / Theses on the topic "Parallel programs"

1. Smith, Edmund. "Parallel solution of linear programs." Thesis, University of Edinburgh, 2013. http://hdl.handle.net/1842/8833.

Abstract:
The factors limiting the performance of computer software periodically undergo sudden shifts, resulting from technological progress, and these shifts can have profound implications for the design of high performance codes. At the present time, the speed with which hardware can execute a single stream of instructions has reached a plateau. It is now the number of instruction streams that may be executed concurrently which underpins estimates of compute power, and with this change, a critical limitation on the performance of software has come to be the degree to which it can be parallelised. The research in this thesis is concerned with the means by which codes for linear programming may be adapted to this new hardware. For the most part, it is codes implementing the simplex method which will be discussed, though these have typically lower performance for single solves than those implementing interior point methods. However, the ability of the simplex method to rapidly re-solve a problem makes it at present indispensable as a subroutine for mixed integer programming. The long history of the simplex method as a practical technique, with applications in many industries and government, has led to such codes reaching a great level of sophistication. It would be unexpected in a research project such as this one to match the performance of top commercial codes with many years of development behind them. The simplex codes described in this thesis are, however, able to solve real problems of small to moderate size, rather than being confined to random or otherwise artificially generated instances. The remainder of this thesis is structured as follows. The rest of this chapter gives a brief overview of the essential elements of modern parallel hardware and of the linear programming problem. Both the simplex method and interior point methods are discussed, along with some of the key algorithmic enhancements required for such systems to solve real-world problems. Some background on the parallelisation of both types of code is given. The next chapter describes two standard simplex codes designed to exploit the current generation of hardware. i6 is a parallel standard simplex solver capable of being applied to a range of real problems, and showing exceptional performance for dense, square programs. i8 is also a parallel, standard simplex solver, but now implemented for graphics processing units (GPUs).
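
For context, the linear programs that such simplex codes solve can be written in textbook standard form (generic notation, not the thesis's own):

    \min_{x \in \mathbb{R}^n} c^{\top} x
    \quad \text{subject to} \quad Ax = b, \quad x \ge 0,

where A is an m-by-n constraint matrix. The simplex method walks between vertices of the feasible region these constraints define, and, broadly speaking, it is the per-iteration linear algebra of that walk (pricing, the ratio test, basis updates) that parallel codes such as i6 and i8 can distribute across cores or GPU threads.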

2. D'Paola, Oscar Naim. "Performance visualization of parallel programs." Thesis, University of Southampton, 1995. https://eprints.soton.ac.uk/365532/.

3. Busvine, David John. "Detecting parallel structures in functional programs." Thesis, Heriot-Watt University, 1993. http://hdl.handle.net/10399/1415.

4. Justo, George Roger Ribeiro. "Configuration-oriented development of parallel programs." Thesis, University of Kent, 1993. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.333965.

5. Mukherjee, Joy. "A Runtime Framework for Parallel Programs." Diss., Virginia Tech, 2006. http://hdl.handle.net/10919/28756.

Abstract:
This dissertation proposes the Weaves runtime framework for the execution of large scale parallel programs over lightweight intra-process threads. The goal of the Weaves framework is to help process-based legacy parallel programs exploit the scalability of threads without any modifications. The framework separates global variables used by identical, but independent, threads of legacy parallel programs without resorting to thread-based re-programming. At the same time, it also facilitates low-overhead collaboration among threads of a legacy parallel program through multi-granular selective sharing of global variables. Applications that follow the tenets of the Weaves framework can load multiple identical, but independent, copies of arbitrary object files within a single process. They can compose the runtime images of these object files in graph-like ways and run intra-process threads through them to realize various degrees of multi-granular selective sharing or separation of global variables among the threads. Using direct runtime control over the resolution of individual references to functions and variables, they can also manipulate program composition at fine granularities. Most importantly, the Weaves framework does not entail any modifications to either the source codes or the native codes of the object files. The framework is completely transparent. Results from experiments with a real-world process-based parallel application show that the framework can correctly execute a thousand parallel threads containing non-threadsafe global variables on a single machine - nearly twice as many as the traditional process-based approach can - without any code modifications. On increasing the number of machines, the application experiences super-linear speedup, which illustrates scalability. Results from another similar application, chosen from a different software area to emphasize the breadth of this research, show that the framework's facilities for low-overhead collaboration among parallel threads allows for significantly greater scales of achievable parallelism than technologies for inter-process collaboration allow. Ultimately, larger scales of parallelism enable more accurate software modeling of real-world parallel systems, such as computer networks and multi-physics natural phenomena.
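
A minimal sketch of the problem the framework addresses, assuming standard POSIX threads (illustrative only; the Weaves API itself is not shown in the abstract): a global that is private per-process state in a legacy design becomes shared, and races, once the same code is driven by intra-process threads.

    /* Illustrative only: a legacy global that is per-process state when
     * workers run as processes, but a data race when the same code is
     * driven by intra-process threads. Weaves, per the abstract, separates
     * such globals among threads without modifying this source. */
    #include <pthread.h>
    #include <stdio.h>

    static long items_done = 0;        /* safe per process, racy per thread */

    static void *legacy_worker(void *arg)
    {
        (void)arg;
        for (int i = 0; i < 100000; i++)
            items_done++;              /* unsynchronized increment */
        return NULL;
    }

    int main(void)
    {
        pthread_t t[4];
        for (int i = 0; i < 4; i++)
            pthread_create(&t[i], NULL, legacy_worker, NULL);
        for (int i = 0; i < 4; i++)
            pthread_join(t[i], NULL);
        /* Four processes would each report 100000; four threads sharing
         * this global typically report less, due to lost updates. */
        printf("items_done = %ld\n", items_done);
        return 0;
    }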

6. Hinz, Peter. "Visualizing the performance of parallel programs." Master's thesis, University of Cape Town, 1996. http://hdl.handle.net/11427/16141.

Abstract:
The performance analysis of parallel programs is a complex task, particularly if the program has to be efficient over a wide range of parallel machines. We have designed a performance analysis system called Chiron that uses scientific visualization techniques to guide and help the user in performance analysis activities. The aim of Chiron is to give users full control over what section of the data they want to investigate in detail. Chiron uses interactive three-dimensional graphics techniques to display large amounts of data in a compact and easy-to-understand way. The system assists in the tracking of performance bottlenecks by showing the data in 10 different views and allowing the user to interact with the data. In this thesis the design and implementation of Chiron are described, and its effectiveness is illustrated by means of three case studies.

7. Hayashi, Yasushi. "Shape-based cost analysis of skeletal parallel programs." Thesis, University of Edinburgh, 2001. http://hdl.handle.net/1842/14029.

Abstract:
This work presents an automatic cost-analysis system for an implicitly parallel skeletal programming language. Although deducing interesting dynamic characteristics of parallel programs (and in particular, run time) is well known to be an intractable problem in the general case, it can be alleviated by placing restrictions upon the programs which can be expressed. By combining two research threads, the “skeletal” and “shapely” paradigms which take this route, we produce a completely automated, computation and communication sensitive cost analysis system. This builds on earlier work in the area by quantifying communication as well as computation costs, with the former being derived for the Bulk Synchronous Parallel (BSP) model. We present details of our shapely skeletal language and its BSP implementation strategy together with an account of the analysis mechanism by which program behaviour information (such as shape and cost) is statically deduced. This information can be used at compile-time to optimise a BSP implementation and to analyse computation and communication costs. The analysis has been implemented in Haskell. We consider different algorithms expressed in our language for some example problems and illustrate each BSP implementation, contrasting the analysis of their efficiency by traditional, intuitive methods with that achieved by our cost calculator. The accuracy of cost predictions by our cost calculator against the run time of real parallel programs is tested experimentally. Previous shape-based cost analysis required all elements of a vector (our nestable bulk data structure) to have the same shape. We partially relax this strict requirement on data structure regularity by introducing new shape expressions in our analysis framework. We demonstrate that this allows us to achieve the first automated analysis of a complete derivation, the well known maximum segment sum algorithm of Skillicorn and Cai.
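
The BSP cost model mentioned above is standard and compact enough to state: a superstep in which process i performs w_i local operations and sends or receives at most h_i words is charged

    \max_i w_i \;+\; g \cdot \max_i h_i \;+\; l,

where g is the machine's per-word communication cost and l its barrier synchronisation latency. Summing these charges over supersteps yields the kind of closed-form, shape-dependent prediction that the thesis's cost calculator compares against measured run times.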

8. Wei, Jiesheng. "Hardware error detection in multicore parallel programs." Thesis, University of British Columbia, 2012. http://hdl.handle.net/2429/42961.

Abstract:
The scaling of silicon devices has exacerbated the unreliability of modern computer systems, and power constraints have necessitated the involvement of software in hardware error detection. Simultaneously, the multi-core revolution has impelled software to become parallel. Therefore, there is a compelling need to protect parallel programs from hardware errors. Parallel programs' tasks have significant similarity in control data due to the use of high-level programming models. In this thesis, we propose BlockWatch to leverage the similarity in parallel programs' control data for detecting hardware errors. BlockWatch statically extracts the similarity among different threads of a parallel program and checks the similarity at runtime. We evaluate BlockWatch on eight SPLASH-2 benchmarks to measure its performance overhead and error detection coverage. We find that BlockWatch incurs an average overhead of 15% across all programs, and provides an average SDC (silent data corruption) coverage of 97% for faults in the control data.
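
A minimal sketch of the invariant such a detector exploits, in plain pthreads (illustrative only, not the BlockWatch implementation): threads of an SPMD program that evaluate the same branch condition on the same shared control data should agree, so a lone dissenting outcome signals a likely hardware fault.

    /* Illustrative only: SPMD threads vote on a shared branch condition;
     * a dissenting vote indicates likely corruption of control data.
     * Not the BlockWatch implementation, which derives its checks from
     * static analysis of the program's control data. */
    #include <pthread.h>
    #include <stdio.h>

    #define NTHREADS 4
    static int shared_limit = 1000;    /* control data shared by all threads */
    static int votes[NTHREADS];        /* each thread's branch outcome       */
    static pthread_barrier_t barrier;

    static void *worker(void *arg)
    {
        int id = (int)(size_t)arg;
        votes[id] = (shared_limit > 512);  /* same predicate in every thread */
        pthread_barrier_wait(&barrier);
        if (id == 0) {                     /* one thread compares the votes  */
            int sum = 0;
            for (int i = 0; i < NTHREADS; i++)
                sum += votes[i];
            if (sum != 0 && sum != NTHREADS)
                fprintf(stderr, "control-data divergence detected\n");
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t t[NTHREADS];
        pthread_barrier_init(&barrier, NULL, NTHREADS);
        for (size_t i = 0; i < NTHREADS; i++)
            pthread_create(&t[i], NULL, worker, (void *)i);
        for (int i = 0; i < NTHREADS; i++)
            pthread_join(t[i], NULL);
        pthread_barrier_destroy(&barrier);
        return 0;
    }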

9. Zhu, Yingchun. "Optimizing parallel programs with dynamic data structures." Thesis, McGill University, 2000. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=36745.

Abstract:
Distributed memory parallel architectures support a memory model where some memory accesses are local, and thus inexpensive, while other memory accesses are remote, and potentially quite expensive. In order to achieve efficiency on such architectures, we need to reduce remote accesses. This is particularly challenging for applications that use dynamic data structures.
In this thesis, I present two compiler techniques to reduce the overhead of remote memory accesses for dynamic data structure based applications: locality techniques and communication optimizations. Locality techniques include a static locality analysis, which statically estimates when an indirect reference via a pointer can be safely assumed to be a local access, and dynamic locality checks, which consists of runtime tests to identify local accesses. Communication techniques include: (1) code movement to issue remote reads earlier and writes later; (2) code transformations to replace repeated/redundant remote accesses with one access; and (3) transformations to block or pipeline a group of remote requests together. Both locality and communication techniques have been implemented and incorporated into our EARTH-McCAT compiler framework, and a series of experiments have been conducted to evaluate these techniques. The experimental results show that we are able to achieve up to 26% performance improvement with each technique alone, and up to 29% performance improvement when both techniques are applied together.
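
A small sketch of the code-movement idea, with a helper thread standing in for a remote memory access (remote_get_async and remote_wait are hypothetical names, not the EARTH-McCAT API): the remote read is issued early, independent local work proceeds in the meantime, and the value is claimed only at its point of use.

    /* Illustrative split-phase remote read: issue early, overlap with
     * independent work, complete at the point of use. The "remote" access
     * is simulated with a thread; the primitives are hypothetical, not
     * the EARTH-McCAT runtime's API. */
    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    typedef struct { pthread_t tid; long *src; long value; } handle_t;

    static void *fetch(void *arg)
    {
        handle_t *h = arg;
        usleep(1000);                    /* stand-in for remote latency */
        h->value = *h->src;
        return NULL;
    }

    static void remote_get_async(handle_t *h, long *src)  /* hypothetical */
    {
        h->src = src;
        pthread_create(&h->tid, NULL, fetch, h);
    }

    static long remote_wait(handle_t *h)                  /* hypothetical */
    {
        pthread_join(h->tid, NULL);
        return h->value;
    }

    int main(void)
    {
        long remote_cell = 42;
        handle_t h;
        remote_get_async(&h, &remote_cell);  /* read issued early           */
        long local = 0;
        for (int i = 1; i <= 1000; i++)      /* overlapped independent work */
            local += i;
        long fetched = remote_wait(&h);      /* completed at point of use   */
        printf("local=%ld fetched=%ld\n", local, fetched);
        return 0;
    }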

10. Grove, Duncan A. "Performance modelling of message-passing parallel programs." Thesis, University of Adelaide, 2003. http://web4.library.adelaide.edu.au/theses/09PH/09phg8832.pdf.

Abstract:
This dissertation describes a new performance modelling system, called the Performance Evaluating Virtual Parallel Machine (PEVPM). It uses a novel bottom-up approach, where submodels of individual computation and communication events are dynamically constructed from data-dependencies, current contention levels and the performance distributions of low-level operations, which define performance variability in the face of contention.

Books on the topic "Parallel programs"

1. Synchronization of parallel programs. Cambridge, Mass.: MIT Press, 1985.

2. Synchronization of parallel programs. Oxford: North Oxford Academic, 1985.

3. Tomas, Gerald. Visualization of scientific parallel programs. Berlin: Springer-Verlag, 1994.

4. Pelagatti, Susanna. Structured development of parallel programs. London: Taylor & Francis, 1998.

5. Parallel execution of logic programs. Boston: Kluwer Academic Publishers, 1987.

6. Conery, John S. Parallel execution of logic programs. Boston, Mass.: Kluwer, 1987.

7. Wong, Pak Seng. Parallel evaluation of functional programs. Manchester: University of Manchester, 1993.

8. Conery, John S. Parallel Execution of Logic Programs. Boston, MA: Springer US, 1987.

9. Cok, Ronald S. Parallel programs for the transputer. Englewood Cliffs, N.J.: Prentice Hall, 1991.

Book chapters on the topic "Parallel programs"

1. Apt, Krzysztof R., and Ernst-Rüdiger Olderog. "Disjoint Parallel Programs." In Verification of Sequential and Concurrent Programs, 101–24. New York, NY: Springer New York, 1997. http://dx.doi.org/10.1007/978-1-4757-2714-2_4.

2. Apt, Krzysztof R., and Ernst-Rüdiger Olderog. "Disjoint Parallel Programs." In Verification of Sequential and Concurrent Programs, 179–206. New York, NY: Springer New York, 1991. http://dx.doi.org/10.1007/978-1-4757-4376-0_5.

3. Korsloot, Mark, and Evan Tick. "Sequentializing Parallel Programs." In Declarative Programming, Sasbachwalden 1991, 310–24. London: Springer London, 1992. http://dx.doi.org/10.1007/978-1-4471-3794-8_20.

4. Rauber, Thomas, and Gudula Rünger. "Performance Analysis of Parallel Programs." In Parallel Programming, 151–96. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-04818-0_4.

5. Rauber, Thomas, and Gudula Rünger. "Performance Analysis of Parallel Programs." In Parallel Programming, 169–226. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-37801-0_4.

6. Prakash, Sundeep, and Rajive Bagrodia. "Parallel simulation of data parallel programs." In Languages and Compilers for Parallel Computing, 239–53. Berlin, Heidelberg: Springer Berlin Heidelberg, 1996. http://dx.doi.org/10.1007/bfb0014203.

7. Julliand, Jacques, and Guy-René Perrin. "Asynchronous functional parallel programs." In Advances in Computing and Information — ICCI '90, 356–65. Berlin, Heidelberg: Springer Berlin Heidelberg, 1990. http://dx.doi.org/10.1007/3-540-53504-7_93.

8. Apt, Krzysztof R., and Ernst-Rüdiger Olderog. "Parallel Programs with Synchronization." In Verification of Sequential and Concurrent Programs, 169–211. New York, NY: Springer New York, 1997. http://dx.doi.org/10.1007/978-1-4757-2714-2_6.

9. Voss, Michael, and Rudolf Eigenmann. "Dynamically adaptive parallel programs." In Lecture Notes in Computer Science, 109–20. Berlin, Heidelberg: Springer Berlin Heidelberg, 1999. http://dx.doi.org/10.1007/bfb0094915.

10. Gittins, Martin. "Debugging parallel Strand Programs." In Parallel Execution of Logic Programs, 1–16. Berlin, Heidelberg: Springer Berlin Heidelberg, 1991. http://dx.doi.org/10.1007/3-540-55038-0_1.

Conference papers on the topic "Parallel programs"

1. Rubin, Robert, Larry Rudolph, and Dror Zernik. "Debugging parallel programs in parallel." In the 1988 ACM SIGPLAN and SIGOPS workshop. New York, New York, USA: ACM Press, 1988. http://dx.doi.org/10.1145/68210.69236.

2. Phillips, Joel, Kurt Keutzer, and Michael Wrinn. "Architecting parallel programs." In 2008 IEEE/ACM International Conference on Computer-Aided Design (ICCAD). IEEE, 2008. http://dx.doi.org/10.1109/iccad.2008.4681535.

3. Castañeda Lozano, Roberto, Murray Cole, and Björn Franke. "Parallelizing Parallel Programs." In PACT '20: International Conference on Parallel Architectures and Compilation Techniques. New York, NY, USA: ACM, 2020. http://dx.doi.org/10.1145/3410463.3414663.

4. Schwartz-Narbonne, Daniel, Feng Liu, Tarun Pondicherry, David August, and Sharad Malik. "Parallel assertions for debugging parallel programs." In 2011 9th IEEE/ACM International Conference on Formal Methods and Models for Codesign (MEMOCODE 2011). IEEE, 2011. http://dx.doi.org/10.1109/memcod.2011.5970525.

5. Margerm, Steven, Amirali Sharifian, Apala Guha, Arrvindh Shriraman, and Gilles Pokam. "TAPAS: Generating Parallel Accelerators from Parallel Programs." In 2018 51st Annual IEEE/ACM International Symposium on Microarchitecture (MICRO). IEEE, 2018. http://dx.doi.org/10.1109/micro.2018.00028.

6. Sridharan, Srinath, Gagan Gupta, and Gurindar S. Sohi. "Adaptive, efficient, parallel execution of parallel programs." In PLDI '14: ACM SIGPLAN Conference on Programming Language Design and Implementation. New York, NY, USA: ACM, 2014. http://dx.doi.org/10.1145/2594291.2594292.

7. Francioni, Joan M., Larry Albright, and Jay Alan Jackson. "Debugging parallel programs using sound." In the 1991 ACM/ONR workshop. New York, New York, USA: ACM Press, 1991. http://dx.doi.org/10.1145/122759.122765.

8. Jackson, J. A., and J. M. Francioni. "Aural signatures of parallel programs." In Proceedings of the Twenty-Fifth Hawaii International Conference on System Sciences. IEEE, 1992. http://dx.doi.org/10.1109/hicss.1992.183294.

9. Heirman, Wim, Joni Dambre, Dirk Stroobandt, and Jan Van Campenhout. "Rent's rule and parallel programs." In the tenth international workshop. New York, New York, USA: ACM Press, 2008. http://dx.doi.org/10.1145/1353610.1353628.

10. Perumalla, Kalyan S., and Alfred J. Park. "Simulating billion-task parallel programs." In 2014 International Symposium on Performance Evaluation of Computer and Telecommunication Systems (SPECTS). IEEE, 2014. http://dx.doi.org/10.1109/spects.2014.6879997.

Reports on the topic "Parallel programs"

1. Foster, I. Information hiding in parallel programs. Office of Scientific and Technical Information (OSTI), January 1992. http://dx.doi.org/10.2172/10133018.

2. Foster, I. Language constructs for modular parallel programs. Office of Scientific and Technical Information (OSTI), March 1996. http://dx.doi.org/10.2172/204015.

3. Socha, David, Mary L. Bailey, and David Notkin. Voyeur: Graphical Views of Parallel Programs. Fort Belvoir, VA: Defense Technical Information Center, April 1988. http://dx.doi.org/10.21236/ada197103.

4. Entriken, R. The parallel decomposition of linear programs. Office of Scientific and Technical Information (OSTI), November 1989. http://dx.doi.org/10.2172/5291579.

5. Ho, James K., Tak C. Lee, and R. P. Sundarraj. Decomposition of Linear Programs Using Parallel Computation. Fort Belvoir, VA: Defense Technical Information Center, December 1988. http://dx.doi.org/10.21236/ada203214.

6. Downey, Allen B. A Model for Speedup of Parallel Programs. Fort Belvoir, VA: Defense Technical Information Center, January 1997. http://dx.doi.org/10.21236/ada637068.

7. Kennedy, Ken, John Mellor-Crummey, Guohua Jin, Vikram Adve, and Robert J. Fowler. Compiling Scientific Programs for Scalable Parallel Systems. Fort Belvoir, VA: Defense Technical Information Center, February 2001. http://dx.doi.org/10.21236/ada387581.

8. Entriken, Robert. The Parallel Decomposition of Linear Programs. Fort Belvoir, VA: Defense Technical Information Center, November 1989. http://dx.doi.org/10.21236/ada216100.

9. Poplawski, D. A. Synthetic models of distributed memory parallel programs. Office of Scientific and Technical Information (OSTI), September 1990. http://dx.doi.org/10.2172/6569514.

10. Entriken, Robert. A Parallel Decomposition Algorithm for Staircase Linear Programs. Fort Belvoir, VA: Defense Technical Information Center, December 1988. http://dx.doi.org/10.21236/ada204662.