Scientific literature on the topic "Distribute and Parallel Computing"

Create a correct reference in the APA, MLA, Chicago, Harvard, and various other styles

Choose a source:

Consult the topical lists of journal articles, books, theses, conference proceedings, and other academic sources on the topic "Distribute and Parallel Computing".

Next to every source in the list of references there is an "Add to bibliography" button. Click on it, and we will automatically generate the bibliographic reference for the chosen source in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever this information is included in the metadata.

Journal articles on the topic "Distribute and Parallel Computing"

1

Anjum, Asma, and Asma Parveen. "Optimized load balancing mechanism in parallel computing for workflow in cloud computing environment." International Journal of Reconfigurable and Embedded Systems (IJRES) 12, no. 2 (2023): 276–86. https://doi.org/10.11591/ijres.v12.i2.pp276-286.

Full text
Abstract:
Cloud computing gives on-demand access to computing resources in a metered and dynamically adapted way; it empowers the client to access fast and flexible resources through virtualization and is widely adaptable for various applications. Further, to provide assurance of productive computation, task scheduling is very important in a cloud infrastructure environment. Moreover, the main aim of task execution is to reduce execution time and reserve infrastructure; further, considering huge applications, workflow scheduling has drawn fine attention in business as well as scientific…
2

Chang, Furong, Hao Guo, Farhan Ullah, Haochen Wang, Yue Zhao, and Haitian Zhang. "Near-Data Source Graph Partitioning." Electronics 13, no. 22 (2024): 4455. http://dx.doi.org/10.3390/electronics13224455.

Full text
Abstract:
Recently, numerous graph partitioning approaches have been proposed to distribute a big graph across the machines of a cluster for distributed computing. Due to heavy communication overhead, these graph partitioning approaches have always suffered from long ingress times. Heavy communication overhead not only limits the scalability of distributed graph-parallel computing platforms but also reduces the overall performance of clusters. To address this problem, this work proposes a near-data-source parallel graph partitioning approach, denoted NDGP. In NDGP, an edge is preferentially distributed…
3

Sakariya, Harsh Bipinbhai, and Ganesh D. "Taxonomy of Load Balancing Strategies in Distributed Systems." International Journal of Innovative Research in Computer and Communication Engineering 12, no. 03 (2024): 1796–802. http://dx.doi.org/10.15680/ijircce.2024.1203070.

Full text
Abstract:
Large-scale parallel and distributed computing systems are becoming more popular as a result of falling hardware prices and improvements in computer networking technologies. Improved performance and resource sharing are potential benefits of distributed computing systems. We have provided a summary of distributed computing in this essay: the differences between parallel and distributed computing, terms related to distributed computing, task distribution in distributed computing, performance metrics in distributed computing systems, parallel distributed algorithm models, benefits of distributed…
4

Nanuru Yagamurthy, Deepak, and Rajesh Azmeera. "Advances and Challenges in Parallel and Distributed Computing." International Journal of Science and Research (IJSR) 8, no. 1 (2019): 2262–66. http://dx.doi.org/10.21275/sr24517152409.

Full text
5

Anjum, Asma, and Asma Parveen. "Optimized load balancing mechanism in parallel computing for workflow in cloud computing environment." International Journal of Reconfigurable and Embedded Systems (IJRES) 12, no. 2 (2023): 276. http://dx.doi.org/10.11591/ijres.v12.i2.pp276-286.

Full text
Abstract:
Cloud computing gives on-demand access to computing resources in a metered and dynamically adapted way; it empowers the client to access fast and flexible resources through virtualization and is widely adaptable for various applications. Further, to provide assurance of productive computation, task scheduling is very important in a cloud infrastructure environment. Moreover, the main aim of task execution is to reduce execution time and reserve infrastructure; further, considering huge applications, workflow scheduling has drawn fine attention in business as well as scientific…
6

Sun, Qi, and Hui Yan Zhao. "Design of Distribute Monitoring Platform Base on Cloud Computing." Applied Mechanics and Materials 687-691 (November 2014): 1076–79. http://dx.doi.org/10.4028/www.scientific.net/amm.687-691.1076.

Full text
Abstract:
Compared to traditional measurement infrastructure, a distributed network measurement system based on cloud computing stores its massive measurement data in a large virtual resource pool, ensuring the reliability and scalability of data storage, and reuses the cloud platform's parallel processing mechanism for fast, concurrent analytical processing and data mining of the mass measurement data. The measuring probe supports a variety of different measurement algorithms and data acquisition formats; the measurement method provides a congestion…
7

Umar, A. "Distributed And Parallel Computing." IEEE Concurrency 6, no. 4 (1998): 80–81. http://dx.doi.org/10.1109/mcc.1998.736439.

Full text
8

Ramsay, A. "Distributed versus parallel computing." Artificial Intelligence Review 1, no. 1 (1986): 11–25. http://dx.doi.org/10.1007/bf01988525.

Full text
9

Wismüller, Roland. "Parallel and distributed computing." Software Focus 2, no. 3 (2001): 124. http://dx.doi.org/10.1002/swf.44.

Full text
10

Sewaiwar, Aanchal, and Utkarsh Sharma. "Grid scheduling: Comparative study of MACO & TABU search." COMPUSOFT: An International Journal of Advanced Computer Technology 03, no. 06 (2014): 825–30. https://doi.org/10.5281/zenodo.14742548.

Full text
Abstract:
Grid computing is progressively considered a next-generation computational platform that supports wide-area parallel and distributed computing. Scheduling jobs to resources in grid computing is difficult due to the distributed and heterogeneous nature of the resources. Finding optimal schedules for such an environment is (in general) an NP-hard problem, so heuristic techniques must be used. The aim of grid task scheduling is to achieve high system throughput and to distribute various computing resources to applications. Many different algorithms have been proposed to solve…
More sources

Theses on the topic "Distribute and Parallel Computing"

1

Xu, Lei. "Cellular distributed and parallel computing." Thesis, University of Oxford, 2014. http://ora.ox.ac.uk/objects/uuid:88ffe124-c2fd-4144-86fe-47b35f4908bd.

Full text
Abstract:
This thesis focuses on novel approaches to distributed and parallel computing that are inspired by the mechanism and functioning of biological cells. We refer to this concept as cellular distributed and parallel computing, which focuses on three important principles: simplicity, parallelism, and locality. We first give a parallel polynomial-time solution to the constraint satisfaction problem (CSP) based on a theoretical model of cellular distributed and parallel computing known as neural-like P systems (or neural-like membrane systems). We then design a class of simple neural-like P systems…
2

Xiang, Yonghong. "Interconnection networks for parallel and distributed computing." Thesis, Durham University, 2008. http://etheses.dur.ac.uk/2156/.

Full text
Abstract:
Parallel computers are generally either shared-memory machines or distributed-memory machines. There are currently technological limitations on shared-memory architectures, and so parallel computers utilizing a large number of processors tend to be distributed-memory machines. We are concerned solely with distributed-memory multiprocessors. In such machines, the dominant factor inhibiting faster global computations is inter-processor communication. Communication is dependent upon the topology of the interconnection network, the routing mechanism, the flow control policy, and the method of switching…
3

Kim, Young Man. "Some problems in parallel and distributed computing." The Ohio State University, 1992. http://rave.ohiolink.edu/etdc/view?acc_num=osu1487776210795651.

Full text
4

Freeh, Vincent William. "Software support for distributed and parallel computing." Diss., The University of Arizona, 1996. http://hdl.handle.net/10150/290588.

Full text
Abstract:
This dissertation addresses creating portable and efficient parallel programs for scientific computing. Both of these aspects are important. Portability means the program can execute on any parallel machine. Efficiency means there is little or no penalty for using our solution instead of hand-coded, architecture-specific programs. Although parallel programming is necessarily more difficult than sequential programming, it is currently more complicated than it has to be. The Filaments package provides fine-grain parallelism and a shared-memory programming model. It can be viewed as a "least comm…
5

Jin, Xiaoming. "A practical realization of parallel disks for a distributed parallel computing system." [Gainesville, Fla.] : University of Florida, 2000. http://etd.fcla.edu/etd/uf/2000/ane5954/master.PDF.

Full text
Abstract:
Thesis (M.S.)--University of Florida, 2000. Title from first page of PDF file. Document formatted into pages; contains ix, 41 p.; also contains graphics. Vita. Includes bibliographical references (p. 39-40).
6

馬家駒 and Ka-kui Ma. "Transparent process migration for parallel Java computing." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2001. http://hub.hku.hk/bib/B31226474.

Full text
7

Ma, Ka-kui. "Transparent process migration for parallel Java computing." Hong Kong: University of Hong Kong, 2001. http://sunzi.lib.hku.hk/hkuto/record.jsp?B23589371.

Full text
8

Dutta, Sourav. "PERFORMANCE ESTIMATION AND SCHEDULING FOR PARALLEL PROGRAMS WITH CRITICAL SECTIONS." OpenSIUC, 2017. https://opensiuc.lib.siu.edu/dissertations/1353.

Full text
Abstract:
A fundamental problem in multithreaded parallel programs is the partial serialization that is imposed by the presence of mutual exclusion variables or critical sections. In this work we investigate a model in which the threads consist of an equal number L of functional blocks, where each functional block has the same duration and either accesses a critical section or executes non-critical code. We derive formulas to estimate the average time spent in a critical section in the presence of a synchronization barrier and in the absence of one. We also develop and establish the optimality of a f…
9

Winter, Stephen Charles. "A distributed reduction architecture for real-time computing." Thesis, University of Westminster, 1990. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.238722.

Full text
10

Valente, Fredy Joao. "An integrated parallel/distributed environment for high performance computing." Thesis, University of Southampton, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.362138.

Full text
More sources

Books on the topic "Distribute and Parallel Computing"

1

Hobbs, Michael, Andrzej M. Goscinski, and Wanlei Zhou, eds. Distributed and Parallel Computing. Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/11564621.

Full text
2

Nagamalai, Dhinaharan, Eric Renault, and Murugan Dhanuskodi, eds. Advances in Parallel Distributed Computing. Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-24037-9.

Full text
3

Pan, Yi, and Laurence Tianruo Yang, eds. Applied parallel and distributed computing. Nova Science Publishers, 2005.

Find full text
4

Zomaya, Albert Y., ed. Parallel and distributed computing handbook. McGraw-Hill, 1996.

Find full text
5

Özgüner, Füsun, and Fikret Erçal, eds. Parallel Computing on Distributed Memory Multiprocessors. Springer Berlin Heidelberg, 1993. http://dx.doi.org/10.1007/978-3-642-58066-6.

Full text
6

Qi, Luo, ed. Parallel and Distributed Computing and Networks. Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-22706-6.

Full text
7

Prasad, Sushil K., Anshul Gupta, Arnold Rosenberg, Alan Sussman, and Charles Weems, eds. Topics in Parallel and Distributed Computing. Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-93109-8.

Full text
8

Özgüner, Füsun, Fikret Erçal, North Atlantic Treaty Organization Scientific Affairs Division, and NATO Advanced Study Institute on Parallel Computing on Distributed Memory Multiprocessors (1991: Bilkent University), eds. Parallel computing on distributed memory multiprocessors. Springer-Verlag, 1993.

Find full text
9

Lee, Roger, ed. Networking and Parallel/Distributed Computing Systems. Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-53274-0.

Full text
10

Topping, B. H. V., and P. Iványi, eds. Parallel, Distributed and Grid Computing for Engineering. Saxe-Coburg Publications, 2009. http://dx.doi.org/10.4203/csets.21.

Full text
More sources

Book chapters on the topic "Distribute and Parallel Computing"

1

Diehl, Patrick, Steven R. Brandt, and Hartmut Kaiser. "Distributed Computing and Programming." In Parallel C++. Springer International Publishing, 2024. http://dx.doi.org/10.1007/978-3-031-54369-2_13.

Full text
2

Torres, Jordi, Eduard Ayguadé, Jesús Labarta, and Mateo Valero. "Align and distribute-based linear loop transformations." In Languages and Compilers for Parallel Computing. Springer Berlin Heidelberg, 1994. http://dx.doi.org/10.1007/3-540-57659-2_19.

Full text
3

Fahringer, Thomas. "Tools for Parallel and Distributed Computing." In Parallel Computing. Springer London, 2009. http://dx.doi.org/10.1007/978-1-84882-409-6_3.

Full text
4

Erciyes, K. "Parallel and Distributed Computing." In Computational Biology. Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-24966-7_4.

Full text
5

Sharker, Monir H., and Hassan A. Karimi. "Distributed and Parallel Computing." In Big Data, 2nd ed. CRC Press, 2024. http://dx.doi.org/10.1201/9781003406969-1.

Full text
6

Hariri, S., and M. Parashar. "Parallel and Distributed Computing." In Tools and Environments for Parallel and Distributed Computing. John Wiley & Sons, Inc., 2004. http://dx.doi.org/10.1002/0471474835.ch1.

Full text
7

Kim, Dongmin, and Salim Hariri. "Parallel and Distributed Computing Environment." In Virtual Computing. Springer US, 2001. http://dx.doi.org/10.1007/978-1-4615-1553-1_2.

Full text
8

Falsafi, Babak, Samuel Midkiff, Jack B. Dennis, et al. "Distributed Computer." In Encyclopedia of Parallel Computing. Springer US, 2011. http://dx.doi.org/10.1007/978-0-387-09766-4_2274.

Full text
9

Eberle, Hans. "Switcherland — A scalable interconnection structure for distributed computing." In Parallel Computation. Springer Berlin Heidelberg, 1996. http://dx.doi.org/10.1007/3-540-61695-0_4.

Full text
10

Chandrasekaran, Ishwarya. "Mobile Computing with Cloud." In Advances in Parallel Distributed Computing. Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-24037-9_51.

Full text

Conference papers on the topic "Distribute and Parallel Computing"

1

Kienzle, P. A., N. Patel, and M. McKerns. "Parallel Kernels: An Architecture for Distributed Parallel Computing." In Python in Science Conference. SciPy, 2009. http://dx.doi.org/10.25080/fcfp7555.

Full text
Abstract:
Global optimization problems can involve huge computational resources. The need to prepare, schedule, and monitor hundreds of runs and to interactively explore and analyze data is a challenging problem. Managing such a complex computational environment requires a sophisticated software framework which can distribute the computation to remote nodes while hiding the complexity of the communication, so that scientists can concentrate on the details of the computation. We present PARK, the computational job management framework being developed as part of the DANSE project, which will offer a simple, efficient…
2

Garland, Michael. "Parallel computing with CUDA." In Distributed Processing (IPDPS). IEEE, 2010. http://dx.doi.org/10.1109/ipdps.2010.5470378.

Full text
3

Doolan, Daniel, Sabin Tabirca, and Laurence Yang. "Mobile Parallel Computing." In 2006 Fifth International Symposium on Parallel and Distributed Computing. IEEE, 2006. http://dx.doi.org/10.1109/ispdc.2006.33.

Full text
4

Tutsch, Dietmar. "Reconfigurable parallel computing." In 2010 1st International Conference on Parallel, Distributed and Grid Computing (PDGC 2010). IEEE, 2010. http://dx.doi.org/10.1109/pdgc.2010.5679961.

Full text
5

Crews, Thad. "Session details: Distributed/parallel computing." In SIGCSE04: Technical Symposium on Computer Science Education 2004. ACM, 2004. http://dx.doi.org/10.1145/3244218.

Full text
6

Rashid, Zryan Najat, Subhi R. M. Zebari, Karzan Hussein Sharif, and Karwan Jacksi. "Distributed Cloud Computing and Distributed Parallel Computing: A Review." In 2018 International Conference on Advanced Science and Engineering (ICOASE). IEEE, 2018. http://dx.doi.org/10.1109/icoase.2018.8548937.

Full text
7

Chou, Yu-Cheng, David Ko, and Harry H. Cheng. "Mobile Agent Based Autonomic Dynamic Parallel Computing." In ASME 2009 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. ASMEDC, 2009. http://dx.doi.org/10.1115/detc2009-87750.

Full text
Abstract:
Parallel computing is widely adopted in scientific and engineering applications to enhance efficiency. Moreover, there is increasing research interest in utilizing distributed networked computers for parallel computing. The Message Passing Interface (MPI) standard was designed to support portability and platform independence of a developed parallel program. However, the procedure to start an MPI-based parallel computation among distributed computers lacks autonomicity and flexibility. This article presents an autonomic dynamic parallel computing framework that provides autonomic…
8

Rakhimov, Mekhriddin, Shakhzod Javliev, and Rashid Nasimov. "Parallel Approaches in Deep Learning: Use Parallel Computing." In ICFNDS '23: The International Conference on Future Networks and Distributed Systems. ACM, 2023. http://dx.doi.org/10.1145/3644713.3644738.

Full text
9

Yu, Yuan, Pradeep Kumar Gunda, and Michael Isard. "Distributed aggregation for data-parallel computing." In the ACM SIGOPS 22nd symposium. ACM Press, 2009. http://dx.doi.org/10.1145/1629575.1629600.

Full text
10

"Session PD: Parallel & distributed computing." In 2014 9th International Conference on Computer Engineering & Systems (ICCES). IEEE, 2014. http://dx.doi.org/10.1109/icces.2014.7030947.

Full text

Organizational reports on the topic "Distribute and Parallel Computing"

1

Kaplansky, I., and Richard M. Karp. Parallel and Distributed Computing. Defense Technical Information Center, 1986. http://dx.doi.org/10.21236/ada182935.

Full text
2

Kaplansky, Irving, and Richard Karp. Parallel and Distributed Computing. Defense Technical Information Center, 1986. http://dx.doi.org/10.21236/ada176477.

Full text
3

Leighton, Tom. Parallel and Distributed Computing Combinatorial Algorithms. Defense Technical Information Center, 1993. http://dx.doi.org/10.21236/ada277333.

Full text
4

Nerode, Anil. Fellowship in Parallel and Distributed Computing. Defense Technical Information Center, 1990. http://dx.doi.org/10.21236/ada225926.

Full text
5

Sunderam, V. PVM (Parallel Virtual Machine): A framework for parallel distributed computing. Office of Scientific and Technical Information (OSTI), 1989. http://dx.doi.org/10.2172/5347567.

Full text
6

Chen, H., J. Hutchins, and J. Brandt. Evaluation of DEC's GIGAswitch for distributed parallel computing. Office of Scientific and Technical Information (OSTI), 1993. http://dx.doi.org/10.2172/10188486.

Full text
7

George, Alan D. Parallel and Distributed Computing Architectures and Algorithms for Fault-Tolerant Sonar Arrays. Defense Technical Information Center, 1999. http://dx.doi.org/10.21236/ada359698.

Full text
8

Hariri, Salim. International ACM Symposium on High Performance Parallel and Distributed Computing Conference for 2017, 2018, and 2019. Office of Scientific and Technical Information (OSTI), 2022. http://dx.doi.org/10.2172/1841180.

Full text
9

Smith, Bradley W. Distributed Computing for Signal Processing: Modeling of Asynchronous Parallel Computation. Appendix G. On the Design and Modeling of Special Purpose Parallel Processing Systems. Defense Technical Information Center, 1985. http://dx.doi.org/10.21236/ada167622.

Full text
10

Pratt, T. J., L. G. Martinez, M. O. Vahle, T. V. Archuleta, and V. K. Williams. Sandia's network for SC '97: Supporting visualization, distributed cluster computing, and production data networking with a wide area high performance parallel asynchronous transfer mode (ATM) network. Office of Scientific and Technical Information (OSTI), 1998. http://dx.doi.org/10.2172/658446.

Full text